AI Ethics #3: Privacy face masks, ML Fairness, Deepfakes, Franken-algorithms and more ...
Our third weekly edition covering research and news in the world of AI Ethics
Welcome to the third edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing summaries with you and presenting our thoughts on how they link with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us on: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below
The pandemic continues to wreak havoc on our lives. The best we can do is listen to the advice from our health agencies and maintain some sort of normalcy in our work as much as possible. Wishing us all the best in fighting this. And now onto the content …
Our contributions to the Office of Privacy Commissioner of Canada’s consultation
Our team spent many hours putting together technical and legal recommendations for the OPCC consultation. The report also includes feedback we gathered from our community, drawing on a diverse mix of backgrounds. Here’s a brief summary and a link to the entire report.
In February 2020, the Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments, both at a closed roundtable and in writing, on the OPCC consultation proposal for amendments relating to Artificial Intelligence (AI) to the Canadian privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA).
The present document includes MAIEI’s comments and recommendations in writing. In keeping with MAIEI’s mission and mandate to act as a catalyst for public feedback on AI ethics and regulatory technology developments, and to provide public competence-building workshops on critical topics in these domains, the reader will also find public feedback and propositions from Montrealers who participated in MAIEI’s workshops, submitted as Schedule 1 to the present report. For each of the OPCC’s 12 proposals and their underlying questions, as described on its website, MAIEI provides a short reply, a summary list of recommendations, and comments relevant to the question at hand.
We leave you with three general statements to keep in mind while going through the next pages:
1) AI systems should be used to augment human capacity for meaningful and purposeful connections and associations, not as a substitute for trust.
2) Humans have collectively accepted to uphold the rule of law, but for machines, the code is rule. Where socio-technical systems are deployed to make important decisions, profiles or inferences about individuals, we will increasingly have to attempt the difficult exercise of drafting and encoding our law in a manner learnable by machines.
3) Let us work collectively towards a world where Responsible AI becomes the rule, before our socio-technical systems become “too connected to fail”.
You can read the entire report here.
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
The Deepfake Detection Challenge: Insights and Recommendations for AI and Media Integrity by Partnership on AI
Accurate and trustworthy information plays a key role in a well-functioning society. When that is compromised by synthetic and non-synthetic actors polluting media streams with disinformation to the point of confounding public discourse on matters of importance, we must deploy mechanisms to regain control. This has come into sharp focus with the ongoing pandemic, where everyone is flooded with information but has difficulty ascertaining what is true and what isn't. In an increasingly automated world, we'll see further use of deepfake technology to push malicious content onto people, and the Deepfake Detection Challenge is meant to surface effective technical solutions to mitigate this problem. The Partnership on AI, via its AI and Media Integrity team, compiled a helpful list of recommendations to improve the challenge, but the lessons also apply widely to others doing work in this space. At MAIEI, we have our own ongoing work with a team of professionals spanning UX design, machine learning, inclusive design, educational program development, media psychology and industry that is helping to address challenges in the field. The document from PAI highlights the importance of building multi-stakeholder teams that have worked extensively with each other to improve the efficacy of the mechanisms developed for media integrity. It also provides insights on how to construct datasets, how to set up scoring rules, responsible publishing practices and more that position efforts in the domain to succeed. The guidelines are actionable and encourage a mixed-methods approach that combines technology with the existing practices of journalists and fact checkers. The report also advocates for teams building these solutions to make them available and accessible to the wider ecosystem so that we achieve defense in depth via the deployment of these tools at various points in the content lifecycle.
To delve deeper, read our full summary here.
Robot Rights? Let’s Talk about Human Welfare instead by Abeba Birhane and Jelle van Dijk
This paper highlights a quintessential dilemma that often pops up when newcomers to the domain of AI ethics begin learning about the field. They stumble upon articles touting the importance of treating robots like humans and giving them rights akin to human rights. What these articles often ignore is the degree to which fully sentient machines are a pipe dream in the near and medium term and, based on current estimates by reputable researchers in the field, a fantasy even in the long run. So why the focus on them? The paper dives into where this stems from and why it is deeply problematic. The premise is that, given present levels of technology and their impacts on humans today, we ignore or divert resources and attention away from concerns about how AI systems disproportionately impact the marginalized, and focus instead on the problems of an imaginary future scenario. Through numerous examples, the authors illustrate how today's machine learning systems have a great deal of human input behind them and are essentially human-machine systems, with a class of workers operating in the shadows to enable the wonders of automated technology. Given their pervasive impacts and their problems with bias and fairness, entrenching existing stereotypes and creating further disadvantages for the most vulnerable, these systems need scrutiny and analysis before they are made an invisible part of our everyday social fabric. Robots today, even in social contexts where they might appear warm and be cherished by humans, such as care robots, hold no more significance than objects one relishes, like a nice espresso machine. Attributing agency and autonomy to such systems beyond their capabilities, and thus asking us to think about the rights they deserve, is putting the cart well before the horse. True AI ethics should concern itself with mitigating the real harms that real humans are experiencing, and meaningfully balancing that against efforts devoted to potential problems in a distant future.
To delve deeper, read our full summary here.
Machine Learning Fairness: Lessons Learned (Google I/O) by Tulsee Doshi and Jacqueline Pan from the Google ML Fairness team
When we think about fairness in ML systems, we usually focus a lot on data and not as much on the other pieces of the pipeline. This talk provides some illustrative examples from the Google Fairness in ML team on how to make ML systems fairer by looking at design, data, and measurement and modeling. Motivated by commonplace examples of how skewed underlying data can lead to significant harms, one example notes that it was only post-2011 that female crash dummies came into use; before that, women had higher rates of injuries in car crashes because their body types were not included in crash testing by automotive firms. Numerous services provided by Google, including Jigsaw, which helps detect the toxicity level of a piece of text, had flaws that were not initially caught by the team but emerged post-deployment when users pointed out how results were biased towards common stereotypes. A great design example concerned how band-aids were, until quite recently, made in just one color and didn't serve well the needs of people with darker skin tones. The importance of measurement and modeling was made clear through examples highlighting how creating and tracking fairness metrics helps to monitor a system over time and gives teams intelligence on which areas they can do better in. The lessons learned by the team from various deployments are grouped under the categories of fairness by data, fairness by design, and fairness by measurement and modeling. The lessons are a mix of aspirational and actionable steps that help on-the-ground development and design teams grapple with the challenges of translating abstract principles into concrete steps, and they provide a neat framework. The talk was followed at the end of 2019 by the launch of the Fairness Indicators toolset, which integrates with TFX and other frameworks and can be used to carry out some of the actions mentioned in the lessons.
To delve deeper, read our full summary here.
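The talk doesn't prescribe a specific implementation, but to make "creating and tracking fairness metrics" a little more concrete, here is a minimal sketch in plain Python of one of the slice-based checks that tools like Fairness Indicators automate: comparing false positive rates across groups. The data, group labels and numbers below are entirely hypothetical, purely for illustration.

```python
# Minimal sketch: compare false positive rates across demographic slices.
# All data and group labels here are hypothetical, for illustration only.

def false_positive_rate(labels, preds):
    """FPR = false positives / all actual negatives."""
    negatives = [(y, p) for y, p in zip(labels, preds) if y == 0]
    if not negatives:
        return 0.0
    false_positives = sum(1 for y, p in negatives if p == 1)
    return false_positives / len(negatives)

# Toy predictions for two slices of users (hypothetical).
group_a = {"labels": [0, 0, 1, 0, 1, 0], "preds": [0, 1, 1, 0, 1, 0]}
group_b = {"labels": [0, 0, 1, 0, 1, 0], "preds": [1, 1, 1, 0, 1, 1]}

fpr_a = false_positive_rate(group_a["labels"], group_a["preds"])
fpr_b = false_positive_rate(group_b["labels"], group_b["preds"])

print(f"FPR group A: {fpr_a:.2f}, FPR group B: {fpr_b:.2f}")
print(f"FPR gap: {abs(fpr_a - fpr_b):.2f}")  # a gap worth tracking over time
```

Tracked release over release, a widening gap like this is the kind of signal the talk suggests teams monitor, rather than relying on a single aggregate accuracy number.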
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
Photo by Vladimir Palyanov on Unsplash
With painted faces, artists fight facial recognition tech
London is among a few cities that have seen public deployment of live facial recognition technology by law enforcement with the aim of increasing public safety. More often than not, however, this is done without any public announcement, or any explanation of how the technology works and what impact it will have on people’s privacy. As discussed in an article by MAIEI on smart cities, such a lack of transparency erodes public trust and affects how people go about their daily lives. Several artists in London, as a way of regaining control over their privacy and raising awareness, are painting adversarial patterns on their faces to confound facial recognition systems. They use highly contrasting colors to mask the highlights and shadows of their faces, drawing on patterns created and disseminated by the CV Dazzle project, which offers many different styles so that the more fashion-conscious among us can express ourselves while preserving our privacy. Such projects showcase a rising awareness of the negative consequences of AI-enabled systems, and how people can use creative solutions to combat problems where laws and regulations fail them.
AI Is Coming for Your Most Mind-Numbing Office Tasks
As discussed in our last newsletter, the labor impacts of AI require nuanced discussion rather than fear-mongering that veers between over-hyping and downplaying concerns, when the truth lies somewhere in the middle. In the current paradigm of supervised machine learning, AI systems need a lot of data before becoming effective at their automation tasks. The bottom rung of this ladder is robotic process automation, which merely tracks how humans perform a task (say, by recording their clicks as they go about their work) and apes those steps for simple tasks like copying and pasting data between different places. The article gives an example of an organization that cut employee churn by more than half thanks to a reduction in data-drudgery tasks like copying and pasting data across different systems to meet legal and compliance obligations. Economists point out that white-collar jobs like these, and middle-tier jobs in terms of skills that require little training, are at the highest risk of automation. While we’re still a ways away from AI taking over all jobs, there is a slow march starting from automating the most menial tasks, potentially freeing us up to do more value-added work.
We Asked an A.I. to Write a Column for Us. The Results Were Wild
What happens when AI starts to take over the more creative domains of human endeavour? Are we ready for a future where our last bastion against the rise of machines, the creative pursuit, is violently snatched away from us? In a fitting start to feeling bereft in times of global turmoil, this article opens with a story created by GPT-2, a machine learning model trained on more than 8 million documents online that iteratively predicts the next word in a sentence given a prompt. The story, “Life in the Time of Coronavirus”, paints a desolate and isolating picture of a parent following their daily routine and feeling different because of all the changes happening around them. While the short story takes weird turns and is not completely coherent, it does give an eerie feeling that blurs the line between what could be perceived as something written by a human and something written by a machine. A news-styled article on the use of facial recognition systems for law enforcement sounds very believable if presented outside the context of the article. The final piece, a fictional narrative, presents a fractured, jumpy storyline of a girl with a box, with hallucinatory tones to its storytelling. The range of examples from this system is impressive, but it also highlights how much further these systems have to go before they can credibly take over jobs. That said, there is potential to spread disinformation via snippets like the second example we mention, and hence something to keep in mind as you read things online.
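The article doesn't share the exact setup used to produce these stories, but the underlying mechanism, feeding GPT-2 a prompt and letting it repeatedly predict the next token, can be reproduced with the publicly released model. A minimal sketch using the Hugging Face transformers library might look like the following; the prompt text and sampling settings here are our own placeholders, not the ones the journalists used.

```python
# Minimal sketch of prompt-based generation with the public GPT-2 model,
# using the Hugging Face transformers library. The prompt and sampling
# parameters are placeholders; the article's actual setup is not specified.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Life in the time of coronavirus:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model predicts the next token given everything generated so far;
# sampling (rather than greedy decoding) gives the meandering, "wild" feel.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Running this a few times with the same prompt yields very different continuations, which is part of why the resulting stories feel coherent in patches but drift unpredictably over longer passages.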
AI is an Ideology, Not a Technology
In this insightful op-ed, two pioneers in technology shed light on how to think about AI systems and their relation to existing power and social structures. The last line of the piece, “… all that is necessary for the triumph of an AI-driven, automation-based dystopia is that liberal democracy accept it as inevitable”, aptly captures the current mindset surrounding AI systems and how they are discussed in the Western world. TV shows like Black Mirror perpetuate narratives showcasing the magical power of AI-enabled systems, hiding the fact that millions, if not billions, of hours of human labor undergird the success of modern AI systems, which largely fall under the supervised learning paradigm that requires massive amounts of data to work well. The Chinese ecosystem is a bit more transparent in the sense that its shadow industry of data labellers is known and workers are compensated for their efforts. This makes them a part of the AI development lifecycle and shares economic value with people other than the tech elite directly developing AI. In the West, by contrast, such efforts go largely unrewarded because we trade that effort of data production for free services. The authors give the example of Audrey Tang and Taiwan, where citizens have formed a data cooperative and have greater control over how their data is used. Contrast that with highly valued search engines standing on top of community-run efforts like Wikipedia, which create much of the actual value in the search results, given that many of the highly placed results come from Wikipedia. Ultimately, this gives us some food for thought on how we portray AI today and its relation to society, and why it doesn’t necessarily have to be that way.
The Unnatural Ethics of AI Could Be Its Undoing
The Trolley Problem is a widely touted ethical and moral dilemma in which a person must make a split-second choice about whose lives to save across a series of scenarios where the people to be saved have different characteristics, including their jobs, age, gender, race, and so on. In recent times, with the imminent arrival of self-driving cars, people have used this problem to highlight the supposed ethical dilemmas that a vehicle's systems might have to grapple with as they drive around. This article argues that the thought experiment is a contrived introduction to ethics for the people who will be building and operating such autonomous systems: the situation is unlikely to arise in a real-world setting, and it distracts from other, more pressing concerns in AI systems. Moral judgments are also relativistic and depend on the cultural values of the place where a system is deployed; the Nature paper cited in the article showcases the differences in how people respond to the dilemma. There is an eeriness to the whole experimental setup, and the article gives examples of how increasingly automated environments, devoid of human social interaction and language, are replete with the clanging and humming of machines that make for an entirely inhuman experience. Most systems are going to be a reflection of the biases and stereotypes we have in the world, captured because of how AI systems are trained and developed today. We need to make changes and bring diversity into the development process, creating awareness of ethical concerns, but the Trolley Problem isn't the most effective way to get started.
Franken-algorithms: The Deadly Consequences of Unpredictable Code
Mary Shelley created an enduring fiction which, unbeknownst to her, has today manifested itself in the digital realm through layered abstractions of algorithms that increasingly run multiple aspects of our lives. The article dives into the world of black-box systems that have become opaque to analysis because of their stratified complexity, leading to situations with unpredictable outcomes. This was exemplified when an autonomous vehicle crashed into a crossing pedestrian and it took months of post-hoc analysis to figure out what went wrong. When we talk about intelligence in the case of these machines, we're using the word in a very loose sense, like the term “friend” on Facebook, which covers everything from your best friend to a random acquaintance; both terms convey a greater sense of meaning than is actually true. When such systems run amok, they have the potential to cause significant harm, a case in point being the flash crashes financial markets have experienced because of the competitive behaviour of high-frequency trading firms' algorithms facing off against each other. Something similar has happened on Amazon, where items get priced in an unrealistic fashion because of buying and pricing patterns triggered by automated systems. While in a micro context the algorithms and their workings are transparent and explainable, when they come together in an ecosystem like finance they produce an emergent complexity whose behaviour can't be predicted ahead of time with much certainty. But such justifications can't be used as a cover for evading responsibility when it comes to mitigating harms. Existing laws need to be refined and amended so that they can better meet the demands of new technology, where the allocation of responsibility is a fuzzy concept.
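The article doesn't detail the Amazon pricing incidents, but the dynamic it describes, individually simple rules producing runaway behaviour once they interact, is easy to see in a toy simulation. The sketch below is hypothetical: two imagined sellers whose repricing rules react to each other, with made-up numbers, not a reconstruction of any real marketplace algorithm.

```python
# Toy simulation of two automated repricing rules interacting.
# Each seller's rule is simple and transparent on its own, but together
# they produce a runaway price spiral. Sellers, rules and numbers are
# hypothetical, purely to illustrate emergent behaviour.

seller_a = 20.00   # wants to look "premium": prices above the competitor
seller_b = 19.50   # wants to win the sale: prices just under seller A

for day in range(1, 61):
    seller_a = round(seller_b * 1.27, 2)    # A: 27% above B's latest price
    seller_b = round(seller_a * 0.9983, 2)  # B: a hair under A's latest price
    if day % 10 == 0:
        print(f"day {day}: A = ${seller_a:,.2f}, B = ${seller_b:,.2f}")

# After a couple of simulated months, prices run into the millions even
# though each rule looks perfectly reasonable in isolation.
```

Neither rule is opaque; the unpredictability comes entirely from the feedback loop between them, which is the article's broader point about algorithms interacting in ecosystems like finance.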
From the archives:
Here’s an article from our blogs that we think is worth another look:
Autonomous Vehicles, Social Robots, and the Mitigation of Risk: A New Consequentialist Approach by Anthony De Luca-Baratta (Philosophy & Political Science, McGill University)
This paper explores how traditional consequentialism falls short in scenarios involving autonomous vehicles and social robots. A modified version dubbed ‘risk consequentialism’ is put forward for consideration as an approach that can help guide policy decisions in grave risk scenarios while accommodating a high-uncertainty future.
Guest contributions:
We invite researchers and practitioners working in different domains studying the impacts of AI-enabled systems to share their work with the larger AI ethics community. Here’s this week’s featured post:
A 16-year-old AI developer’s critical take on AI ethics
This guest post was contributed by Arnav Paruthi, a 16-year-old student diving deep into the world’s biggest problems with the help of world-class guidance from The Knowledge Society (TKS).
If you’re working on something interesting and would like to share that with our community, please email us at support@montrealethics.ai
Events:
As a part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems. You can see a complete list here: https://montrealethics.ai/meetup
Given the advice from various health agencies, we’re avoiding physical events to curb the spread of COVID-19. Stay tuned for updates!
From elsewhere on the web:
Things from our network and more that we found interesting and worth your time.
One of our favourite newsletters in AI is ImportAI, produced by Jack Clark, which does a deep dive into the latest AI research happening in the field, along with a bonus gem of futuristic fictional narratives on how AI might be used.
RE-WORK has compiled a helpful list of AI resources to keep you busy and informed as we all #stayhome (if possible) and fight this pandemic together, take a look here.
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai
If someone has forwarded this to you and you like what you read, you can subscribe to receive this weekly newsletter by clicking below