The AI Ethics Brief #42: Exposing AI, losing control of our faces, internet health, governance by algorithms and more ...

Do you trust a self-driving car to navigate piles of snow on the road?

Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at

Support our work through Substack

💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.

*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.

This week’s overview:

What we are thinking:

  • From the Founder’s Desk: Small steps to *actually* achieving Responsible AI

  • The Sociology of AI Ethics: Rethinking Gaming: The Ethical Work of Optimization in Web Search Engines (Research Summary)

Research summaries:

  • The Limits of Global Inclusion in AI Development

  • Governance by Algorithms

Article summaries:

  • Here’s a Way to Learn if Facial Recognition Systems Used Your Photos (NY Times)

  • The Future of Responsible Tech: Experts Share Their Lessons and Predictions for 2021 (Salesforce Blog)

  • Forget user experience. AI must focus on ‘citizen experience’ (VentureBeat)

  • Mozilla Took the Internet’s Vitals. And the Results Are Concerning. (OneZero)

  • This is how we lost control of our faces (MIT Tech Review)

  • Snow and Ice Pose a Vexing Obstacle for Self-Driving Cars (Wired)

But first, our call-to-action of the week:

The State of AI Ethics Report January 2021 was published last week and it captures the most relevant developments in research and reporting from the past quarter!

150+ pages that save you 150+ hours!

Get your copy now!

✍️ What we’re thinking:

From the Founder’s Desk:

Small steps to *actually* achieving Responsible AI by Abhishek Gupta

Responsible AI can seem overwhelming to achieve. I am with you on that. It comes with so many challenges that it is easy to get lost and feel disheartened trying to get anything done. But, as the saying goes, a journey of a thousand miles begins with a single step, and I believe there are small steps we can take to actually achieve Responsible AI in a realistic manner.

To delve deeper, read the full article here.

The Sociology of AI Ethics:

Rethinking Gaming: The Ethical Work of Optimization in Web Search Engines by Malte Ziewitz

Through ethnographic research, Ziewitz examines the “ethical work” of search engine optimization (SEO) consultants in the UK. Search engine operators, like Google, have guidelines on good and bad optimization techniques to dissuade users from “gaming the system” to keep their platform fair and profitable. Ziewitz concludes that when dealing with algorithmic systems that score and rank, users often find themselves in sites of moral ambiguity, navigating in the grey space between “good” and “bad” behavior. Ziewitz argues that designers, engineers, and policymakers would do well to move away from the simplistic idea of gaming the system, which assumes good and bad users, and focus instead on the ethical work that AI systems require of their users as an “integral feature” of interacting with AI-powered evaluative tools. 

To delve deeper, read the full summary here.

🔬 Research summaries:

The Limits of Global Inclusion in AI Development by Alan Chan, Chinasa T. Okolo, Zachary Terner, Angelina Wang

Western AI institutions have started to involve more diverse groups in the development and application of AI systems as a response to calls to level out the current global imbalances in the field. In this paper, the authors argue that increased representation can only go so far in redressing the global inequities in AI development and outline how to achieve broader inclusion and active participation in the field.

To delve deeper, read the full summary here.

Governance by Algorithms by Francesca Musiani

Exploring the worlds of e-commerce and search engines, this paper argues that algorithms can no longer be relegated to merely inputting and outputting data according to specific calculations. Their intangibility, their unquestionability, and their influence over what we believe agency to be are all explored here, giving us the low-down on the influence such algorithms can, and do, have in our lives.

To delve deeper, read the full summary here.

📰 Article summaries:

Here’s a Way to Learn if Facial Recognition Systems Used Your Photos (NY Times)

At the Montreal AI Ethics Institute, we are all about tools that help to empower people, and this article discusses one called Exposing AI that helps people determine whether the pictures they posted on Flickr were used to train facial recognition technology. While the days of using Flickr may be largely behind us (perhaps some of the younger readers of this newsletter have never even heard of it), the permissive licenses on images uploaded to the website mean they continue to have large-scale implications for facial recognition technology today, since they constitute the training datasets behind many modern systems, including some that have been used for surveillance in China, as the article points out.

One of the things I appreciated about the creation of this tool was that its makers took security seriously, to prevent people from using it as a querying tool to extract information about specific individuals and how deployed facial recognition systems perform on them. It only allows you to query images that are already publicly available and have a URL pointer, further decreasing the possibility that someone can simply upload an arbitrary image and run a query against the tool.

The article points out how other large-scale datasets, some assembled illegally, may have been co-opted into something that is now being misused. One example is MegaFace from the University of Washington, which was built in earnest to help researchers in the community and had an associated competition. This is characteristic of creating and publishing datasets, especially without consent: we lose the ability to control who uses them in the future, and we cannot control their distribution even after they are taken down, as was the case with MegaFace. Copies of that data still circulate online and may be continuing to fuel more surveillance applications.

The Future of Responsible Tech: Experts Share Their Lessons and Predictions for 2021 (Salesforce Blog)

Given that the Montreal AI Ethics Institute is firmly centered on community in all of its work, we appreciated the shout-out from Salesforce in this article mentioning community as a central pillar in achieving responsible AI, akin to what I had written a few weeks ago in my article titled: Why Civic Competence is Needed in AI Ethics. One point that came up again and again, shared by all the authors, was the importance of addressing bias in the many forms in which it manifests in automated solutions. This matters especially because technology deployment has accelerated during the pandemic, making such solutions pervasive in many parts of our lives.

Another great insight in the article was design as a core consideration that will shape the future of technology and its ability to uplift people, something that gets talked about a little less than other issues in the field. Finally, I also really appreciated the call-out that building responsible technology will be the responsibility of all of us, not just isolated actors calling for justice. This will be bolstered by the introduction of regulations that provide more of a mandate to undertake these activities and create a forcing function for companies to invest in building and deploying responsible AI solutions.

Forget user experience. AI must focus on ‘citizen experience’ (VentureBeat)

The core argument of the article is that not enough attention is paid to the needs and rights of citizens when designing and developing AI systems, which leads to misaligned expectations and harm inflicted on citizens through violations of their rights. The article offers some basic remedies: utilizing a multidisciplinary team, integrating these considerations throughout the lifecycle, co-developing solutions with those who will be impacted by the systems, and most of all placing citizens at the center of the design process rather than treating them as yet another data point that feeds your machine.

Co-development and participatory design are ideas that a former researcher at the Montreal AI Ethics Institute and I worked on in proposing better ways of designing contact-tracing apps at the start of the COVID-19 pandemic. The article also speaks to the need for greater attention to legal validity and compliance with best practices that limit an application’s negative impacts. A point that really caught my attention, and which digs deeper into the idea of citizen experience, is educating people about the real capabilities and limitations of these systems so that they can protect themselves.

My piece on civic competence as a core pillar for AI ethics touches on that idea and reinforces the notion that we need to have better educational efforts, something that the Montreal AI Ethics Institute does through its learning communities, workshops, and other initiatives including the State of AI Ethics Reports. Ultimately, I believe that we need a multipronged approach if we are to get to responsible AI in practice, and this is one of those methods that can help move the needle.

Mozilla Took the Internet’s Vitals. And the Results Are Concerning. (OneZero)

Always an intriguing read from the folks over at Mozilla, the Internet Health Report is a good pulse check on everything going right and wrong with the internet and how we interact with each other over it. Some results are unsurprising: more people started using the internet than in the previous year, aided in part by the pandemic, and technology development and use remain concentrated in a small set of companies, mostly American along with a couple of Chinese firms. Their influence spreads beyond the assets they directly own, since they provide fundamental building blocks like cloud computing, on which other services that are increasingly essential to our lives are built.

The report also pointed out how unevenly such access is distributed and how subject it is to surveillance and control, noting in particular that there was an internet shutdown somewhere in the world on every day of the year, some lasting weeks or months, allowing human rights violations to occur unchecked and unexamined. The mass deployment of technologies like facial recognition further threatens people’s rights, and many activists both inside and outside companies are opposing such rollouts.

Ending the report on a more positive note, the Mozilla Foundation gives the internet a positive prognosis based on the fact that more people are rising up to assert their rights and power, for example, gig workers demanding basic protections like sick days and other benefits offered to those in full-time employment. It comes down to collective action: the more aware we become of these issues, the better we will be able to articulate our demands for a healthier ecosystem for all.

This is how we lost control of our faces (MIT Tech Review)

This article covers a comprehensive, recently published study of the datasets that go into the making of facial recognition technology, pointing out how the health of the ecosystem has devolved in terms of consent and respect for individual privacy. The authors of the paper argue that in the early days of facial recognition technology, data was collected and annotated through highly manual, labor-intensive methods, which kept data collection limited and made obtaining consent the norm. But as more people realized the potential this technology offered, they sought to overcome the limitations of the era’s techniques by utilizing larger and larger datasets.

At first, larger datasets were still manually collected, but to fully leverage the advantages that deep learning systems offered, researchers needed to move beyond the limits of human collection methods and turned to automated scraping of web data to find more faces. This inadvertently sucked up photos of minors, and of course consent wasn’t a major concern, as the researchers were focused on improving the performance of their systems. Datasets like Labeled Faces in the Wild, among others, contain pictures of people who have not necessarily consented to all the uses of that data. As mentioned in the opening article of this newsletter, consumers are more sophisticated now; they want to know how their data is being used, and they unapologetically call out companies that collect and use user data without proper consent.

Yet, as we know with data on the internet, once it is out there, we have very little control in terms of how it might be used. So, prevention and proper consent mechanisms are going to be key if we’re to achieve an environment where the rights of people are placed first and foremost over research interests and commercial applications.

Snow and Ice Pose a Vexing Obstacle for Self-Driving Cars (Wired)

As a Canadian, I’m well-versed in snow in all its fun, wintry glory and the challenges that come with it, be that slush on the sidewalk or more serious concerns like snow blanketing the road and making driving really hard. With the excitement around self-driving cars tempered last year, it shouldn’t come as a surprise that most tests done thus far have had the cars operating in sunny, nice-weather conditions. What does this mean for those of us who inhabit more wintry terrains?

As demonstrated by the dataset collection undertaken by one of the researchers interviewed in this article, self-driving cars struggle in inclement weather, which raises the risk of misidentifying snow-covered vehicles parked along the road and pedestrians who, bundled in winter clothing, look like little balls. Such disparities in capability pose challenges for where self-driving cars can be deployed, and point to the tremendous amount of work that remains before they see widespread use.

Publicly available datasets will be one of the key instruments in improving the state of the art in these systems. We need not only funding but also incentives that encourage researchers to collect and annotate such data, and that encourage industry players to make their data more widely available, so that all players can work together on vehicles that are safer and more robust in challenging weather conditions.

From elsewhere on the web:

Becoming Authors of the AI Story (TEDxYouth@GandyStreet)

Connor Wright, our Partnerships Manager, gave this talk for TEDxYouth @ GandyStreet detailing why we’re such big believers in civic competence and equipping our community to take action. “If we as the public do not get involved in the AI debate, we will become passive characters in the AI novel currently being designed for us.”

Firings, reorgs and flaring tensions: Inside the last two months of turmoil at Google's AI division (Business Insider)

This article features comments from our founder Abhishek Gupta about the latest at Google. “Google's commitment to building ethical AI has been thrown into question as tensions between employees and management continue to flare.”

Role of AI and Emerging Technologies in Achieving Reduced Inequality (AI Policy Labs)

Our founder Abhishek Gupta and former researcher Allison Cohen join Krutika Choudhury for episode 10 of AI Policy Labs’ AIforSDG​ podcast to discuss how AI can help close inequality gaps.

In case you missed it:

PolicyKit: Building Governance in Online Communities by Amy X. Zhang, Grant Hugh, Michael S. Bernstein

Online communities have various forms of governance, generally built on a permission-based model. Although these models work in theory, in practice they burn out admins and moderators, lack legitimacy with the community, and cannot themselves evolve, so alternative governance models for online communities are necessary. Different models have been tested on other platforms, such as LambdaMOO, which shifted from a dictatorship (governed by “wizards”) to a petition model in which members voted and the wizards implemented the outcome of the votes.

Wikipedia, with its openness to multiple contributors, also faced a series of conflicts, and processing the petitions and votes used to resolve disputes was again left to the admins, a very manual and labor-intensive process. This paper presents PolicyKit, a software platform that empowers online communities to “concisely author” governance procedures on their home platforms. PolicyKit is evaluated on its ability to carry out actions and policies such as random jury deliberation and a multistage caucus.
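To make the idea of authoring a governance procedure in code more concrete, here is a minimal sketch of a random-jury check, one of the policy types the paper evaluates. This is an illustration of the concept only, not PolicyKit’s actual API; the function name and parameters are our own invention.

```python
import random

def random_jury_passes(members, votes, jury_size=3, seed=None):
    """Illustrative random-jury policy (not PolicyKit's API).

    Draw a random jury from the community and approve the proposed
    action only if a strict majority of jurors voted yes.

    members: list of usernames
    votes: dict mapping username -> True (yes) / False (no)
    """
    rng = random.Random(seed)  # seed only for reproducible demos
    jury = rng.sample(members, jury_size)
    yes_votes = sum(1 for juror in jury if votes.get(juror, False))
    return yes_votes > jury_size // 2

# Example: a 5-person community where 3 of 5 voted yes
community = ["ada", "bo", "cy", "dee", "eli"]
ballots = {"ada": True, "bo": True, "cy": False, "dee": True, "eli": False}
decision = random_jury_passes(community, ballots, jury_size=3, seed=42)
```

The appeal of expressing governance this way is that the procedure itself becomes inspectable and modifiable by the community, rather than living in an admin’s head.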

To delve deeper, read the full summary here.

Take Action:


The state of AI ethics in Spain and Canada (El estado de la ética IA en España y Canadá)

We’re partnering with OdiseIA to consult the public about the state of AI ethics in Canada and in Spain. The discussion will span across topics including country-specific regulations, commonalities across both countries, and the type of federal policies that will be needed to move the needle.

📅 February 26th (Friday)
🕛12 PM - 1:30 PM EST
🎫 Get tickets