The AI Ethics Brief #48: Privacy Paradox, artifact affordances, the AI gambit, and more ...

Are we ready for robot art exhibitions? What does that even mean?

👋 Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at

This week’s Brief is a ~12-minute read.

Support our work through Substack

💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.

*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.

This week’s overview:

✍️ What we’re thinking:

  • How Artifacts Afford: The Power and Politics of Everyday Things

  • The importance of goal setting in product development to achieve Responsible AI

🔬 Research summaries:

  • Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe

  • The AI Gambit – Leveraging Artificial Intelligence to Combat Climate Change

📰 Article summaries:

  • Myanmar: Facial Recognition System Threatens Rights (Human Rights Watch)

  • Google’s FLoC Is a Terrible Idea (Electronic Frontier Foundation)

  • A ‘splinternet’ won’t solve global cyber defense problems (C4ISRNET)

  • Robot artist Ai-da ready for her first art exhibition (TRT World)

But first, our call-to-action this week:

Tomorrow, attend The State of AI Ethics Panel at 12PM!

What's next for AI Ethics in 2021? And what is the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google? Hear from a world-class panel, including:

  • Danielle Wood — Assistant Professor in the Program in Media Arts & Sciences, MIT (@space_enabled)

  • Katlyn M Turner — Research Scientist, MIT Media Lab (@katlynmturner)

  • Catherine D’Ignazio — Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning, MIT (@kanarinka)

  • Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)

  • Abhishek Gupta — Founder, Montreal AI Ethics Institute (@atg_abhishek)

📅 March 24th (Wednesday)
🕛12 PM - 1:30 PM EST
🎫 Get free tickets

✍️ What we’re thinking:

From the Founder’s Desk:

The importance of goal setting in product development to achieve Responsible AI by Abhishek Gupta

" ... sometimes we get excited about new technology and jump head-first into taking it to find a problem that we can solve with our shiny new toy.

The lack of clarity on why a team is working on a project can have consequences for adherence to responsible AI principles. The exercise of goal setting helps us foreground the reasons for doing a certain project.

This is the first step in making sure that we can centre responsible AI principles in the project and inject that in the foundations."

To delve deeper, read the full article here.

The Sociology of AI Ethics:

How Artifacts Afford: The Power and Politics of Everyday Things

Davis’s book is an accessible and succinct treatment of technology as a journey through social power and politics, and it serves as an invaluable guidebook for tech analysts, designers, and builders willing to venture on that path. Davis traces the idea of “affordance” through its intellectual history to show how, despite its multifarious meanings and (mis)uses, it remains a useful concept. She introduces the “mechanisms and conditions framework” of affordances, which specifies how technologies reflect and shape human social behaviour, giving us a transferable tool for critical analysis and intentional design.

To delve deeper, read the full summary here.

🔬 Research summaries:

Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe

Thanks to the pandemic, increasing internet connectivity, and companies sharing our data ever more efficiently, even our most private data…isn’t. This paper explores data privacy in an AI-enabled world. Data awareness has increased since 2019, but the fear remains that Smith’s findings will stay relevant for too long.

To delve deeper, read the full summary here.

The AI Gambit – Leveraging Artificial Intelligence to Combat Climate Change

2020’s record-breaking temperatures renewed the push for technological solutions in tackling climate change. In this report, the authors offer recommendations for policymakers in determining AI’s potential to direct climate action without exacerbating the technology’s own environmental impact.

To delve deeper, read the full summary here.

📰 Article summaries:

Myanmar: Facial Recognition System Threatens Rights (Human Rights Watch)

  1. What happened: Under the guise of a “Safe City” project, facial recognition technology has been deployed in two major cities in Myanmar. The technology, supplied by Huawei and other providers, runs on CCTV infrastructure powered by AI capable of recognizing faces and license plates.

  2. Why it matters: This is problematic because of the recent coup by the military junta and its suspension of several provisions, stripping away fundamental rights in the country, including the right to privacy. The system will give the junta even more power to disperse any protests that might be organized, further diminishing people’s rights.

  3. Between the lines: Deploying infrastructure without adequate public consultation and accountability mechanisms can lead to problems when control falls into the hands of malicious actors, who can repurpose something created for one purpose to serve their own needs. With all the rapid approvals and deployments of similarly invasive technologies during COVID-19, we must pay attention and call out potential problems as early as possible to prevent situations like the one unfolding in Myanmar right now.

Google’s FLoC Is a Terrible Idea (Electronic Frontier Foundation)

  1. What happened: In trying to move the web toward a more privacy-friendly posture, Google has proposed FLoC (Federated Learning of Cohorts), a mechanism that removes third-party cookies (what advertisers use to track your web activity) from your browser and replaces them with an identifier that you share with other “similar” users. Similarity is calculated using the SimHash algorithm, and the FLoC scheme asserts that your cohort won’t be so small that you can be identified easily.

  2. Why it matters: You’re still being tracked. Your FLoC ID is computed as a summary of your recent web browsing history and is shared with other “similar” users. For it to be useful to advertisers, it has to identify you to some degree, which leaks information about your most recent web activity. And since the scheme groups you into cohorts, if you belong to a group such as the LGBTQ community and don’t want to be profiled or discriminated against, you don’t have many options: a summary of your recent behaviour is presented to every advertiser who opts into the scheme.

  3. Between the lines: What is evident from the project’s GitHub page is that auxiliary information about you (such as when you log in to a website), correlated with your FLoC ID, gives a website even more information it can use to target you. In addition, since FLoC will use unsupervised learning to create the cohorts, those cohorts could end up aligning with racial categories. Google has said it would rejig the groups if that starts to happen. How likely is that? We’ll see.
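As a rough illustration of the cohort idea, here is a minimal sketch of SimHash over browsing histories. The domain names and the use of MD5 here are purely illustrative assumptions; Chrome’s actual FLoC feature extraction and clustering differ.

```python
import hashlib

def simhash(features, bits=64):
    """Compute a SimHash fingerprint: similar feature sets tend to yield
    fingerprints with a small Hamming distance."""
    v = [0] * bits
    for feat in features:
        # Hash each feature (e.g. a visited domain) to a `bits`-bit integer.
        h = int.from_bytes(hashlib.md5(feat.encode()).digest()[:bits // 8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    # The sign of each accumulator position gives one fingerprint bit.
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical browsing histories (illustrative domains only).
history_a = ["news.example", "shoes.example", "weather.example"]
history_b = ["news.example", "shoes.example", "sports.example"]   # mostly overlapping
history_c = ["crypto.example", "games.example", "forum.example"]  # disjoint

fa, fb, fc = simhash(history_a), simhash(history_b), simhash(history_c)
# Overlapping histories typically land closer in Hamming distance than
# disjoint ones, which is the property used to group users into cohorts.
```

Because the fingerprint is a sum over features, it is order-independent, and users with similar histories can be bucketed together without transmitting the raw history itself.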

A ‘splinternet’ won’t solve global cyber defense problems (C4ISRNET)

  1. What happened: Since 2019, at least 35 countries have banned apps that threatened their conceptions of how they want to manage their regimes. The most recent incident is the swift ban Clubhouse faced in China; before that, TikTok in India a few months ago. Such a splintered internet exacerbates global cybersecurity problems, making it harder for countries to coordinate with one another. Unlike in the physical world, there are no boundaries in cyberspace that can’t be scaled.

  2. Why it matters: With many more people working remotely, cyber attacks have gained prominence since March 2020 and have led to significant breaches, including the recent SolarWinds attack. Coordinating with other countries’ cyber defense centres (only a few countries have mature capabilities) and sharing threat signatures globally will be crucial if we’re to move more and more of our lives into the digital realm.

  3. Between the lines: Perhaps the regimes that engage in this behaviour hesitate for two reasons. First, they want to maintain tight control over “their” cyberspace without much care for splintering. Second, since some of these countries have advanced cyber offensive and defensive capabilities, they want to remain superior for strategic reasons and don’t necessarily want to help others build up cyber capacity.

Robot artist Ai-da ready for her first art exhibition (TRT World)

  1. What happened: A robot artist will display three pieces at an art exhibition in May 2021, with the goal of helping people contemplate the relationship between humans and machines through self-portraits made by an entity that has no sense of “self”.

  2. Why it matters: Art is a powerful instrument for examining issues that are ahead of our time. For example, what might relationships between humans and machines look like? And how might machines, as they become more capable and gain more agency, exhibit that agency in ways traditionally reserved for humans?

  3. Between the lines: Some worrying concerns: the anthropomorphization of the robot artist, the decision to give it a gender, and having it speak alongside human artists who have real artistic agency. We might be getting ahead of ourselves in how we portray the capabilities of existing AI systems.

From elsewhere on the web:

The Role Africa can play in AI Ethics

Our Partnerships Manager, Connor Wright will be speaking at this event!

Among the top 100 websites, 99 are commercial. Only one is designed to perform any kind of public service — Wikipedia. In much the same way, left to its own devices, the development of AI will embody predominantly commercial values. How can we ensure that AI is developed in service of the public good?

📅 Mar 26, 2021 (Friday)
🕛 5:00 AM – 7:00 AM EST
🎫 Get free tickets

Guest post:

If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to You can pitch us an idea before you write, or a completed draft.

In case you missed it:

Explaining and Harnessing Adversarial Examples

A bemusing weakness of many supervised machine learning (ML) models, including neural networks (NNs), is adversarial examples (AEs). AEs are inputs generated by adding a small perturbation to a correctly classified input, causing the model to misclassify the resulting AE with high confidence. Goodfellow et al. propose a linear explanation of AEs, in which the vulnerability of ML models to AEs is a by-product of their linear behaviour and high-dimensional feature space. In other words, small perturbations on an input can alter its classification because the change in NN activation (as a result of the perturbation) scales with the size of the input vector.

Identifying ways to effectively handle AEs is of interest for problems like image classification, where the input consists of intensity data for many thousands of pixels. A method of generating AEs called the “fast gradient sign method” badly fools a maxout network, leading to an 89.4% error rate on a perturbed MNIST test set. The authors propose an “adversarial training” scheme for NNs, in which an adversarial term is added to the loss function during training.

This dramatically reduces the error rate of the same maxout network to 17.4% on AEs generated by the fast gradient sign method. The linear interpretation of adversarial examples suggests an approach to adversarial training that improves a model’s ability to classify AEs, and it helps explain properties of AE classification that the previously proposed nonlinearity and overfitting hypotheses do not.
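The linear intuition can be sketched on a toy model. The snippet below applies the fast gradient sign method to a single logistic-regression unit rather than the maxout networks used in the paper (a simplifying assumption): each input dimension moves by only ±ε, yet the resulting shift in activation is roughly ε·Σ|wᵢ|, which grows with the input dimension.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method on a logistic-regression unit:
    perturb x by eps in the direction of the sign of the loss gradient."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=784)          # high-dimensional input, as with MNIST pixels
b = 0.0
x = rng.normal(size=784)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # treat the model's own label as correct

x_adv = fgsm(x, y, w, b, eps=0.25)
# Although each component moved by only ±0.25, the activation shifts by
# eps * sum(|w_i|), which scales with the 784 input dimensions and can
# swamp the original activation, flipping the prediction.
```

This is the paper’s core point in miniature: the perturbation per pixel is tiny, but in a high-dimensional linear model its aggregate effect on the activation is large.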

To delve deeper, read the full summary here.

Take Action:


Register to watch our Learning Community seminars!

Want to audit our learning community seminars and be a fly on the wall? Here’s your chance: we’re live-streaming all of them via Zoom every Wednesday at 5PM, beginning this week. Come hang out and contribute to the discussion via chat.

📅 March 17th – April 28th (every Wednesday)
🕛 5 PM – 6:30 PM EST
🎫 Get free tickets