The AI Ethics Brief #58: Energy and policy in NLP, AI and music, Twitter AI cropping, and more ...
How can we build resiliency in AI systems?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Building for Resiliency in AI Systems
🔬 Research summaries:
Energy and Policy Considerations in Deep Learning for NLP
From the Gut? Questions on Artificial Intelligence and Music
Is AI Greening Global Supply Chains?
📰 Article summaries:
Language models like GPT-3 could herald a new type of search engine (MIT Tech Review)
Twitter's Photo Crop Algorithm Favors White Faces and Women (Wired)
Understanding Contextual Facial Expressions Across the Globe (Google AI Blog)
The Future Of Work Now: Ethical AI At Salesforce (Forbes)
But first, our call-to-action this week:
The Triangle of Trust in Conversational Ethics and Design: Where Bots, Language and AI Intersect
We’re partnering with Salesforce to host a discussion about conversational ethics and design. In this workshop, we will discuss the issues of bias, harm, and trust where bots, language, and AI intersect.
📅 June 10th (Thursday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets
✍️ What we’re thinking:
From the Founder’s Desk:
Building for Resiliency in AI Systems
As AI systems become ever more pervasively deployed in mission-critical contexts, we need to treat resiliency as a key characteristic of these systems. In this article, I discuss four key ideas that can enable resiliency: adversarial examples, system criticality, failover mechanisms, and the “spaghetti” factor.
To delve deeper, read the full article here.
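To make the failover idea concrete, here is a minimal, hypothetical sketch (not from the article) of one way a failover mechanism could look: a wrapper that falls back to a simpler, well-understood model when the primary model errors out or returns a low-confidence prediction. The models, threshold, and interfaces below are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a failover mechanism for an AI system:
# if the primary model fails or is not confident enough, fall back to a
# simpler, well-understood baseline. Thresholds and model interfaces are
# illustrative assumptions, not from the original article.

def predict_with_failover(x, primary_model, fallback_model, confidence_threshold=0.7):
    try:
        label, confidence = primary_model(x)  # assumed to return (label, confidence)
        if confidence >= confidence_threshold:
            return label, "primary"
    except Exception:
        pass  # treat any primary-model failure as a trigger for failover
    # Fall back to a simpler, more predictable model (e.g., a rule-based baseline).
    label, _ = fallback_model(x)
    return label, "fallback"

# Example usage with toy stand-ins for the two models:
primary = lambda x: ("spam", 0.55)       # low-confidence primary prediction
fallback = lambda x: ("not_spam", 1.0)   # deterministic rule-based baseline
print(predict_with_failover("free $$$ click now", primary, fallback))  # -> ('not_spam', 'fallback')
```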
🔬 Research summaries:
Energy and Policy Considerations in Deep Learning for NLP
As we inch towards ever-larger AI models, we have entered an era where achieving state-of-the-art results has become a function of access to huge compute and data infrastructure in addition to fundamental research capabilities. This leads to inequity and harms the environment through the high energy consumption required to train these systems. The paper provides recommendations for how the NLP community can counter this trend by making energy and policy considerations central to the research process.
To delve deeper, read the full summary here.
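For a rough sense of the kind of accounting the paper argues for, here is a back-of-the-envelope sketch of estimating the energy and carbon footprint of a training run from GPU-hours, average power draw, data-centre overhead (PUE), and grid carbon intensity. All of the numbers below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch, in the spirit of the paper's accounting, of the
# energy and carbon cost of a training run. All numbers below are illustrative
# assumptions, not figures from the paper.

def training_footprint(gpu_hours, avg_gpu_power_watts=250, pue=1.58,
                       kg_co2e_per_kwh=0.43):
    """Estimate energy (kWh) and emissions (kg CO2e) for a training run.

    gpu_hours: total GPU-hours of training (e.g., 8 GPUs * 120 hours = 960).
    avg_gpu_power_watts: average draw per GPU during training (assumed).
    pue: data-centre power usage effectiveness overhead (assumed).
    kg_co2e_per_kwh: carbon intensity of the local grid (assumed).
    """
    kwh = gpu_hours * avg_gpu_power_watts * pue / 1000.0
    kg_co2e = kwh * kg_co2e_per_kwh
    return kwh, kg_co2e

kwh, co2 = training_footprint(gpu_hours=960)
print(f"~{kwh:.0f} kWh, ~{co2:.0f} kg CO2e")  # ~379 kWh, ~163 kg CO2e
```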
From the Gut? Questions on Artificial Intelligence and Music
Can AI actually create music, or can it only produce something that resembles it? Should its creations be covered by copyright law? Whether you are a musical purist or a technological advocate, the very soul of music comes into question here, especially around AI creativity.
To delve deeper, read the full summary here.
Is AI Greening Global Supply Chains?
Industry leaders are quick to champion AI as a transformative force for environmental sustainability. In this research paper, Peter Dauvergne examines their claims through a critical international political economy (IPE) lens, and finds that AI’s environmental sustainability benefits for the global supply chain are overstated.
To delve deeper, read the full summary here.
📰 Article summaries:
Language models like GPT-3 could herald a new type of search engine (MIT Tech Review)
What happened: Google has come out with a new proposal to alter the search experience. They propose using large language models to replace the current find-index-rank approach to presenting results for our search queries. Today, we only receive a list of potential matches for the things we search online, and the final decision about what is relevant is left to us. A helpful analogy from the article: the current approach is like asking your doctor a question and being handed a stack of documents that you have to read yourself to find what you are looking for.
Why it matters: The obvious concern with using large language models is the degree to which they can change power dynamics, which are already skewed against people being able to reliably identify false sources. Answering queries in natural language through a black box, without any explanation of how the result was arrived at, is problematic because it can push users towards either algorithmic aversion or automation bias.
Between the lines: Such an approach has tremendous potential to alter our relationship to information gathering. In particular, it could make knowledge discovery more accessible, especially for users whose native language is not English (the lingua franca of the web) and who cannot craft the right query to surface information that exists only in English.
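To make the shift concrete, here is a toy sketch contrasting the two paradigms: a classic pipeline that returns ranked documents for the user to judge, versus a system that returns a single synthesized answer. Everything below (the index, the documents, the stand-in for the language model) is invented for illustration and does not correspond to Google’s proposal or any real API.

```python
# Toy, hypothetical contrast of the two search paradigms discussed above.
# Neither reflects Google's proposal or any real API; all data and interfaces
# are invented purely for illustration.

TOY_INDEX = [
    {"url": "a.example/flu", "score": 0.9, "text": "Flu symptoms include fever..."},
    {"url": "b.example/cold", "score": 0.4, "text": "Colds usually cause a runny nose..."},
]

def classic_search(query):
    """Return a ranked list of documents; the user decides what is relevant."""
    return sorted(TOY_INDEX, key=lambda d: d["score"], reverse=True)

def generative_search(query):
    """Return a single synthesized answer; the system decides relevance for the user."""
    top = classic_search(query)[0]
    return f"Based on {top['url']}: {top['text']}"  # stand-in for an LLM-generated answer

print([d["url"] for d in classic_search("flu symptoms")])
print(generative_search("flu symptoms"))
```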
Twitter's Photo Crop Algorithm Favors White Faces and Women (Wired)
What happened: A few months ago, Twitter users noticed that when an image containing multiple people was cropped for display in the timeline, the cropping system tended to cut out Black faces. An analysis of close to 10,000 image pairs by Twitter’s ethics team has confirmed that there was indeed bias in the saliency algorithm used by the system, favoring white faces over Black faces and women over men, and Twitter has discontinued its use.
Why it matters: While there was anecdotal evidence and small-scale analysis done by researchers in the wild, a more systematic analysis undertaken by Twitter showcasing the same concerns is validation for how community-generated insights can be used to drive AI ethics research. What is also interesting is that an explanation for what was happening is provided in the paper that has been posted on arXiv.
Between the lines: In recent months, as questions have been raised about the efficacy of corporate AI ethics teams, such a study adds a glimmer of hope that useful things can emerge from such endeavours. More so, as a part of our work at the Montreal AI Ethics Institute, we are exploring how a more cooperative approach between civil society and corporations can actually yield better outcomes than the current adversarial state of affairs between the two communities (which has legitimate grounds given the historical interactions between the two groups).
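As a rough illustration of the kind of pairwise audit described above, here is a minimal sketch: show the cropper many images that each contain one face from group A and one from group B, record which face the crop centers on, and compare the observed rate against the 50% expected under demographic parity. The `crop_choice` function is a hypothetical stand-in for the real saliency-based cropper, not Twitter’s actual code.

```python
# Minimal sketch of a pairwise cropping audit in the spirit of Twitter's analysis:
# feed the cropper paired images (one face from group A, one from group B), record
# which face the crop centers on, and compare against the 50/50 split expected under
# demographic parity. `crop_choice` is a hypothetical stand-in for the real cropper.

import random

def crop_choice(image_pair):
    """Hypothetical cropper: returns 'A' or 'B' for which face the crop centers on."""
    return random.choice(["A", "B"])  # stand-in; the real model is not random

def audit(image_pairs):
    chose_a = sum(1 for pair in image_pairs if crop_choice(pair) == "A")
    rate = chose_a / len(image_pairs)
    return rate  # values far from 0.5 indicate systematic favoritism toward one group

pairs = [("face_A", "face_B")] * 10000  # ~10,000 pairs, as in Twitter's analysis
print(f"Group A chosen in {audit(pairs):.1%} of pairs")
```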
Understanding Contextual Facial Expressions Across the Globe (Google AI Blog)
What happened: Facial expressions are said to vary across different parts of the world. The sentiments we try to express might manifest differently on our faces, potentially depending on the culture and part of the world we grew up in. But before this research, there was scant evidence, often with contradictory results, because of the difficulty of amassing and analyzing a large dataset. Google collated a dataset of ~6 million videos across 144 countries and analyzed them for 16 facial expressions.
Why it matters: They found roughly 70% consistency in the facial expressions used to convey different sentiments across the world. They also found that the social context in which an emotion was expressed had the strongest correlation with the expression used; this held across regions, especially for regions that are geographically close together, reflecting the pattern of spread of human culture. To check for biases that might arise in such an analysis, the researchers compared results across demographic groups and cross-checked the video-topic and text-topic analyses for consistency in the findings.
Between the lines: While there are many problems with relying on automated tools to make inferences about expressed emotions and what they might mean, reading the findings of this research paper and applying them carefully, in a context-appropriate fashion, can yield some interesting applications. It also showcases how machine learning can be used to strengthen results from other fields where data collection and analysis are hard and studies were previously limited to small sample sizes.
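For intuition on the kind of cross-region consistency check the study describes, here is a minimal sketch: correlate how strongly each of the 16 expressions is associated with a set of contexts in one region versus another. The data below are random placeholders, not Google’s measurements, and the matrix setup is an assumption for illustration.

```python
# Minimal sketch of a cross-region consistency check: correlate the
# expression-context association patterns of two regions. The data here are
# random placeholders, not Google's measurements.

import numpy as np

rng = np.random.default_rng(0)
n_expressions, n_contexts = 16, 12

# Hypothetical expression-context association matrices for two regions.
region_a = rng.random((n_expressions, n_contexts))
region_b = region_a + 0.1 * rng.standard_normal((n_expressions, n_contexts))

# Flatten and correlate: a value near 1 means the regions associate the same
# expressions with the same contexts; the study reports ~70% consistency overall.
consistency = np.corrcoef(region_a.ravel(), region_b.ravel())[0, 1]
print(f"Cross-region consistency: {consistency:.2f}")
```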
The Future Of Work Now: Ethical AI At Salesforce (Forbes)
What happened: The article offers insights into the Ethical AI practice at Salesforce, with practical lessons on how they have scaled internally and how they have supported external customers. It features Kathy Baxter, the Ethical AI Architect at Salesforce, articulating their strategy, which rests on engaging, advising, and adopting as the key pillars for actualizing ethical AI practice within the organization.
Why it matters: Such details on how companies are actually implementing AI ethics in practice are essential for building trust with customers and the public, especially at a time when moves by tech companies are being heavily scrutinized. As a practitioner, what caught my attention was their use of a model card generator as a way to make model cards practical and real, compared to the largely theoretical construct they were before.
Between the lines: I foresee more companies becoming transparent about their AI ethics methodologies! There is a lot to be learned from each other’s work, especially in the nascent stages of this field. A recent case study published by the World Economic Forum describes Microsoft’s Responsible AI practice in enough detail to serve as a blueprint for other organizations seeking to get started.
From elsewhere on the web:
COVID-19 and Rethinking the role of AI in assisting with the mental health crisis
“We must consider the ethical consequences of the acceleration of the mental health economy and ensure that individuals who most require personalized, face-to-face psychological services are not simply redirected towards mass-manufactured digital tools that lack clinical oversight. Indeed, technology is no quick fix for the long-standing inequities in the provision and delivery of mental services- but it can help shed light on the issues that need to be redressed.”
To delve deeper, read the full article here.
In case you missed it:
The State of AI Ethics Panel (video)
Now that we’re nearly halfway through 2021, what’s next for AI Ethics? Hear from a world-class panel, including:
• Soraj Hongladarom — Professor of Philosophy and Director, Center for Science, Technology and Society at Chulalongkorn University in Bangkok (@Sonamsangbo)
• Dr. Alexa Hagerty — Anthropologist, University of Cambridge’s Centre for the Study of Existential Risk (@anthroptimist)
• Connor Leahy — Leader at EleutherAI (@NPCollapse)
• Stella Biderman — Leader at EleutherAI (@BlancheMinerva)
• Abhishek Gupta — Founder, Montreal AI Ethics Institute (@atg_abhishek)
To delve deeper, watch the full panel discussion.
Take Action:
Events:
The Triangle of Trust in Conversational Ethics and Design: Where Bots, Language and AI Intersect
We’re partnering with Salesforce to host a discussion about conversational ethics and design.
Conversational AI enables people to communicate via text or voice with automated systems like smart speakers, virtual assistants, and chatbots. Leveraging Automatic Speech Recognition (ASR) and Natural Language Processing (NLP), these systems can recognize speech, understand context, remember previous dialogue, access external knowledge, and generate text or speech responses.
However, conversational AI may not work equally well for everyone, and may even cause harm due to known or unknown bias and toxicity. Additionally, generating “personalities” for bots or virtual assistants creates risks of appearing inauthentic, manipulative, or offensive. In this workshop, we will discuss the issues of bias, harm, and trust where bots, language, and AI intersect.
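For a rough sense of the conversational loop described above, here is a minimal, purely illustrative sketch of an agent that receives an utterance, tracks dialogue history, and produces a reply. It is not Salesforce’s system or any production stack; all components are hypothetical placeholders for the ASR/NLU/generation stages a real system would use.

```python
# Minimal, illustrative sketch of the conversational loop described above:
# take a user utterance, remember the dialogue so far, and produce a reply.
# All components are hypothetical placeholders, not a real system.

class ConversationalAgent:
    def __init__(self):
        self.history = []  # remembered previous dialogue turns

    def respond(self, user_utterance: str) -> str:
        self.history.append(("user", user_utterance))
        # In a real system, NLU would interpret intent here and external knowledge
        # might be queried; we return a canned reply purely for illustration.
        reply = f"You said: {user_utterance}. (Dialogue so far: {len(self.history)} turns)"
        self.history.append(("agent", reply))
        return reply

agent = ConversationalAgent()
print(agent.respond("What are your opening hours?"))
```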
📅 June 10th (Thursday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets