AI Ethics Brief #101: Clueless AI, Russia's AI Strategy, Race & AI, and more ...
How do we evaluate methodologies to increase AI transparency?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~29-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
As we head into the next “century” of AI Ethics Briefs (we published edition #100 last week), please share your feedback on how we can keep bringing you value. Thanks!
This week’s overview:
✍️ What we’re thinking:
Real talk: What is Responsible AI?
🔬 Research summaries:
Clueless AI: Should AI Models Report to Us When They Are Clueless?
Evaluating a Methodology for Increasing AI Transparency: A Case Study
Not Quite ‘Ask a Librarian’: AI on the Nature, Value, and Future of LIS
Eticas Foundation externally audits VioGén: Spain’s algorithm designed to protect victims of gender violence
Russia’s Artificial Intelligence Strategy: The Role of State-Owned Firms
Race and AI: the Diversity Dilemma
📰 Article summaries:
Intel calls its AI that detects student emotions a teaching tool. Others call it 'morally reprehensible.'
Shared Responsibility: Enacting Military AI Ethics in U.S. Coalitions
We're Publishing the Facebook Papers. Here's What They Say About the Ranking Algorithms That Control Your News Feed.
📖 Living Dictionary:
What is algorithmic pricing?
💡 ICYMI
Online public discourse on artificial intelligence and ethics in China: context, content, and implications
But first, our call-to-action this week:
We’d love to hear from you, our readers, about which recent research papers caught your attention. We’re looking for papers that have been published in journals or as part of conference proceedings.
✍️ What we’re thinking:
Real talk: What is Responsible AI?
It is now clear that two forces will largely define our future: the advancement of intelligent technologies and society’s response to that advancement. Many critical decisions in business, economic, and social domains rely on AI, so it is essential to ensure ethical frameworks are adhered to. Nevertheless, recent breakthroughs have led many to question the direction of the AI revolution. Over the past few years, many algorithms embedding historic prejudice have contributed to the perpetuation of bias and inequality – primarily affecting those underrepresented in our society.
To delve deeper, read the full article here.
🔬 Research summaries:
Clueless AI: Should AI Models Report to Us When They Are Clueless?
AI models can extrapolate in socially significant ways, outside the range of data they were trained on, often without our knowing. This paper draws on several of our studies, arguing that models should report when and in which ways they extrapolate. Our studies suggest that policymakers should incorporate this requirement into AI regulations. In the absence of such regulations, civil society and individuals may consider using the legal system to inquire about extrapolations performed by automated systems.
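The paper’s notion of extrapolation is richer than a simple range check, but as a minimal, hypothetical sketch (ours, not the authors’), a tabular model could flag inputs that fall outside the per-feature range of its training data:

```python
import numpy as np

def fit_feature_ranges(X_train: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Record the per-feature min/max observed during training."""
    return X_train.min(axis=0), X_train.max(axis=0)

def report_extrapolation(x: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> list[int]:
    """Return indices of features where the input falls outside the
    training range, i.e., where the model would be extrapolating."""
    return [i for i, (v, l, h) in enumerate(zip(x, lo, hi)) if v < l or v > h]

# Hypothetical usage: two-feature training data, one out-of-range input.
X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
lo, hi = fit_feature_ranges(X_train)
flags = report_extrapolation(np.array([2.5, 45.0]), lo, hi)
if flags:
    print(f"Warning: extrapolating on feature(s) {flags}")  # flags feature 1
```

Even this crude check makes the “when and in which ways” of an extrapolation reportable; a production system would need a more principled out-of-distribution measure.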
To delve deeper, read the full summary here.
Evaluating a Methodology for Increasing AI Transparency: A Case Study
This paper discusses the efficacy of the AI FactSheets methodology, a user-centered technique for identifying, gathering, and presenting AI documentation. We report on a development team that used the methodology in their AI documentation process over a three-month period. We found that the methodology was readily incorporated and resulted in high-quality documentation that met the needs of several different roles.
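To give a feel for what such documentation might contain, here is our own simplified sketch of a FactSheet as a data structure; the fields are illustrative assumptions, not the template from the paper, which derives its content per use case through the user-centered process described above:

```python
from dataclasses import dataclass, field

@dataclass
class FactSheet:
    """Illustrative fields only; real FactSheets are tailored to each model."""
    model_name: str
    purpose: str                  # what the model is for
    intended_users: str           # who should (and shouldn't) rely on it
    training_data: str            # provenance of the training data
    performance_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

# A hypothetical entry a development team might produce.
sheet = FactSheet(
    model_name="loan-risk-v2",
    purpose="Estimate default risk for consumer loan applications",
    intended_users="Credit analysts; not for fully automated decisions",
    training_data="2015-2020 anonymized loan outcomes from a single region",
    performance_metrics={"AUC": 0.81},
    known_limitations=["Not validated outside the training region"],
)
```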
To delve deeper, read the full summary here.
Not Quite ‘Ask a Librarian’: AI on the Nature, Value, and Future of LIS
AI powered by large language models (LLMs) is increasingly used for analytical tasks, but can it be used to generate scientific insight or forecast industry trends? This paper examined GPT-3’s responses to difficult questions about the nature, value, and future of libraries and of library and information science, and found that the responses are of limited usefulness, contain misinformation, and can be problematic.
To delve deeper, read the full summary here.
Eticas Foundation externally audits VioGén: Spain’s algorithm designed to protect victims of gender violence
VioGén is an algorithm that determines the level of risk faced by a victim of gender-based violence in Spain and establishes her protection measures. It is the largest risk assessment system in the world, with more than 3 million registered cases.
To delve deeper, read the full summary here.
Russia’s Artificial Intelligence Strategy: The Role of State-Owned Firms
In recent years, many states have developed AI strategies to guide the research, development, and deployment of AI and AI-related technologies. This paper examines Russia’s AI strategy and its unique structure, in particular the role of state-owned enterprises in developing and leading the strategy.
To delve deeper, read the full summary here.
Race and AI: the Diversity Dilemma
Are white and non-white depictions of AI presented as equals? The authors argue not. While greater diversity helps dissipate this effect, the problem cannot be solved by diversity alone.
To delve deeper, read the full summary here.
📰 Article summaries:
Intel calls its AI that detects student emotions a teaching tool. Others call it 'morally reprehensible.'
What happened: Intel and Classroom Technologies, a company that sells virtual school software, have partnered to integrate AI-based technology that can detect whether a student is bored, distracted, or confused based on their facial expressions and interactions with educational content. It should be noted that, at this stage, Intel’s partnership to test the technology in the Class software is a research proof of concept.
Why it matters: Although some teachers may find value in emotion AI technology, it has drawn controversy. Several researchers have challenged single-label categorization of emotion because people express themselves through hundreds of complex facial expressions, body gestures, and physiological signals. Moreover, how emotions such as anger or surprise are communicated varies across cultures and situations. Alarms have also been raised that this technology could enable excessive student surveillance and privacy invasions, issues that have been on the rise since the pandemic began.
Between the lines: Intel’s technology would require cameras to be turned on in order to capture students’ facial expressions. This points to another set of challenges surrounding “cameras as a social-justice issue.” Some students may not want others seeing where they live. In addition, there may be accessibility barriers since cameras drain power and also use up a significant amount of bandwidth, which some students may not be able to afford. If this technology is productized, will it turn into a surveillance system that penalizes students in the virtual classroom and exacerbates socio-technical issues?
Shared Responsibility: Enacting Military AI Ethics in U.S. Coalitions
What happened: The Department of Defense (DoD) is working to establish that the U.S. military can deter and fight AI-infused armed conflicts as part of future coalitions using ethical AI. This article outlines three concrete steps to ensure that the technology of the U.S. defense enterprise and its partners aligns with the structures of coalitions that use AI-enabled weapons: (1) foundations of AI responsibility in U.S. alliances and partnerships, (2) diversity of perspectives, and (3) a Responsible AI Coalition.
Why it matters: Firstly, the Department must work to avoid the misuse or failure of AI-enabled weapons in future coalition operations. The power lies with commanders and political leaders who determine whether the employment of any weapons system in armed conflict is “ethical.” Another consideration for U.S. leaders to keep in mind for future coalition operations involving AI is that their allies may not be “reading from an identical political or ethical playbook.” Lastly, the DoD may benefit from establishing a common language amongst policymakers, commanders, and technical experts in future coalitions to facilitate communication about AI systems on the battlefield.
Between the lines: Human judgment is becoming increasingly important as AI becomes a key consideration in war. Recently, the focus has been on creating broad principles for AI development and targeting the technical enablers of multinational uses of AI. However, more needs to be done, because the ethics of military AI depends on the choices leaders make about the use of AI-enabled weapons. The United States and its allies will need to focus on more than broad ethical principles and technical solutions.
We're Publishing the Facebook Papers. Here's What They Say About the Ranking Algorithms That Control Your News Feed.
What happened: The Facebook Papers are a collection of documents that offers unprecedented insight into the most powerful social media company in the world. These records were first provided to Congress last year by whistleblower Frances Haugen and are now being made available to the public. The batch of files included in this article puts a spotlight on Facebook’s ranking system: how the company prioritizes what content users see in their News Feed (a simplified sketch follows this summary).
Why it matters: One employee wrote that the evidence favoring ranking is “extensive, and nearly universal,” because usage and engagement increase. However, as the “modified feeds boost consumption,” the quality of the user’s experience appears to decline. These papers also raise several important questions: How does Facebook weigh maximizing profit against encouraging behaviors that the algorithms’ designers knew weren’t healthy? What constitutes a meaningful social interaction?
Between the lines: Facebook began as a platform for sharing information with friends and family. As its features and algorithms developed over time, ultimately leading to rapid user growth, the company has found ways to keep people engaged. As it stands, Facebook’s future depends on its ability to rank content. But this complex system encourages the sharing of fewer meaningful posts while creating a perfect environment for misinformation.
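As a rough illustration of what “ranking” means here, consider this simplified sketch of an engagement-weighted feed ranker. The interaction types and weights are made up for illustration; they are not Facebook’s actual signals or values:

```python
# Hypothetical weights: active interactions valued far more than passive views.
WEIGHTS = {"like": 1.0, "comment": 15.0, "reshare": 30.0, "view": 0.1}

def score(post: dict) -> float:
    """Weighted sum of predicted interaction probabilities for one post."""
    return sum(WEIGHTS[k] * post.get(k, 0.0) for k in WEIGHTS)

candidates = [
    {"id": "a", "like": 0.30, "comment": 0.02, "view": 0.90},
    {"id": "b", "like": 0.05, "comment": 0.10, "reshare": 0.04, "view": 0.95},
]
feed = sorted(candidates, key=score, reverse=True)  # post "b" ranks first
```

Under weights like these, a post likely to provoke comments and reshares outranks one that is merely viewed: engagement rises even if the experience worsens, which is the dynamic the papers describe.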
📖 From our Living Dictionary:
What is algorithmic pricing?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
💡 In case you missed it:
Online public discourse on artificial intelligence and ethics in China: context, content, and implications
The societal and ethical implications of artificial intelligence (AI) have sparked vibrant online discussions in China. This paper analyzed a large sample of these discussions, which offer a valuable source for understanding the future trajectory of AI development in China, as well as implications for the global dialogue on AI governance.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, about which recent research papers caught your attention. We’re looking for papers that have been published in journals or as part of conference proceedings.