
AI Ethics Brief #124: Human-AI collaboration, AIES 2023 Insights, robots based on LLMs, CV's impact on human autonomy, and more.
Should AI systems be terrified of humans?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
What are the benefits of adopting a globalized approach to AI governance?
✍️ What we’re thinking:
Deciding who decides: AIES 2023 - Day 1
Hallucinating and moving fast
Moving the needle on the voluntary AI commitments to the White House
Enabling collective intelligence: FOSSY Day 4
Computer Vision’s implications for human autonomy
🤔 One question we’re pondering:
How do you keep pace with the large volume of outputs in the field of Responsible AI?
🔬 Research summaries:
Modeling Content Creator Incentives on Algorithm-Curated Platforms
Towards Algorithmic Fairness in Space-Time: Filling in Black Holes and Detecting Bias in the Presence of Spatial Autocorrelation
Human-AI Collaboration in Decision-Making: Beyond Learning to Defer
📰 Article summaries:
Insight: Race towards 'autonomous' AI agents grips Silicon Valley | Reuters
AI Should Be Terrified of Humans | Time
Aided by A.I. Language Models, Google’s Robots Are Getting Smart | New York Times
📖 Living Dictionary:
What is the relevance of diffusion models to AI ethics?
🌐 From elsewhere on the web:
RESPECT AI: Governance for growth with Abhishek Gupta of Montreal AI Ethics Institute
💡 ICYMI
Designing for Meaningful Human Control in Military Human-Machine Teams
🤝 You can now refer your friends to The AI Ethics Brief!
Thank you for reading The AI Ethics Brief — your support allows us to keep doing this work. If you enjoy The AI Ethics Brief, it would mean the world to us if you invited friends to subscribe and read with us. If you refer friends, you will receive benefits that give you special access to The AI Ethics Brief.
How to participate
1. Share The AI Ethics Brief. When you use the referral link below, or the “Share” button on any post, you'll get credit for any new subscribers. Simply send the link in a text, email, or share it on social media with friends.
2. Earn benefits. When more friends use your referral link to subscribe (free or paid), you’ll receive special benefits.
Get a 3-month comp for 25 referrals
Get a 6-month comp for 75 referrals
Get a 12-month comp for 150 referrals
🤗 Thank you for helping get the word out about The AI Ethics Brief!
🚨 The Responsible AI Bulletin
We’ve restarted our sister publication, The Responsible AI Bulletin, as a fast digest every Sunday for those who want even more content beyond The AI Ethics Brief. (Our lovely power readers 🏋🏽, thank you for writing in and requesting it!)
The focus of the Bulletin is to give you a quick dose of the latest research papers that caught our attention in addition to the ones covered here.
🙋 Ask an AI Ethicist:
Every week, we feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in upcoming editions.
Of late, several papers and organizations have started steering the broader Responsible AI community toward a globalized approach to AI governance. Our readers wrote in asking whether this is necessarily the best approach. What are some of the pros and cons?
Some of the benefits of adopting a globalized approach include:
Setting universal standards, e.g., similar to the UN Declaration of Human Rights, which can be pivotal in preventing disparities in how AI systems are used in different parts of the world.
Enabling interoperability, allowing countries to import technologies from elsewhere when they lack the resources to build a system themselves, e.g., providing advanced medical diagnostics in resource-constrained settings with different kinds of devices and operating standards.
Encouraging accessibility by bringing diverse needs across the world to the forefront, e.g., building systems that are affordable and work for populations with characteristics different from those where the system was developed.
Some of the potential downsides include:
Imposition of norms and practices developed in one part of the world without accounting for the local context where the system is going to be deployed, e.g., differing privacy norms based on culture in different parts of the world.
Challenging enforcement, especially when geopolitical conflicts in other spheres hinder the adoption of an approach seen as promoted by a nation with which one is in conflict on other issues.
High cost of adherence, which can stifle innovation and indigenous development efforts where the resources and expertise to implement the requirements of a globalized approach are lacking.
The two lists above are just brief samples of the arguments for and against a globalized approach; finding the right balance in governing AI systems around the world remains an incredibly nuanced and complex endeavor.
When thinking about the above, what components will be critical to ensuring that regional contexts and cultural norms are not subsumed without consent in a globalized approach? Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Deciding who decides: AIES 2023 - Day 1
Shifting the AI governance paradigm from an oligopoly into something more democratic is how we can all reap the benefits of collective intelligence. We’re here in Montreal this week to learn from the researchers chipping away at each piece of this problem.
To delve deeper, read the full article here.
"Move fast and break things" is broken. But we've all said that many times before. Instead, I believe we need to adopt the "Move fast and fix things" approach. Given the rapid pace of innovation and its distributed nature across many diverse actors in the ecosystem building new capabilities, realistically, it is infeasible to hope to course-correct at the same pace. Because course correction is a much harder and slow-yielding activity, this ends up amplifying the magnitude of the impact of negative consequences.
To delve deeper, read the full article here.
Moving the needle on the voluntary AI commitments to the White House
The recent voluntary commitments secured by the White House from the core developers of advanced AI systems (OpenAI, Microsoft, Anthropic, Inflection, Amazon, Google, and Meta) present an important first step in building and using safe, secure, and trustworthy AI. While it is easy to shrug aside voluntary commitments as "ethics washing," we believe they are a welcome change.
To delve deeper, read the full article here.
Enabling collective intelligence: FOSSY Day 4
Collective intelligence (CI) runs on contribution, but setting up a system to elicit collective intelligence isn’t easy. Many open-source projects are created and maintained by an individual founder, who sometimes claims the title of “Benevolent Dictator for Life.” It’s half a joke, pointing to the inherent tension between a participatory project and the unilateral actions it takes to set one up. Nothing is free: no margin, no mission. But figuring out how to distribute power quickly and effectively is essential to generating CI.
To delve deeper, read the full article series here.
Computer Vision’s implications for human autonomy
The increasing development and use of computer vision applications give rise to various ethical and societal issues. In this blog, I discuss computer vision’s impact on privacy, identity, and human agency, as a part of my column on the ethics of computer vision.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
Keeping up with research in the field of AI is incredibly hard. Even the subfield of Responsible AI has been growing at a very fast clip: new frameworks, ideas, regulatory developments, mishaps, and more emerge almost weekly, if not more frequently. Challenging as it may be, at MAIEI we rely on our team’s ability to serve as a curation filter to find what is worth paying attention to. In addition, our community members routinely share interesting articles and research papers for us to consider. What are some of your favorite sources for staying abreast of the field?
We’d love to hear from you and share your thoughts back with everyone in the next edition:
🔬 Research summaries:
Modeling Content Creator Incentives on Algorithm-Curated Platforms
While content creators on online platforms compete for user attention, their exposure crucially depends on algorithmic choices made by the platform. In this paper, we formalize exposure games, a model of the incentives induced by recommender systems. We prove that seemingly innocuous algorithmic choices in modern recommenders may affect incentivized creator behaviors in significant and unexpected ways. We develop techniques to numerically find equilibria in exposure games and leverage them for pre-deployment audits of recommender systems.
To delve deeper, read the full summary here.
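The summary above doesn’t reproduce the paper’s formal model, but a minimal sketch can convey what “numerically finding equilibria” means here. In the toy below (the game form and all parameters are our illustrative assumptions, not the authors’ formulation), a few creators position content on a unit circle, a softmax recommender splits each user’s attention among them, and each creator repeatedly best-responds until no one wants to move:

```python
import numpy as np

rng = np.random.default_rng(0)
users = rng.normal(size=(50, 2))
users /= np.linalg.norm(users, axis=1, keepdims=True)   # fixed user taste vectors

def exposure(positions, temp=10.0):
    """Total attention each creator receives under a softmax recommender."""
    scores = users @ positions.T                # (n_users, n_creators) affinities
    w = np.exp(temp * scores)
    w /= w.sum(axis=1, keepdims=True)           # each user's attention sums to 1
    return w.sum(axis=0)                        # exposure per creator

def best_response(positions, i, grid=360):
    """Creator i grid-searches the angle that maximizes their own exposure."""
    best_angle, best_val = 0.0, -np.inf
    for a in np.linspace(0, 2 * np.pi, grid, endpoint=False):
        trial = positions.copy()
        trial[i] = [np.cos(a), np.sin(a)]
        val = exposure(trial)[i]
        if val > best_val:
            best_angle, best_val = a, val
    return np.array([np.cos(best_angle), np.sin(best_angle)])

# Iterate best responses until no creator wants to move: an approximate equilibrium.
angles = rng.uniform(0, 2 * np.pi, size=4)
positions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
for _ in range(20):
    new = np.stack([best_response(positions, i) for i in range(len(positions))])
    if np.allclose(new, positions):
        break
    positions = new
print("approximate equilibrium creator positions:\n", positions.round(2))
```

Even in this toy, changing the softmax temperature tends to change where creators settle, echoing the paper’s point that seemingly innocuous algorithmic choices shape creator incentives.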
Towards Algorithmic Fairness in Space-Time: Filling in Black Holes and Detecting Bias in the Presence of Spatial Autocorrelation
Given the recent deluge of research in algorithmic fairness, the lack of attention devoted to fairness problems in spatiotemporal data is surprising. These two papers initiate the systematic study of spatial fairness, motivating the need to develop spatial techniques applicable to real-life situations and proposing a framework for algorithmic bias detection in spatial data.
To delve deeper, read the full summary here.
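The papers’ detection framework isn’t spelled out in the summary, but its core ingredient, spatial autocorrelation, is easy to illustrate. The sketch below computes Moran’s I, a standard spatial autocorrelation statistic, on synthetic model residuals with a deliberately planted regional bias; the grid, weights, and data are illustrative assumptions, not the papers’ method:

```python
import numpy as np

rng = np.random.default_rng(1)
side = 10
# Synthetic residuals on a square grid, with one region systematically
# under-predicted: a toy stand-in for spatially clustered bias.
resid = rng.normal(0, 1, size=(side, side))
resid[:5, :5] += 2.0

def morans_i(x):
    """Moran's I with rook adjacency (4 neighbors, weight 1) on a 2-D grid."""
    n = x.size
    z = (x - x.mean()).ravel()                  # centered values
    idx = np.arange(n).reshape(x.shape)
    num, s0 = 0.0, 0.0
    for ax in (0, 1):                           # vertical and horizontal neighbors
        a = idx.take(np.arange(x.shape[ax] - 1), axis=ax).ravel()
        b = idx.take(np.arange(1, x.shape[ax]), axis=ax).ravel()
        num += 2 * (z[a] * z[b]).sum()          # symmetric weights: w_ij = w_ji = 1
        s0 += 2 * len(a)                        # total weight
    return (n / s0) * num / (z ** 2).sum()

# Near 0 suggests spatial randomness; strongly positive values suggest that
# errors cluster in space, which is what a spatial bias audit would flag.
print(f"Moran's I on clustered residuals: {morans_i(resid):.3f}")
```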
Human-AI Collaboration in Decision-Making: Beyond Learning to Defer
Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between human decision-makers and AI systems. State-of-the-art methods to manage assignments in human-AI collaboration entail infeasible requirements, such as concurrent predictions from every human, and leave key challenges unaddressed, such as human capacity constraints. We aim to identify and review these limitations, pointing to where opportunities for future research in HAIC may lie.
To delve deeper, read the full summary here.
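As a point of reference for what “managing assignments” means, here is a minimal sketch of the simplest baseline the literature builds on: the model decides confident cases and defers the least-confident ones to humans, subject to an explicit human capacity budget (one of the constraints the summary highlights). All numbers and the thresholding rule are illustrative assumptions, not the paper’s proposal:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cases = 1000
human_capacity = 100                       # assumed budget: humans review at most 100 cases

# Hypothetical model outputs: probability of the positive class per case.
p = rng.uniform(0, 1, size=n_cases)
confidence = np.maximum(p, 1 - p)          # confidence of the argmax prediction

# Defer the k least-confident cases, where k respects the capacity budget.
defer_idx = np.argsort(confidence)[:human_capacity]
deferred = np.zeros(n_cases, dtype=bool)
deferred[defer_idx] = True

# The model only decides the cases it keeps; the rest go to the human queue.
model_positive = (p >= 0.5) & ~deferred
print(f"deferred to humans: {deferred.sum()} / {n_cases}")
print(f"model positives among kept cases: {model_positive.sum()}")
print(f"lowest confidence the model keeps: {confidence[~deferred].min():.3f}")
```

Deferral approaches like the one sketched above assume every deferred case reaches an available human; part of the paper’s argument is that realistic capacity and workload constraints make that assumption worth questioning.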
📰 Article summaries:
Insight: Race towards 'autonomous' AI agents grips Silicon Valley | Reuters
What happened: A new generation of AI assistants, known as "agents" or "copilots," is emerging with greater autonomy and capabilities, running on advanced models like GPT-4. These agents are attracting significant investment in Silicon Valley and promise to handle more complex tasks without constant supervision. While they are not yet at the level of science fiction's AI assistants, they can already perform various functions, from ordering food to creating investment strategies.
Why it matters: Developing these advanced AI agents represents a step towards achieving artificial general intelligence (AGI), which can match or surpass human cognitive abilities. Interviews with experts and investors reveal a growing excitement for autonomous agents and their potential for the future. However, concerns about biases, misinformation, and the lack of human oversight raise ethical and safety challenges.
Between the lines: As the AI industry moves forward, there are contrasting opinions on the potential risks and benefits. While some are optimistic about developing sophisticated AI assistants, others highlight the potential dangers, such as harmful actions driven by AI with its own goals. The debate around AI ethics and the need for regulation becomes more critical as technology advances and allegedly moves towards AGI.
AI Should Be Terrified of Humans | Time
What happened: Kateman discusses the need to consider the ethical implications of artificial intelligence (AI) and machine learning as they develop rapidly. He draws a parallel with the mistreatment of animals and the potential risks of not taking AI welfare seriously. The rise of AI is seen as a potential risk to humanity, with concerns about social biases, the possibility of a digital-being uprising, and the need to recognize the potential sentience of AI.
Why it matters: The article emphasizes that the ethical treatment of AI is crucial given the risks it poses to humans and its potential impact on society. There are concerns about encoding biases into AI systems, leading to harmful effects in areas like healthcare and law enforcement. Additionally, as AI advances, there is a growing debate about recognizing AI's potential sentience and treating it ethically, avoiding the mistakes made in the treatment of non-human animals.
Between the lines: Kateman suggests that society must address the ethical challenges surrounding AI and treat AI with the assumption of potential sentience. As AI continues to evolve, there is a risk of repeating the mistakes made in mistreating animals. He calls for greater awareness and consideration of AI welfare to prevent potential harm and suffering caused by human actions toward AI beings.
Aided by A.I. Language Models, Google’s Robots Are Getting Smart | New York Times
What happened: Google has integrated state-of-the-art language models into its robots, enabling them to learn new skills and make logical connections. The project, called RT-2, allows robots to understand and solve problems using language processing capabilities.
Why it matters: This integration of language models with robots represents a significant advancement in robotics, allowing them to become smarter and more capable. The ability to connect semantics with robots is exciting for the field of robotics, and it opens up possibilities for robots to be used in various real-world applications, such as warehouses, medicine, and household assistance.
Between the lines: While the integration shows great promise, there are challenges to address. Using AI language models in robotics introduces risks, given that the models may make mistakes or produce nonsensical answers. Google's researchers are optimistic about the future potential of language-equipped robots in various environments. However, further development and testing are needed to ensure their safe and effective deployment in real-world settings.
📖 From our Living Dictionary:
What is the relevance of diffusion models to AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
RESPECT AI: Governance for growth with Abhishek Gupta of Montreal AI Ethics Institute
The rapid democratization of AI has left many organizations seeking guidance on how to move ahead responsibly. In this article, we talk with Abhishek Gupta, Founder and Principal Researcher at the Montreal AI Ethics Institute, about how organizations can start to realign their governance models to manage risk and unlock growth with responsible AI.
To delve deeper, read the full article here.
💡 In case you missed it:
Designing for Meaningful Human Control in Military Human-Machine Teams
Ethical principles of responsible AI in the military state that moral decision-making must be under meaningful human control. This paper offers a way to operationalize this principle, proposing methods for analysis, design, and evaluation.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.
Currently reading 'Ethics for People Who Work in Tech', 'Robot Souls', and 'Short Introduction to Sociology'.
- I would recommend 'Ethical Machines' for those implementing processes in the workplace,
- 'AI Ethics' for getting an overview,
- 'Privacy is Power' to anyone, as it's great!
- 'Stories from 2045' if you're interested in AI, economics, and UBI.
... sure there's more, but that's what I've got for now :)
The MAIEI briefings and end-of-year reports are super helpful. I love listening to podcasts and watching YouTube videos (which also gives me ideas for my own www.machine-ethics.net). I read some papers but also a lot of books, which I post on my Patreon and Instagram @machineethicspodcast.