AI Ethics Brief #100: DeepMind's Generalist Agent, Technical AI Ethics, AI in recruitment, and more ...
What is the data collection process that powers Spotify's Discover Weekly?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~34-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week is particularly special, because we are proud to present our 100th edition of the AI Ethics Brief!
Thank you for all your support, and we’re excited to continue delivering content to your inbox each week to help you keep up with the fast-changing world of AI Ethics!
This week’s overview:
✍️ What we’re thinking:
Global AI Ethics: Examples, Directory, and a Call to Action
Discover Weekly: How the Music Platform Spotify Collects and Uses Your Data
Masa’s Review of “Ethics of AI‑Enabled Recruiting and Selection: A Review and Research Agenda”
🔬 Research summaries:
A Generalist Agent
2022 AI Index Report – Technical AI Ethics Chapter
A survey on adversarial attacks and defences
Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation
Ethics of AI‑Enabled Recruiting and Selection: A Review and Research Agenda
Episodio 3 – Idoia Salazar: Sobre la Vital Importancia de Educar al Ciudadano en los Usos Responsables de la Inteligencia Artificial
📰 Article summaries:
7 Revealing Ways AIs Fail
The EU AI Act: How to (truly) protect people on the move
📖 Living Dictionary:
A helpful link on Anthropomorphism
🌐 From elsewhere on the web:
Paris Peace Forum – Call for Projects on AI in International Development
💡 ICYMI
Challenges of AI Development in Vietnam: Funding, Talent and Ethics
But first, our call-to-action this week:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.
✍️ What we’re thinking:
Global AI Ethics: Examples, Directory, and a Call to Action
As AI applications become widespread, it is increasingly important to understand and manage their impact on people, society, and the environment. AI ethicists have been working hard toward this goal, producing research, policies, and tools to evaluate and improve the ethical dimensions of AI systems. However, a major gap in AI ethics has emerged: Western countries are massively over-represented. For example, studies have shown that the US and European countries dominate the production of AI ethics guidelines (e.g., this study and this study).
This Western dominance disadvantages those affiliated with other parts of the world and thereby holds back the field of AI ethics as a whole. In this article, I review a few examples of work in AI ethics that center on non-Western issues, highlighting non-Western values, needs, circumstances, and perspectives on AI.
To delve deeper, read the full article here.
Discover Weekly: How the Music Platform Spotify Collects and Uses Your Data
In 2022, we live in an interconnected world where everyone can share the highlights of their life across social media. One of the most commonly shared applications is Spotify, the Swedish audio streaming service that houses a wide range of recorded music and podcasts. Spotify is embedded within social media applications such as Instagram and Twitter to allow for easier sharing, making it accessible to a wide audience. Spotify draws on your personal entertainment tastes to create playlists based on everything from your location to your current mood. Although this technology can seem interesting and harmless, it is important to understand how the algorithm works and how it affects your personal privacy.
To delve deeper, read the full article here.
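Spotify does not publish the internals of Discover Weekly, but the family of techniques it draws on, collaborative filtering over listening histories, is well documented. Below is a minimal sketch of item-based collaborative filtering on a toy play-count matrix; the data and the `recommend` helper are invented for illustration and are not Spotify’s actual pipeline.

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = tracks.
# Entirely synthetic; Spotify's real features and pipeline are not public.
plays = np.array([
    [5, 3, 0, 1],   # user 0
    [4, 0, 0, 1],   # user 1
    [1, 1, 0, 5],   # user 2
    [0, 1, 5, 4],   # user 3
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, plays, top_k=2):
    """Score unheard tracks by their similarity to tracks the user plays."""
    track_vecs = plays.T                      # each row: one track's play profile
    user_plays = plays[user_idx]
    scores = np.zeros(plays.shape[1])
    for t in range(plays.shape[1]):
        if user_plays[t] > 0:
            continue                          # only recommend unheard tracks
        # Weight similarity to each heard track by how much the user played it.
        scores[t] = sum(
            user_plays[h] * cosine_sim(track_vecs[t], track_vecs[h])
            for h in range(plays.shape[1]) if user_plays[h] > 0
        )
    return np.argsort(scores)[::-1][:top_k]

print(recommend(1, plays))  # tracks user 1 has not heard, ranked
```

Every cell in that matrix is behavioral data the listener generated, which is exactly the privacy surface the article asks you to consider.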
Masa’s Review of “Ethics of AI‑Enabled Recruiting and Selection: A Review and Research Agenda”
🔬 Research summaries:
A Generalist Agent
This paper introduces a single neural network capable of performing hundreds of distinct tasks, including chatting, stacking blocks with a real robot arm, and captioning images. The successes and limitations of training a general agent are discussed, as well as the implications for machine safety.
To delve deeper, read the full summary here.
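The paper’s core move is to serialize every modality, text, images, button presses, joint torques, into one flat token stream so a single network can be trained across all tasks. Here is a minimal sketch of that serialization idea; the vocabulary size, bin count, and value range below are illustrative stand-ins, not the paper’s exact choices.

```python
import numpy as np

TEXT_VOCAB = 32_000          # illustrative size, not the paper's exact value
NUM_BINS = 1024              # continuous values are discretized into bins

def tokenize_text(token_ids):
    """Text is already discrete: pass token ids through unchanged."""
    return list(token_ids)

def tokenize_continuous(values, low=-1.0, high=1.0):
    """Discretize continuous inputs (e.g., joint torques) into NUM_BINS buckets,
    offset past the text vocabulary so the id ranges never collide."""
    clipped = np.clip(values, low, high)
    bins = ((clipped - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    return [TEXT_VOCAB + b for b in bins]

# One training example might interleave a caption and robot actions
# into a single sequence consumed by the same network:
sequence = tokenize_text([17, 93, 412]) + tokenize_continuous([0.2, -0.7, 0.95])
print(sequence)  # one flat token stream, whatever the source modality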
2022 AI Index Report – Technical AI Ethics Chapter
The 2022 AI Index report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind. The 2022 edition includes a new chapter on technical AI ethics, highlighting metrics adopted by the research community related to the measurement of fairness and bias in artificial intelligence systems.
To delve deeper, read the full summary here.
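One family of metrics featured in such fairness work, demographic parity, is simple enough to compute by hand. The sketch below uses synthetic predictions and group labels invented for illustration, not data from the report.

```python
import numpy as np

# Synthetic binary predictions and group labels; not data from the AI Index.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'])

def demographic_parity_difference(preds, group):
    """Difference in positive-prediction rates between the two groups.
    Zero means both groups receive positive outcomes at the same rate."""
    rate_a = preds[group == 'a'].mean()
    rate_b = preds[group == 'b'].mean()
    return abs(rate_a - rate_b)

print(demographic_parity_difference(preds, group))  # 0.2 for this toy data
```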
A survey on adversarial attacks and defences
Deep learning systems are vulnerable to adversarial attacks, and as they are deployed more widely, it becomes imperative to evaluate methods for adversarial robustness.
To delve deeper, read the full summary here.
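The canonical attack in this literature, the fast gradient sign method (FGSM), shows why such evaluation matters: a tiny perturbation aligned with the sign of the loss gradient can measurably degrade a model’s confidence. Here is a minimal NumPy sketch on a toy linear classifier; the weights and input are made up, and real attacks target deep networks via automatic differentiation.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w @ x). Weights are invented.
w = np.array([1.5, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.8])   # a "clean" input
y = 1.0                          # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(w, x, y):
    """Gradient of the logistic loss with respect to the *input* x."""
    return (sigmoid(w @ x) - y) * w

# FGSM: step the input by epsilon in the direction of the loss gradient's sign.
epsilon = 0.1
x_adv = x + epsilon * np.sign(loss_grad_wrt_x(w, x, y))

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence drops after the attack
```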
Ethics in the Software Development Process: from Codes of Conduct to Ethical Deliberation
This article disentangles ethical considerations that can be performed at the level of the software engineer from those that belong in the wider domain of business ethics. The handling of ethical problems that fall into the responsibility of the engineer has traditionally been addressed by the publication of Codes of Ethics and Conduct (CoC). We argue that these Codes are barely able to provide normative orientation for ethical decision making (EDM) in software development.
To delve deeper, read the full summary here.
Ethics of AI‑Enabled Recruiting and Selection: A Review and Research Agenda
While companies increasingly deploy artificial intelligence (AI) technologies in their personnel recruiting and selection process, the subject is still an emerging topic in academic literature. As these new technologies significantly impact people’s lives and careers, but also trigger ethical concerns, the ethicality of these AI applications needs to be comprehensively understood. This paper reviews extant literature on AI recruiting and maps the ethical opportunities, risks, and ambiguities, as well as the proposed ways to mitigate ethical risks in practice.
To delve deeper, read the full summary here.
Episodio 3 – Idoia Salazar: Sobre la Vital Importancia de Educar al Ciudadano en los Usos Responsables de la Inteligencia Artificial
This podcast, hosted by Juan Carlos Muñoz, traces the career of Idoia Salazar: co-founder of OdiseIA, an observatory on the ethical impact of artificial intelligence, and professor at the Universidad CEU San Pablo, among other impressive posts. OdiseIA’s ethical outlook is a deliberately realistic one, responding to how companies and people actually behave in the field of AI. From there, the podcast covers what it means to be an AI ethicist, algorithmic bias, and the importance of education, before turning to the current state of AI in Spain. It concludes that what matters most is that technology is not dangerous in itself; it is how we use it that counts.
To delve deeper, read the full summary here.
📰 Article summaries:
7 Revealing Ways AIs Fail
What happened: It comes as no surprise that AI fails. Over the past few years, the AI community has been cataloging these failures and monitoring the risks they pose to society. At the core of many of these issues is how hard it is to predict which problems AI can and cannot solve, partly because we do not understand intelligence very well. The failures discussed in this article range from embedded bias to catastrophic forgetting to a lack of explainability.
Why it matters: Each failure has unique implications. For example, in 2016, Tesla reported its first fatality after a Model S driving on Autopilot collided with a turning truck, killing the driver. In another context, scientists found that a nationally deployed health care algorithm in the United States was racially biased: it assumed that patients with high health care costs were also the sickest and most in need of care. This overlooked systemic racism and the fact that Black patients are less likely to receive health care when they need it, and therefore generate lower costs. These AI failures are not trivial, and they continue to affect people’s lives.
Between the lines: Some of AI’s flaws, such as its inability to accurately and consistently solve math problems, may seem counterintuitive. A closer look, however, shows that neural networks process information in parallel, whereas math problems typically require a long series of sequential steps. We must remember that the way AI processes data is extremely complex and still poorly understood.
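The health care example above is a proxy-label failure: the model predicted cost as a stand-in for medical need. The synthetic sketch below, with all numbers invented for illustration, shows how that substitution bakes unequal access into the predictions even when true sickness is identical across groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: identical underlying sickness across two groups.
group_b = rng.random(n) < 0.5           # True = group with less access to care
sickness = rng.normal(5.0, 1.0, n)      # true need, same distribution for all

# Costs track sickness, but the disadvantaged group accesses less care,
# so it generates systematically lower costs at the same level of need.
access = np.where(group_b, 0.6, 1.0)
cost = sickness * access + rng.normal(0, 0.2, n)

# An algorithm that flags the top 20% by *cost* as "highest need":
flagged = cost > np.quantile(cost, 0.8)
print("flagged, group A:", flagged[~group_b].mean())
print("flagged, group B:", flagged[group_b].mean())
# Group B is flagged far less often despite identical true sickness.
```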
The EU AI Act: How to (truly) protect people on the move
What happened: Currently, the EU AI Act does not address the impacts of AI systems on non-EU citizens and people on the move, such as those fleeing war. This article presents three steps for policymakers to make the AI Act an instrument of protection for people on the move: (1) banning AI systems such as risk assessments and lie detectors; (2) expanding the list of “high-risk” AI systems used in migration; and (3) abolishing Article 83 to protect, rather than surveil, people on the move.
Why it matters: One must peel back the layers to find the true risks involved with the use of AI systems in migration. Civil society organizations and researchers have raised concerns regarding the intrinsic bias of automated-profiling systems, which can reinforce existing forms of oppression and racism. Moreover, there are several AI systems, such as biometric identification systems and border control surveillance, that should be included in the list of “high-risk” AI systems.
Between the lines: An important consideration when discussing these matters is the significance of cultural nuance. For example, an EU Horizon-funded project tested an avatar that analyzed people’s non-verbal micro-gestures and verbal communication to determine a traveler’s intention to deceive. The bias of these technologies created a risk that they would misinterpret cultural signifiers that did not match the data used to train them. AI systems used in the context of migration require an even higher level of inclusivity if they are to truly protect people on the move.
📖 From our Living Dictionary:
A helpful link on Anthropomorphism
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Paris Peace Forum – Call for Projects on AI in International Development
Tomorrow (27 May at 11:59 p.m. Paris Time) is the deadline for the Paris Peace Forum’s 2022 “Call for Solutions.” This year, the Forum seeks policy projects on the theme of “Leveraging AI for Economic Development in the Global South.”
While the notice is short, the application is straightforward. If your organization has a project that fits this theme and would benefit from the support of the Forum’s international network, apply here.
💡 In case you missed it:
Challenges of AI Development in Vietnam: Funding, Talent and Ethics
In 2020, Vietnam overtook Singapore in gross domestic product (GDP) and became the third-largest economy in ASEAN, the Association of Southeast Asian Nations. Immediately after the new national leadership was elected at the Communist Party of Vietnam’s Congress in January 2021, President Nguyen Xuan Phuc signed an important document entitled the National Strategy on R&D and Application of Artificial Intelligence, or the Strategy Document. The 14-page document outlines plans and initiatives for Vietnam to “promote research, development and application of AI, making it an important technology of Vietnam in the Fourth Industrial Revolution.” Vietnam aims to become “a center for innovation, development of AI solutions and applications in ASEAN and over the world” by 2030.
With its ambitious goals, the Strategy Document offers some direction on where Vietnam should go in the next decade. It shows Vietnam following in the footsteps of China and other Asian countries in becoming a techno-developmental state that leverages technological change for economic development. While outlining what 16 ministries and the Vietnam Academy of Science and Technology must do over the next 10 years, the document does not address what other players in Vietnam’s AI economy, such as startup founders, civil society, and the common users who are AI’s beneficiaries, should do. Nor does it mention the role of AI ethics in this development. Without any consideration of important ethical issues such as privacy and surveillance, bias and discrimination, and the role of human judgment, AI development in the country might benefit only a small group of people while possibly bringing harm to others.
In this op-ed we examine three key issues regarding AI development that any country would have to tackle when joining the AI global race: Funding, Talent and Ethics.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.