AI Ethics Brief #110: Fair and XAI, critiques of hegemonic ML, promises and challenges of causality in ethical ML, and more ...
What are the risks of demographic data collection in the pursuit of fairness?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~38-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
NOTE: When you hit the subscribe button, you may end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Can an AI be sentient? Cultural perspectives on sentience and on the potential ethical implications of the rise of sentient AI.
🔬 Research summaries:
Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML
(Re)Politicizing Digital Well-Being: Beyond User Engagements
An Algorithmic Introduction to Savings Circles
Promises and Challenges of Causality for Ethical Machine Learning
Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection and Use in the Pursuit of Fairness
Fair and explainable machine learning under current legal frameworks
📰 Article summaries:
The Exploited Labor Behind Artificial Intelligence
How to Protect Yourself If Your School Uses Surveillance Tech
Don’t Assume China’s AI Regulations Are Just a Power Play
📖 Living Dictionary:
What does market fundamentalism have to do with AI ethics?
🌐 From elsewhere on the web:
How efficient code increases sustainability in the enterprise
2022 Innovations Dialogue: AI Disruption, Peace, and Security
2022 Tech Ethics Symposium: How Can Algorithms Be Ethical? Finding Solutions through Dialogue
💡 ICYMI
Regional Differences in Information Privacy Concerns After the Facebook-Cambridge Analytica Data Scandal
But first, our call-to-action this week:
Now I'm Seen: An AI Ethics Discussion Across the Globe
We are hosting a panel discussion to amplify approaches to AI Ethics in the Nepalese, Vietnamese, and Latin American contexts.
This discussion aims to highlight the enriching perspectives these contexts offer on common problems in AI Ethics. The event will be moderated by Connor Wright (our Partnerships Manager), who will guide the conversation to best engage with the different viewpoints on offer.
This event will be online via Zoom. The Zoom link will be sent 2-3 days prior to the event.
✍️ What we’re thinking:
Can an AI be sentient? Cultural perspectives on sentience and on the potential ethical implications of the rise of sentient AI.
Given the recent events surrounding LaMDA, Connor Wright argues that LaMDA is not sentient, drawing on teachings from the philosophy of ubuntu. He then analyses the ethical implications of sentient AI before offering his concluding thoughts.
To delve deeper, read the full article here.
🔬 Research summaries:
Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML
This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized.
To delve deeper, read the full summary here.
(Re)Politicizing Digital Well-Being: Beyond User Engagements
Concerns surrounding digital well-being have grown in recent years. To date, the issue has largely been studied in terms of individual user engagement and a vague notion of ‘time well spent’. Our paper argues that this is empirically and ideologically insufficient. We instead show how digital well-being ought to be recognized as a culturally relative notion, reflective of uneven societal pressures felt by situated individuals in online spheres. Our paper highlights the limits of user engagement metrics as a singular proxy for user well-being and suggests new ways for practitioners to attend to digital well-being based on its structural dimensions. Overall, we hope to reinvigorate the issue of digital well-being as a nexus point of political concern, through which multiple disciplines can study experiences of digital distress as symptomatic of wider social inequalities and sociotechnical relations of power.
To delve deeper, read the full summary here.
An Algorithmic Introduction to Savings Circles
Rotating savings and credit associations (roscas) are informal financial organizations common in contexts where communities have little access to formal financial services. This paper builds on tools from algorithmic game theory to show that roscas are effective at allocating financial resources; this property helps explain their prevalence.
To delve deeper, read the full summary here.
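For a flavour of the underlying arithmetic (a toy sketch of our own, not the paper's game-theoretic model): with n members each contributing the same amount per round and one member taking the whole pot each round, the average member reaches a lump-sum goal in roughly half the time it would take saving alone.

```python
# Toy illustration (not the paper's model): average waiting time to receive
# a lump sum of n * c in a rotating savings circle versus saving alone.

def rounds_saving_alone(n_members: int) -> float:
    # Saving c per round alone, reaching a goal of n * c takes n rounds.
    return float(n_members)

def average_rounds_in_rosca(n_members: int) -> float:
    # The member who receives the pot in round t waits t rounds, so the
    # average wait across members is (1 + 2 + ... + n) / n = (n + 1) / 2.
    return (n_members + 1) / 2

if __name__ == "__main__":
    n = 10
    print(f"Saving alone: {rounds_saving_alone(n):.1f} rounds to reach the goal")
    print(f"In a rosca:   {average_rounds_in_rosca(n):.1f} rounds on average")
```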
Promises and Challenges of Causality for Ethical Machine Learning
This paper investigates the practical and epistemological challenges of applying causality to fairness evaluation. It focuses on two key aspects, the nature and the timing of interventions on sensitive attributes such as race and gender, and discusses how this specification affects the soundness of the causal analyses. It further illustrates how a proper application of causality can address the limitations of existing fairness metrics, including those that depend on statistical correlations.
To delve deeper, read the full summary here.
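As a concrete point of reference for the "statistical correlation" style of fairness metric mentioned above (our own illustrative sketch, not code from the paper), demographic parity difference simply compares positive-decision rates across groups; it says nothing about why those rates differ, which is the gap that causal analyses of interventions on sensitive attributes aim to address. The data and group labels below are hypothetical.

```python
# Illustrative sketch of a purely observational, correlation-based fairness
# metric: demographic parity difference between two (hypothetical) groups.
from typing import Sequence

def demographic_parity_difference(decisions: Sequence[int],
                                  groups: Sequence[str],
                                  group_a: str,
                                  group_b: str) -> float:
    """Difference in positive-decision rates between group_a and group_b."""
    def positive_rate(group: str) -> float:
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return positive_rate(group_a) - positive_rate(group_b)

if __name__ == "__main__":
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]              # 1 = favourable decision
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_difference(decisions, groups, "a", "b"))  # 0.5
```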
Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection and Use in the Pursuit of Fairness
Most current algorithmic fairness techniques require access to demographic data (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. These demographic-based algorithmic fairness techniques seek to overcome discrimination and social inequality with novel metrics that operationalize notions of fairness and by collecting the requisite data, often removing broader questions of governance and politics from the equation. In this paper, we argue that collecting more data in support of fairness is not always the answer and can actually exacerbate or introduce harm for marginalized individuals and groups, and we discuss two paths forward that can mitigate these risks.
To delve deeper, read the full summary here.
Fair and explainable machine learning under current legal frameworks
Powerful machine learning models can automate decisions in critical areas of human life, such as criminal pre-trial detention and hiring. These models are often trained on large datasets of historical decisions. However, past discriminatory human behavior may have tainted these decisions and datasets with discrimination. It is therefore imperative to ask: how can we ensure that models trained on such datasets do not discriminate against a particular race, gender, or other protected group, in accordance with current legal frameworks? We provide an answer based on our research publication, which was recently accepted to ACM FAccT, a premier conference on fairness, accountability, and transparency.
To delve deeper, read the full summary here.
📰 Article summaries:
The Exploited Labor Behind Artificial Intelligence
What happened: When discussing AI, the millions of underpaid workers worldwide performing repetitive tasks under dangerous labor conditions are often overlooked. While “AI researchers” at Silicon Valley corporations are paid six-figure salaries, the exploitation of these impoverished workers is not central to the discourse surrounding the ethical development and deployment of AI.
Why it matters: Companies hire people from poor and underserved communities, such as refugees and incarcerated people, often hiring them through third-party firms as contractors. Besides experiencing a traumatic work environment with insufficient mental health support, these workers are monitored and punished if they deviate from their prescribed repetitive tasks.
Between the lines: While more employers should hire from vulnerable groups, it is wrong to do so in a predatory manner, with no protections. Researchers in ethical AI have mostly focused on “debiasing” data and fostering transparency and model fairness. Still, the authors of this article argue that stopping the exploitation of labor in the AI industry should be at the heart of such initiatives.
How to Protect Yourself If Your School Uses Surveillance Tech | WIRED
What happened: Thousands of school districts use monitoring software to track students’ online searches and scan their emails. In the US, the Children’s Internet Protection Act requires schools to have web filtering in place to prevent students from accessing harmful material online, but it does not require them to implement advanced monitoring technologies and software.
Why it matters: Technology companies claim to be able to protect students and prevent violence, yet monitoring software has been used to reveal students' sexuality without their consent. It should also be noted that low-income, Black, and Hispanic students are disproportionately exposed to surveillance and discipline. This article advises assuming that everything you do is being scanned and logged by monitoring software.
Between the lines: “Now is a critical moment for students, parents, and communities to help shape the future of safety in schools.” Congress passed a law that directs $300 million to schools to strengthen security infrastructure, so it is critical for each community to identify what safety means to it.
Don’t Assume China’s AI Regulations Are Just a Power Play - Lawfare
What happened: In March 2022, new regulations took effect in China requiring companies that deploy recommendation algorithms to file details about those algorithms with the Cyberspace Administration of China (CAC). In August 2022, the CAC published summaries of 30 recommendation algorithms used by some of China’s largest tech companies. The release prompted flawed commentary on China’s unprecedented attempt to regulate certain types of AI, most of which framed the goals of the regulations in radical terms.
Why it matters: Much of this commentary, which reflects a widespread assumption that the primary function of the new regulations is to provide a government guise for collecting detailed technical information from tech companies, mischaracterizes the true impact of the regulations. An alternative line of commentary assumes that the goal of the new regulations is to enable direct government management of the algorithms themselves. However, this article points out that overreliance on these two lines of commentary misses the broader purposes that laws can serve.
Between the lines: “Law often plays an expressive function by communicating information and by making certain forms of coordination easier.” The CAC’s regulations on recommendation systems communicate information about the central government’s priorities, which can shift company behavior without any meaningful enforcement actions. Moreover, the breadth of the regulations may also signal that the Chinese government is attempting to create a sizeable range of discretion for itself.
📖 From our Living Dictionary:
What does market fundamentalism have to do with AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
How efficient code increases sustainability in the enterprise
Our founder, Abhishek Gupta, shared his thoughts on building greener software systems through better measurement and through applying the ideas of carbon efficiency, hardware efficiency, and carbon awareness, which are encapsulated in the work he leads at the Green Software Foundation on the Software Carbon Intensity specification.
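For readers unfamiliar with the Software Carbon Intensity specification, its core formula is SCI = ((E × I) + M) per R, where E is energy consumed, I is the carbon intensity of that energy, M is embodied emissions, and R is the functional unit. A minimal sketch, with purely hypothetical input values:

```python
# Minimal sketch of the Software Carbon Intensity formula:
#   SCI = ((E * I) + M) per R
# E: energy consumed (kWh), I: carbon intensity of that energy (gCO2e/kWh),
# M: embodied emissions (gCO2e), R: functional unit (e.g. requests served).
# All values below are hypothetical, for illustration only.

def software_carbon_intensity(energy_kwh: float,
                              grid_intensity_gco2e_per_kwh: float,
                              embodied_emissions_gco2e: float,
                              functional_units: float) -> float:
    """Return grams of CO2-equivalent per functional unit."""
    operational_emissions = energy_kwh * grid_intensity_gco2e_per_kwh
    return (operational_emissions + embodied_emissions_gco2e) / functional_units

if __name__ == "__main__":
    # 120 kWh consumed on a 400 gCO2e/kWh grid, 50,000 gCO2e embodied,
    # spread over 1,000,000 requests served in the same period.
    print(software_carbon_intensity(120, 400, 50_000, 1_000_000))  # ~0.098
```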
2022 Innovations Dialogue: AI Disruption, Peace, and Security
Our founder, Abhishek Gupta, will be speaking at this event hosted by UNIDIR on “What even is, AI? – The State of Play and the Future of AI”. This session will provide a foundational understanding of the concept of AI, the state of play of AI technologies, and their most important functionalities. It will also reflect on some of the current obstacles to and opportunities for advancement, and where AI is headed in the future.
2022 Tech Ethics Symposium: How Can Algorithms Be Ethical? Finding Solutions through Dialogue
Our founder, Abhishek Gupta, will be speaking at this event hosted by Duquesne University on “How Can Social Institutions Work Toward Ethical Outcomes in Tech? The Needs and Hopes for Better Tech Governance, Policy, and Transparency”.
💡 In case you missed it:
Regional Differences in Information Privacy Concerns After the Facebook-Cambridge Analytica Data Scandal
While there is increasing global attention to data privacy, most of our understanding is based on research conducted in a few countries in North America and Europe. This paper proposes an approach to studying data privacy across a larger geographical scope. By analyzing Twitter content about the #CambridgeAnalytica scandal, we observe language and regional differences in privacy concerns that hint at a need to extend current information privacy frameworks.
To delve deeper, read the full summary here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as part of conference proceedings.