AI Ethics Brief #142: OSS AI, fairness uncertainty quantification, impact of ML randomness on group fairness, and more.
What is the most important ethical concern in using AI, and does it differ based on whether you’re primarily in-house or OSS-dependent?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week, we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
What is the value that open-source AI systems provide and what are some key things that we should watch out for in terms of ethical concerns in using those offerings?
✍️ What we’re thinking:
Good futurism, bad futurism: A global tour of augmented collective intelligence
🤔 One question we’re pondering:
What are the characteristics that lead to success in sustainably managing open-source projects?
🪛 AI Ethics Praxis: From Rhetoric to Reality
How should one manage the risks that arise from bringing in externally developed technology?
🔬 Research summaries:
Technological trajectories as an outcome of the structure-agency interplay at the national level: Insights from emerging varieties of AI
On the Impact of Machine Learning Randomness on Group Fairness
Fairness Uncertainty Quantification: How certain are you that the model is fair?
📰 Article summaries:
‘Very scary’: Mark Zuckerberg’s pledge to build advanced AI alarms experts
Dark Corners of the Web Offer a Glimpse at A.I.’s Nefarious Future - The New York Times
Cory Doctorow: What Kind of Bubble is AI? – Locus Online
📖 Living Dictionary:
An example of unintended harms from frontier models
🌐 From elsewhere on the web:
AI and the Afterlife
💡 ICYMI
Democracy, epistemic agency, and AI: Political Epistemology in Times of Artificial Intelligence
🚨 Labor impacts of AI and deepfakes - here’s our quick take on what happened last week.
Labor impacts of AI
At the World Economic Forum (WEF) Davos meeting, the debate on AI and jobs centered on the transformative impact, both positive and negative, that AI is expected to have going forward.
The International Monetary Fund (IMF) noted that almost 40% of global employment could be disrupted by AI, with 60% of jobs in advanced economies being impacted.
However, it was also highlighted that AI could be a tool for productivity, magnifying what humans do and allowing people to do their jobs better. The discussions emphasized the need for a nuanced approach to supporting AI innovation, with a focus on reskilling the workforce as AI-driven analytics reshape various sectors.
There were also calls for global governance of AI and concerns about the potential for AI to exacerbate inequality.
Deepfakes
Recently, explicit deepfake images of pop star Taylor Swift went viral on social media, particularly on the platform X, amassing millions of views before being taken down. The incident has highlighted the growing issue of deepfakes, or "synthetic media" images, which have seen a significant increase in dissemination online.
In response, X blocked searches related to Taylor Swift to curb the spread of these images, although some false images continued to circulate. The incident has sparked renewed calls for stronger legislation against AI misuse, particularly its use for sexual harassment. Furthermore, it has underscored the challenges tech companies face in removing deepfakes from their platforms.
Did we miss anything?
🙋 Ask an AI Ethicist:
Every week, we feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in an upcoming edition.
One of our readers, A.B., wrote in with an important question: what value do open-source AI systems provide, and what are the key ethical concerns to watch out for when using those offerings?
Value of Open-Source AI Systems
Economic and Social Value: This is quantified in terms of both supply-side (cost to recreate the software) and demand-side (value to firms using the software) measures as highlighted in this paper. The demand-side value is particularly notable, indicating substantial savings for companies that would otherwise have to develop similar technologies in-house.
Innovation and Collaboration: OSS facilitates widespread innovation and collaboration. By being freely available, it allows for collective problem-solving and accelerates the pace of technological advancements. This is especially pertinent in AI, where collaborative efforts can lead to significant breakthroughs.
Accessibility and Skill Development: OSS AI systems democratize access to cutting-edge technology. They enable individuals and organizations with limited resources to participate in AI development, fostering a broader skill base and innovation ecosystem.
Ethical Concerns in Using Open-Source AI Systems
Quality and Security Risks: The open-source nature of these systems might lead to variations in quality and potential security vulnerabilities. Given their wide usage, any security flaws can have far-reaching impacts.
Misuse and Ethical Implications: There's a risk of misuse of AI technologies, especially when they are easily accessible through open source. This includes using AI for unethical purposes or in ways that could harm individuals or society, for example, to create deepfakes.
Contribution Inequality: Often there is a significant concentration of contributions from a small number of developers, known as core maintainers. This could lead to a lack of diversity in perspectives and potential biases in the AI systems developed.
Maintenance and Sustainability: Open-source projects often struggle with long-term maintenance and sustainability, particularly when there is a lack of funding and the core maintainers invest their personal time in developing the software. This can lead to outdated or abandoned projects, which might be risky if they are integral to critical systems.
Given the above, what is the most important ethical concern and does it differ based on whether you’re primarily in-house or OSS dependent? Please let us know! Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Good futurism, bad futurism: A global tour of augmented collective intelligence
Collective intelligence (CI) is also called “wisdom of crowds” – it is an emergent ability of a group to “find more or better solutions…than would be found by its members working individually.” There are three critical applications of CI to understanding our future with AI: prediction markets, governance, and open-sourced AI.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
Given all the tremendous value that OSS AI can provide, are there examples of projects that have sustained their core maintainer group over the long term, and what characteristics led to their success?
We’d love to hear from you and share your thoughts with everyone in the next edition:
🪛 AI Ethics Praxis: From Rhetoric to Reality
With the focus this week on the benefits and risks of open-source AI systems, how should organizations that are investing heavily in integrating OSS models manage the risks that arise from bringing in externally developed technology?
We’ve helped a few government and private organizations think through this problem and primarily focused on the following elements as a starting point:
1. Technical Robustness
A. Continuous Monitoring and Validation: The dynamic nature of AI models, especially those that learn continuously, necessitates ongoing monitoring. Techniques like drift detection and real-time model validation become essential (see the sketch after this list for a simple drift check). This also includes the adoption of 'canary models': smaller, more controlled models that run in parallel with the primary system to anticipate potential malfunctions or biases.
B. Explainability and Audit Trails: Given the increasing complexity of AI models, ensuring transparency is paramount. Techniques for model interpretability and the implementation of robust logging and audit trails ensure traceability and accountability, essential for both debugging and regulatory compliance.
2. Ethical Governance
A. Diverse Stakeholder Engagement: Involving a diverse range of stakeholders, including those from non-technical backgrounds, ensures a broader perspective on the potential impacts of AI. This can draw from organizational behavior and design thinking, encouraging a holistic view of AI deployment that considers societal and human impacts.
B. Continuous Education and Training: Given the rapid development of AI, continuous education for those involved in AI development and deployment is crucial. This involves not just technical training but also education in ethics, societal impact, and legal compliance for the staff who will be using this externally-sourced technology.
3. Socio-Technological Resilience
A. Robust Contingency Planning: Drawing from organizational behavior, develop comprehensive contingency plans for potential AI failures or unintended consequences. This involves not only technical fail-safes but also organizational processes for rapid response and mitigation.
B. Internal Engagement and Transparency: Build trust with staff by maintaining transparency about AI operations and decision-making processes. This includes clear communication strategies and internal engagement initiatives to discuss and address societal concerns and expectations regarding AI.
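As a concrete illustration of the drift-detection point under Technical Robustness above, here is a minimal sketch in Python. It is not tied to any particular organization's stack: the scores are synthetic, and the bin count and alert threshold are common rules of thumb rather than values from a standard. It compares a model's baseline (training-time) score distribution against its live scores using the Population Stability Index and flags when the shift warrants investigation.

```python
# Minimal drift-check sketch: Population Stability Index (PSI) between a
# baseline sample of model scores and a live sample. Thresholds and bin
# counts below are common rules of thumb, not values from any standard.
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so out-of-range live scores still land in a bin.
    edges[0] = min(edges[0], observed.min()) - 1e-9
    edges[-1] = max(edges[-1], observed.max()) + 1e-9
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    o_frac = np.histogram(observed, bins=edges)[0] / len(observed)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) / division by zero
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

# Synthetic example: the live distribution has shifted slightly.
rng = np.random.default_rng(7)
baseline_scores = rng.beta(2, 5, 50_000)   # scores logged at validation time
live_scores = rng.beta(2.5, 5, 5_000)      # scores observed in production
value = psi(baseline_scores, live_scores)
print(f"PSI = {value:.3f} -> {'investigate drift' if value > 0.2 else 'stable'}")
```

A check like this would typically run on a schedule against logged production scores, with alerts feeding the audit trail described in point 1.B.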
You can either click the “Leave a comment” button below or send us an email! We’ll feature the best response next week in this section.
🔬 Research summaries:
Technological trajectories as an outcome of the structure-agency interplay at the national level: Insights from emerging varieties of AI
The paper examines the technological trajectories of AI in Canada and China, showing that these are the outcome of the structure-agency interplay at the national level. Tracing the development and diffusion of AI from its inception point, it provides an in-depth account of its different trajectories across the two geographical contexts, highlighting the emergence of varieties of AI at the national level.
To delve deeper, read the full summary here.
On the Impact of Machine Learning Randomness on Group Fairness
Statistical measures of group fairness in machine learning show significant variance across different training runs: simply retraining the model under a different random seed can lead to dramatic changes in the bias the model exhibits. In our research, we delve into the impact of randomness in model training on this variability and emphasize the critical role of data order during training and its influence on group fairness.
To delve deeper, read the full summary here.
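To make the phenomenon concrete, here is a minimal sketch, not the paper's experimental setup: the data, model, and hyperparameters are synthetic and illustrative. The same logistic model is trained several times on the same data, changing only the seed that controls the order in which examples are visited, and the spread of a group-fairness metric (demographic parity difference) across runs is reported.

```python
# Sketch: retrain an identical model while varying only the data-order seed,
# then measure how much a group-fairness metric moves between runs.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                        # sensitive attribute (0/1)
x = rng.normal(size=(n, 5)) + group[:, None] * 0.3   # features mildly correlated with group
w_true = rng.normal(size=5)
y = (x @ w_true + 0.5 * group + rng.normal(size=n) > 0).astype(int)

def train_sgd_logreg(x, y, seed, epochs=5, lr=0.1):
    """Plain SGD logistic regression; the shuffle seed is the only difference between runs."""
    order_rng = np.random.default_rng(seed)
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        for i in order_rng.permutation(len(y)):      # data order is the only source of randomness
            z = np.clip(x[i] @ w + b, -30, 30)
            p = 1.0 / (1.0 + np.exp(-z))
            w -= lr * (p - y[i]) * x[i]
            b -= lr * (p - y[i])
    return w, b

def dp_gap(x, group, w, b):
    """Demographic parity difference between the two groups' positive-prediction rates."""
    pred = (x @ w + b) > 0
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

gaps = [dp_gap(x, group, *train_sgd_logreg(x, y, seed)) for seed in range(20)]
print(f"DP gap across seeds: mean={np.mean(gaps):.3f}, std={np.std(gaps):.3f}, "
      f"min={np.min(gaps):.3f}, max={np.max(gaps):.3f}")
```

Even in this toy setting, the min-to-max range of the gap shows that a single training run can give a misleading picture of a model's group fairness.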
Fairness Uncertainty Quantification: How certain are you that the model is fair?
Designing fair Machine Learning (ML) algorithms has garnered significant attention in the recent past to make standard ML algorithms robust against prejudiced decision-making in sensitive applications like judiciary systems, medical applications, and college admissions. Training a fair ML algorithm requires solving an underlying constrained optimization problem. Stochastic optimization algorithms like Stochastic Gradient Descent (SGD) are routinely used to train such a model. The randomness of the training data distribution manifests through the data stream that SGD is trained on and renders the learned model and its fairness properties random. Even if the algorithm is fair on average, a large variation in the algorithm outputs can have far-reaching consequences in the above applications. So, besides the average fairness level, it is important for a practitioner to know the uncertainty in the fairness level of a fair algorithm. This paper takes a step toward quantifying this uncertainty.
To delve deeper, read the full summary here.
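As a complementary and much simpler illustration of the general idea, one can attach a confidence interval to a fairness metric instead of reporting a single point estimate. The sketch below is not the paper's estimator, which targets uncertainty arising from SGD's training randomness; it only captures test-set sampling uncertainty via the bootstrap, and the predictions and group labels are synthetic placeholders.

```python
# Sketch: bootstrap a confidence interval for a fairness metric on held-out data,
# so "the model is fair" comes with an uncertainty estimate, not just a point value.
import numpy as np

rng = np.random.default_rng(42)
n_test = 1500
group = rng.integers(0, 2, n_test)  # sensitive attribute on the test set
# Stand-in for a trained model's binary predictions, with a small group gap built in.
y_pred = (rng.random(n_test) < np.where(group == 1, 0.55, 0.50)).astype(int)

def dp_gap(y_pred, group):
    """Demographic parity difference: |P(pred=1 | g=1) - P(pred=1 | g=0)|."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

point = dp_gap(y_pred, group)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_test, n_test)            # resample test rows with replacement
    boot.append(dp_gap(y_pred[idx], group[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"DP gap = {point:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

If the interval is wide or straddles a fairness threshold, a practitioner cannot confidently claim the model is fair, which is precisely the kind of question the paper formalizes for training-time randomness.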
📰 Article summaries:
‘Very scary’: Mark Zuckerberg’s pledge to build advanced AI alarms experts
What happened: Mark Zuckerberg, CEO of Meta, has announced plans to develop an Artificial General Intelligence (AGI) system that matches or exceeds human intelligence. He also intends to make this AGI system open and accessible to developers outside Meta. In a Facebook post, Zuckerberg emphasized the need for full general intelligence in the next generation of tech services.
Why it matters: The prospect of open-sourcing AGI has raised concerns among experts, with Professor Wendy Hall, a UN advisory body member on AI, deeming it "really very scary" and criticizing Zuckerberg's approach as irresponsible. The fear revolves around the potential harm such powerful AI systems could cause if not properly regulated. While Zuckerberg did not provide a timeline for AGI development, critics argue that introducing open-source AGI before establishing regulatory frameworks poses significant risks to public safety.
Between the lines: Meta's previous decision to open source Llama 2 faced criticism for providing a template that some likened to building a nuclear bomb. Other prominent AI developers, such as OpenAI and Google DeepMind, are also working on AGI, with varying estimates of its potential realization. While Zuckerberg didn't specify the timeline for AGI, Meta's substantial investment in infrastructure and AI processing chips suggests ongoing efforts in this direction. The announcement adds to the ongoing discourse about responsible development and regulation of powerful AI systems.
Dark Corners of the Web Offer a Glimpse at A.I.’s Nefarious Future - The New York Times
What happened: In October, during a Louisiana parole board meeting discussing the potential release of a convicted murderer, online trolls on 4chan targeted a mental health expert who testified by using AI tools to create manipulated images making her appear naked. This incident was part of a broader trend on 4chan, where AI-powered tools like audio editors and image generators were employed to spread racist and offensive content about individuals appearing before the parole board, as documented by Daniel Siegel, a Columbia University graduate student.
Why it matters: The exploitation of AI tools on fringe platforms like 4chan serves as an early warning for the potential misuse of new technologies to propagate extreme ideas. Fringe sites, especially ones like 4chan, are frequented by tech-savvy individuals who quickly adopt emerging technologies to project their ideologies into mainstream spaces. The use of AI tools for generating fake pornography and manipulating voices raises concerns about online harassment and hate campaigns, prompting the need for regulatory and technological interventions to address these issues.
Between the lines: Meta's strategy of releasing AI software code to researchers, known as "open source," became evident when its language model, Llama, leaked onto 4chan after being distributed to select researchers. This incident demonstrated how technologically savvy users on 4chan could tweak open-source AI tools for various purposes, such as creating chatbots with antisemitic ideas. The misuse of Meta's released code highlighted the challenges of balancing responsibility and openness in developing and distributing AI models, as these tools can be adapted for harmful purposes by online communities.
Cory Doctorow: What Kind of Bubble is AI? – Locus Online
What happened: Cory Doctorow argues that the field of AI is experiencing a bubble akin to classic tech bubbles. He observes a proliferation of AI-related advertisements, business plans, and headlines in tech hubs like San Francisco and mainstream media, emphasizing the widespread use of the term "AI" even in businesses without AI applications. Doctorow draws parallels with historical tech bubbles, noting that the current AI bubble is characterized by massive investor subsidies and a surge in interest, with people engaging in playful communities around AI tools.
Why it matters: This article explores the historical aftermaths of tech bubbles, distinguishing between those that left behind valuable remnants and those that did not. While acknowledging the presence of fraud in the AI space, Doctorow speculates on whether the AI bubble will leave something of value. He highlights the high costs associated with large AI models, questioning whether the potential paying customers for these models will sustain the industry. The AI business model, focused on productivity enhancement through automation, is scrutinized for its potential negative impacts on workers and the quality of products.
Between the lines: This piece reflects on the potential outcomes when the AI bubble eventually bursts. Doctorow anticipates that smaller AI models may persist, driven by enthusiasts who have gained valuable skills during the bubble. However, the sustainability of these models depends on the availability of resources from the larger, more capital-intensive AI models. The discussion extends to promising avenues like federated learning, the potential democratization of AI tools, and the need for policymakers to consider the consequences if the AI bubble does not pop. The article also raises questions about the salvageable aspects and long-term impacts of the current AI enthusiasm.
📖 From our Living Dictionary:
An example of unintended harms from frontier models
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Readers of our newsletter shared this very interesting workshop on AI and the Afterlife being held at CHI 2024 in a few months:
Recent advances in machine learning (particularly the advent of increasingly capable, general, and multimodal generative AI models) raise profound questions that we must grapple with as a society over the coming years. These technologies will pervade, and potentially transform, a wide array of socio-technical practices. There is already widespread discussion of how AI might transform diverse aspects of modern society such as education, employment, entertainment, scientific inquiry, and military strategy. Practices around death and dying are an oft-overlooked area of cultural importance that stands to be profoundly impacted by this changing technological landscape.
AI technologies are likely to impact an array of existing practices (and give rise to a host of novel ones) around end-of-life planning, remembrance, and legacy in ways that will have profound legal, economic, emotional, and religious ramifications. Already, we are beginning to see reports describing individuals’ attempts to use AI to interactively memorialize loved ones, the use of AI to posthumously complete unfinished creative works, and start-up companies professing to offer AI-based digital afterlife services.
At this critical moment of technological change, there is an opportunity for the HCI community to shape the discourse on this important topic, much as HCI scholarship helped shape (and understand) practices regarding digital legacy and social media. We advocate for a value-sensitive and community-centered approach to designing interfaces, interactions, and systems that will empower people to shape their digital legacies.
To delve deeper, read more details here.
💡 In case you missed it:
Democracy, epistemic agency, and AI: Political Epistemology in Times of Artificial Intelligence
Democratic theories assume that citizens must have some form of political knowledge to vote for representatives or to engage directly in democratic deliberation and participation. However, apart from widespread attention to fake news and misinformation, considerably less attention has been paid to how citizens should acquire that political knowledge in contexts shaped by artificial intelligence and related digital technologies. In this article, Mark Coeckelbergh argues, through the lens of political epistemology, that artificial intelligence (AI) threatens democracy, as it risks diminishing citizens’ epistemic agency and thereby undermines the relevant political agency needed in a democracy.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, about which recent research papers caught your attention. We’re looking for papers published in journals or as part of conference proceedings.