AI Ethics Brief #104: Moral dilemmas for moral machines, community norms for foundation models, limited Brussels effect of EU AI Act, and more ...
Who funds misinformation?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~24-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on a Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
🔬 Research summaries:
The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition
Embedded ethics: a proposal for integrating ethics into the development of medical AI
Who Funds Misinformation? A Systematic Analysis of the Ad-related Profit Routines of Fake News sites
Zoom Out and Observe: News Environment Perception for Fake News Detection
Moral Dilemmas for Moral Machines
Explainable artificial intelligence (XAI) post-hoc explainability methods: risks and limitations in non-discrimination law
📰 Article summaries:
The Time Is Now to Develop Community Norms for the Release of Foundation Models
The EU AI Act Will Have Global Impact, but a Limited Brussels Effect
📖 Living Dictionary:
What is the relevance of supervised learning to AI ethics?
🌐 From elsewhere on the web:
GlobalPolicy.AI: promoting the development and implementation of trustworthy artificial intelligence (AI)
💡 ICYMI
Moral consideration of nonhumans in the ethics of artificial intelligence
But first, our call-to-action this week:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.
🔬 Research summaries:
The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition
Current ethical guidelines do not consider the effect that gender bias in AI has on women’s self-worth. Hence, exploring AI systems systemically as well as systematically proves crucial to exposing this overlooked harm.
To delve deeper, read the full summary here.
Embedded ethics: a proposal for integrating ethics into the development of medical AI
Though AI ethics frameworks are plentiful, applicable ethics guidance for AI developers remains scarce. To translate high-level ethics guidelines into practice, the authors of this paper argue that ethics ought to be embedded into every stage of the AI lifecycle.
To delve deeper, read the full summary here.
Who Funds Misinformation? A Systematic Analysis of the Ad-related Profit Routines of Fake News sites
Fake news is an age-old phenomenon, widely assumed to be associated with political propaganda published to sway public opinion. Despite the many studies performed and countermeasures deployed by researchers and stakeholders, unreliable news sites have increased their share of engagement among the top-performing news sources in recent years. In this study, we shed light on the revenue flows of fake news sites by investigating who supports and maintains their existence.
To delve deeper, read the full summary here.
Zoom Out and Observe: News Environment Perception for Fake News Detection
Recent years have witnessed a surge of fake news detection methods that focus on either post content or social context. In this paper, we provide a new perspective: observing fake news in its “news environment”. With the news environment as a benchmark, we evaluate two factors much valued by fake news creators, the popularity and the novelty of a post, to boost detection performance.
To delve deeper, read the full summary here.
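To make the “zoom out” idea a bit more concrete, here is a minimal, hypothetical sketch of how a post could be compared against its recent news environment to derive popularity- and novelty-style signals for a downstream classifier. This is not the paper’s implementation: the TF-IDF representation, the mean/max similarity definitions, and all function and variable names are illustrative assumptions.

```python
# Hypothetical sketch only: the TF-IDF representation, the mean/max similarity
# definitions of "popularity" and "novelty", and all names here are
# illustrative assumptions, not the paper's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def environment_features(post: str, recent_news: list[str]) -> dict:
    """Compare a post against recently published news articles."""
    vectorizer = TfidfVectorizer()
    # Fit on the news environment plus the post so they share one vocabulary.
    matrix = vectorizer.fit_transform(recent_news + [post])
    n = len(recent_news)
    news_vecs, post_vec = matrix[:n], matrix[n]
    sims = cosine_similarity(post_vec, news_vecs).ravel()
    return {
        # "Popularity" proxy: how close the post sits to what the mainstream
        # news environment is currently talking about.
        "popularity": float(sims.mean()),
        # "Novelty" proxy: how far the post strays from its closest match.
        "novelty": 1.0 - float(sims.max()),
    }


# Example usage: such features would be concatenated with content-based
# features and fed to an existing fake news classifier as an extra signal.
print(environment_features(
    "Celebrity X secretly cured a new virus overnight",
    ["Health officials report progress on vaccine trials",
     "Markets rally as new economic data is released"],
))
```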
Moral Dilemmas for Moral Machines
Researchers focusing on implementable machine ethics have used moral dilemmas to benchmark AI systems’ ethical decision-making abilities. But philosophical thought experiments are designed to pump human intuitions about moral dilemmas rather than to serve as a validation mechanism for determining whether an algorithm ‘is’ moral. This paper argues that this misapplication of moral thought experiments can have potentially catastrophic consequences.
To delve deeper, read the full summary here.
Explainable artificial intelligence (XAI) post-hoc explainability methods: risks and limitations in non-discrimination law
The paper argues that post-hoc explanatory methods are useful in many cases, but that their limitations prevent relying on them as the sole mechanism to guarantee the fairness of model outcomes in high-stakes decision-making. As an ancillary point, the paper also demonstrates the inadequacy of European non-discrimination law for algorithmic decision-making.
To delve deeper, read the full summary here.
📰 Article summaries:
The Time Is Now to Develop Community Norms for the Release of Foundation Models
What happened: Foundation models pose significant ethical risks, and this work from Stanford outlines what a responsible release process should look like. Release here means giving external researchers access to data, code, and models; deployments to end users in products and releases for user feedback are out of scope. At the moment, releases are managed in three ways: staged release (OpenAI), open release (EleutherAI, Meta), and limited API access (Microsoft). This work advocates that instead of each actor working on foundation models rediscovering harms and pitfalls individually, the ecosystem will benefit from sharing best practices and building community norms in an open fashion. The framework the authors propose accounts for the what, when, to whom, and how of the release process, focusing more on “how” to decide on a release process than on “what” to release, which can vary based on context and risk.
Why it matters: With more generalized capabilities and potential for unintended uses, especially in combination with other models and capabilities out there, a collective effort towards figuring out a responsible release approach is essential. The idea of a model review board that provides feedback on the release strategy and recommends best practices is an interesting addition to the function of regular IRBs, and the article provides insights into desirable compositions of such a board with a view to helping make contextual decisions. There is often vagueness about what should be included in a research proposal to such a board and how the process should be carried out; the article provides some clarity on the tradeoffs that different approaches create in terms of accountability, transparency, and the risks posed by the review process and its own transparency (or lack thereof).
Between the lines: As we continue to see more investments in the AI ethics domain, it will be equally important to make parallel, operational investments in the process and governance approaches that undergird those initiatives if we’re to bridge the principles-to-practice gap that exists today in the operationalization of AI ethics, across both academia and industry. While some may quibble with the phrase “foundation models,” the advice provided in this article serves as a great starting point for the community to figure out norms and best practices that will have a long-run impact on the research and production landscape involving these capabilities.
The EU AI Act Will Have Global Impact, but a Limited Brussels Effect
What happened: The Brussels effect refers to the global impact that policies developed in the EU tend to have through market mechanisms. In this detailed article, the authors argue that the EU AIA will have a global impact by reshaping some norms and approaches, but a more limited Brussels effect than the GDPR produced. The article captures both the de facto (through practice) and de jure (through law) effects that will emerge, varying with the use cases and the inter-governmental, -agency, and -body interactions as AI systems are brought to market. Global implications will come from three categories of AI requirements in the AIA: high-risk AI in products, high-risk AI in human services, and AI transparency requirements. Government engagement through trade council working groups and standards setting will also have an impact. On how AI systems are used in human services, the article offers a very useful example: LinkedIn’s services are offered globally and are high-impact from the AIA’s perspective, since they recommend jobs and are involved in the hiring/recruitment process. If incongruous laws are passed in other jurisdictions, Microsoft (LinkedIn’s owner) would have to make differentiated offerings for each of those jurisdictions; it will most likely resist doing so, leading other jurisdictions to fall in line with what’s proposed in the EU, a demonstration of the de facto and de jure effects.
Why it matters: One of the expected outcomes is a boost in transparency in how AI systems operate. For products already covered under product safety liability, the AIA only changes the specifics of oversight, not the scope or the process. Extra requirements on this front will lead to lasting changes in manufacturing, which may produce a de facto effect outside of the EU as well, given the high costs and investments required to alter production to meet the stipulated requirements. In the product liability landscape, the standards to be applied are largely delegated to the European Standards Organizations, which crucially already work with actors from around the world; the impact will thus be shaped by the input of a global set of actors who will keep their own legal needs and guidelines in mind in shaping these standards.
Between the lines: One of the key points is that the more an AI system is integrated into a dispersed platform, the more pronounced the Brussels effect will be. Disclosure of interactions with AI, though minor in terms of the changes required, may end up being the most visible impact of the AIA, akin to the cookie notices we see on websites today as a consequence of the GDPR.
📖 From our Living Dictionary:
What is the relevance of supervised learning to AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Our founder, Abhishek Gupta, spoke on a panel about the need for global cooperation on AI governance and policymaking, drawing on his work with the Paris Peace Forum titled “Beyond the North-South Fork on the Road to AI Governance: An Action Plan for Democratic & Distributive Integrity”.
💡 In case you missed it:
Moral consideration of nonhumans in the ethics of artificial intelligence
As AI becomes increasingly impactful to the world, the extent to which AI ethics includes the nonhuman world will be important. This paper calls for the field of AI ethics to give more attention to the values and interests of nonhumans. The paper examines the extent to which nonhumans are given moral consideration across AI ethics, finds that attention to nonhumans is limited and inconsistent, argues that nonhumans merit moral consideration, and outlines five suggestions for how this can better be incorporated across AI ethics.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.