AI Ethics Brief #127: Stop ignoring your stakeholders, CV and surveillance, justice in misinformation detection, and more.
How will EU regulation impact the global AI markets?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
What will change for an organization now that ChatGPT Enterprise has launched?
✍️ What we’re thinking:
Stop Ignoring Your Stakeholders
Computer vision, surveillance, and social control
🤔 One question we’re pondering:
What will it take for AI systems to really make an impact on patient outcomes?
🔬 Research summaries:
The Brussels Effect and AI: How EU Regulation will Impact the Global AI Market
Justice in Misinformation Detection Systems
Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence
📰 Article summaries:
Why Improving AI Reliability Metrics May Not Lead to Reliability - Center for Security and Emerging Technology
The AI rules that US policymakers are considering, explained
We Need Smart Intellectual Property Laws for Artificial Intelligence - Scientific American
📖 Living Dictionary:
What is the relevance of open-source to AI ethics?
🌐 From elsewhere on the web:
10 top resources to build an ethical AI framework
💡 ICYMI
Examining the Impact of Provenance-Enabled Media on Trust and Accuracy Perceptions
🤝 You can now refer your friends to The AI Ethics Brief!
Thank you for reading The AI Ethics Brief — your support allows us to keep doing this work. If you enjoy The AI Ethics Brief, it would mean the world to us if you invited friends to subscribe and read with us. If you refer friends, you will receive benefits that give you special access to The AI Ethics Brief.
How to participate
1. Share The AI Ethics Brief. When you use the referral link below, or the “Share” button on any post, you'll get credit for any new subscribers. Simply send the link in a text, email, or share it on social media with friends.
2. Earn benefits. When more friends use your referral link to subscribe (free or paid), you’ll receive special benefits.
Get a 3-month comp for 25 referrals
Get a 6-month comp for 75 referrals
Get a 12-month comp for 150 referrals
🤗 Thank you for helping get the word out about The AI Ethics Brief!
🚨 The Responsible AI Bulletin
We’ve restarted our sister publication, The Responsible AI Bulletin, as a fast digest every Sunday for those who want even more content beyond The AI Ethics Brief. (Our lovely power readers 🏋🏽, thank you for writing in and requesting it!)
The focus of the Bulletin is to give you a quick dose of the latest research papers that caught our attention in addition to the ones covered here.
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in an upcoming edition.
Somewhat related to the discussion in today’s “One question we’re pondering” segment: AI systems, especially those backed by LLMs, have proven quite useful in overcoming blank-page syndrome and jumpstarting the ideation process.
Of course, these benefits come with attendant risks of bias, hallucinations, and more, which we’ve covered extensively in previous editions and articles.
And with the recent announcement of ChatGPT Enterprise, we can expect many more people to use such systems in their everyday workflows to boost their productivity. Some of that use will come out of the Shadow AI woodwork, too! One of our readers sent us a last-minute question, which we’ve decided to feature in today’s newsletter: what does this change, if anything, for businesses from a governance standpoint?
It’s great that more people are familiarizing themselves with AI tools. The benefit is that this can help attenuate overblown discussions around existential risks and ground them in more concrete questions of how these systems affect our work and where they fall short. Nothing like first-hand experience of hallucinations and bias to jolt us into confronting the real-world concerns of current AI systems. The downside, of course, as we highlight in the Shadow AI article, is that with a lowered barrier to experimentation, risks will run amok in any and every part of the organization.
From a governance standpoint, that is a nightmare! We need robust policies and friendly conversations with staff to guide them on what these systems, even enterprise versions, should and should NOT be used for. This can help us harness the potential that such advances offer while limiting the downside an organization will experience as hundreds of employees jump in to experiment. And remember, some downsides will remain unmitigated - that is simply a function of the capability overhang these systems have!
Have you started using GenAI tools in your workflows? Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Stop Ignoring Your Stakeholders
Building trust among employees, partners, and suppliers is always tricky. To run functional organizations, leaders can’t just make decisions unilaterally - nor can they share decision-making power equally with everyone. Who should have input, and when? That is the central challenge of stakeholder management, and AI is shaking things up.
AI tools can now serve as advisors, idea generators, and sounding boards. These capabilities might mean that leaders no longer need as much input from people. But does that mean they should leave others out?
MAP IT OUT: AI is changing a lot about how we work together and communicate. Often, experts will say, “stakeholder engagement is the solution.” But how can we tell if we’re doing it correctly?
In VentureBeat, we worked with Steven Mills and Kes Sampanthar to figure it out, starting with a model that leaders can use to evaluate their strategies.
The ladder model was first designed in the 1960s by housing policy analyst Sherry Arnstein, and we’ve updated it for modern times.
It captures the spectrum between delegation and neglect, with many intervening shades.
To delve deeper, read the full article here.
Computer vision, surveillance, and social control
Computer vision technology is inescapably connected to surveillance. As a surveillance tool, computer vision can help governments and companies exercise social control. Computer vision’s potential for surveillance and social control raises serious concerns – this blog post discusses why.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
With recent advances in using LLM-based systems to provide in-context help to medical practitioners, and with the not-so-prescient warning that all radiologists would disappear due to AI, we’ve been diving deeper into how ethical implications will manifest when AI is brought into clinical practice.
We think the value that the current crop of systems can provide is in the early, exploratory phases of figuring out a patient’s diagnosis, especially when there are few obvious symptoms, i.e., AI systems can aid in divergent thinking. But the real-world impact of these systems has been minimal so far. What will it take for AI systems to really make an impact on patient outcomes?
We’d love to hear from you and share your thoughts back with everyone in the next edition:
🔬 Research summaries:
The Brussels Effect and AI: How EU Regulation will Impact the Global AI Market
EU policymakers want the EU to become a regulatory superpower in AI. Will they succeed? While parts of the AI regulation will most likely not diffuse, other parts are poised to have a global impact.
To delve deeper, read the full summary here.
Justice in Misinformation Detection Systems
Despite their adoption on several global social media platforms, the ethical and societal risks associated with algorithmic misinformation detection are poorly understood. In this paper, we consider the key stakeholders that are implicated in and affected by misinformation detection systems. We use and expand upon the theoretical framework of informational justice to explain issues of justice pertinent to these stakeholders within the misinformation detection pipeline.
To delve deeper, read the full summary here.
Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence
Ethics in Artificial Intelligence (AI) can emerge in many ways. This paper addresses developmental methodologies, including top-down, bottom-up, and hybrid approaches to ethics in AI, from theoretical, technical, and political perspectives. Case studies illustrating the complexity of AI ethics are discussed to provide a global perspective on this challenging and often overlooked area of research.
To delve deeper, read the full summary here.
📰 Article summaries:
We Need Smart Intellectual Property Laws for Artificial Intelligence - Scientific American
What happened: The recent introduction of AI chatbots like OpenAI's ChatGPT has prompted notable warnings from various quarters, including political figures and industry leaders. These concerns highlight the wide-ranging impact of AI on society, from workplaces and classrooms to everyday life. The debate also centers around the need for the “three C’s” (consent, credit, and compensation) from content creators for the use of their works in training AI systems.
Why it matters: The emergence of laws and regulations pertaining to AI raises the importance of avoiding a uniform approach that doesn't consider nuanced differences. The ownership of various types of content becomes crucial. For example, the distinction between excluding a popular music recording due to copyright disputes and omitting critical scientific research due to licensing conflicts underscores the need to strike a balance.
Between the lines: A significant challenge arises because intellectual property (IP) rights associated with training data are bound by national jurisdictions, while developing AI services is a global endeavor. This disjuncture could lead to disparities where companies in more restrictive environments struggle against those in more permissive ones. The worldwide nature of AI deployment contrasts with the localized nature of IP regulations, presenting a complex scenario for future AI advancements.
The AI rules that US policymakers are considering, explained
What happened: The field of AI policy in Washington, DC, remains relatively unexplored, with government leaders often presenting vague ideas and seeking public input instead of concrete action plans. The US government's involvement in AI has been largely abstract, expressing a desire for leadership and adoption without clear policies. However, the growing attention to AI, congressional hearings, and industry efforts at self-regulation suggest that more specific actions are imminent. Most of the ideas circulating can fit into one of four rough categories: rules, institutions, money, and people.
Why it matters: The regulation of AI, whether through government mandates or international agreements, is a contentious and critical aspect. Some favor minimal intervention to avoid stifling innovation, while others advocate comprehensive rules addressing liability and bias. Implementing these rules requires capable institutions, which might involve new agencies. Moreover, funding is essential for AI research to expand capabilities and ensure safety. Additionally, a shortage of skilled AI professionals prompts discussions about high-skilled immigration and educational support.
Between the lines: AI experts propose dedicated funding for federal labs to conduct research and develop safety measures. The Center for Security and Emerging Technology (CSET) highlights that the main bottleneck in AI progress isn’t processing power but ‘intelligent humans’. To address this, experts argue for increasing the number of trained AI professionals through immigration and scholarships. The evolving nature of the AI policy conversation suggests a rapidly changing landscape, potentially leading to shifts in influence and priorities.
Why Improving AI Reliability Metrics May Not Lead to Reliability - Center for Security and Emerging Technology
What happened: A recent study conducted by the Stanford Intelligent Systems Laboratory, with support from CSET funding, has highlighted the complexity of measuring the reliability of machine learning systems. The study reveals that models might perform well on certain reliability metrics while remaining unreliable in other aspects. This insight challenges the idea of a singular, easily measurable property of AI system reliability. The research explores relationships between different reliability metrics across various datasets and tasks.
Why it matters: Evaluating system robustness becomes complex when real-world assessments are limited. Policymakers and engineers face the task of selecting appropriate metrics for evaluating reliability, with the question of whether good results on specific metrics imply a broader understanding of data performance. The study's investigation into various reliability metrics across tasks emphasizes that AI reliability is multifaceted. The complexity of AI systems requires policymakers to avoid a one-size-fits-all approach to regulating and defining reliability standards.
Between the lines: The findings underline that the notion of reliability is not uniform across different aspects of AI systems. While advancements in AI, particularly involving diverse "pre-training" data, might improve system reliability, there's uncertainty regarding the scope of failure modes that can be addressed through such approaches. The study emphasizes the availability of various tools and metrics for assessing current AI systems' reliability. As policy developments evolve, there's an anticipation for guidance that can assist researchers and practitioners in achieving robust and reliable AI systems.
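To make the point concrete, here is a minimal, hypothetical sketch (in Python with scikit-learn; not drawn from the CSET study or its datasets) of how two reliability measurements can diverge: a classifier that looks reliable by held-out accuracy can degrade sharply under a simple distribution shift.

```python
# Hypothetical illustration, not from the CSET/Stanford study: two reliability
# metrics for the same model can tell very different stories.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic in-distribution data
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Metric 1: accuracy on held-out, in-distribution data
acc_iid = accuracy_score(y_test, model.predict(X_test))

# Metric 2: accuracy under a simple covariate shift (added feature noise)
X_shifted = X_test + rng.normal(scale=2.0, size=X_test.shape)
acc_shifted = accuracy_score(y_test, model.predict(X_shifted))

print(f"Held-out accuracy:    {acc_iid:.2f}")
print(f"Accuracy under shift: {acc_shifted:.2f}")  # typically much lower
```

A model that scores well on the first number but poorly on the second looks “reliable” or “unreliable” depending entirely on which metric a policymaker or engineer chooses, which is the study’s core caution against one-size-fits-all reliability standards.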
📖 From our Living Dictionary:
What is the relevance of open-source to AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
10 top resources to build an ethical AI framework
The NIST AI Risk Management Framework (AI RMF 1.0) guides government agencies and the private sector on managing new AI risks and promoting responsible AI. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed to the depth of the NIST framework, especially its specificity in implementing controls and policies to better govern AI systems within different organizational contexts.
Customize the ethical AI framework. A generative AI ethics framework should be tailored to a company's own unique style, objectives and risks, without forcing a square peg into a round hole. "Overloaded program implementations," Gupta said, "ultimately lead to premature termination due to inefficiencies, cost overruns and burnouts of staff tasked with putting the program in place." Harmonize ethical AI programs with existing workflows and governance structures. Gupta compared this approach to setting the stage for a successful organ transplant.
Researchers, enterprise leaders and regulators are still investigating ethical issues relating to responsible AI. Legal challenges involving copyright and intellectual property protection will have to be addressed, Gupta predicted. Issues related to generative AI and hallucinations will take longer to address since some of those potential problems are inherent in the design of today's AI systems.
"AI ethics will only grow in importance," Gupta surmised, "and will experience many more overlaps with adjacent fields to strengthen the contributions it can make to the broader AI community."
To delve deeper, read the full article here.
💡 In case you missed it:
Examining the Impact of Provenance-Enabled Media on Trust and Accuracy Perceptions
Emerging technical provenance standards have made it easier to identify manipulated visual content online, but how do users actually understand this type of provenance information? This paper presents an empirical study in which 595 social media users were given access to media provenance information in a simulated feed. It finds that while provenance is effective in lowering trust in, and the perceived accuracy of, deceptive media, it can also, under some circumstances, have similar effects on truthful media.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on which recent research papers caught your attention. We’re looking for ones that have been published in journals or as part of conference proceedings.