Discover more from The AI Ethics Brief
AI Ethics Brief #148: NYC chatbot malfunction, division of labor in algo audits, GenAI electricity consumption, and more.
What are some reasons that companies don’t openly share their approaches to operationalizing AI ethics internally?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week, we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🚨 Quick take on recent news in Responsible AI:
Malfunctioning NYC services chatbot
🙋 Ask an AI Ethicist:
How should a procurement officer think about using externally sourced AI systems?
✍️ What we’re thinking:
Computer vision and sustainability
🤔 One question we’re pondering:
What are some reasons that companies don’t openly share their approaches to operationalizing AI ethics internally?
🪛 AI Ethics Praxis: From Rhetoric to Reality
Key considerations in building internal tools and reporting and metrics for responsible AI
🔬 Research summaries:
A Holistic Assessment of the Reliability of Machine Learning Systems
“Customization is Key”: Four Characteristics of Textual Affordances for Accessible Data Visualization
Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work together to Surface Algorithmic Harms?
📰 Article summaries:
How much electricity do AI generators consume? - The Verge
When Love and the Algorithm Don’t Mix | TIME
Can AI porn be ethical? - The Guardian
📖 Living Dictionary:
What is algorithmic pricing?
🌐 From elsewhere on the web:
The mechanisms of AI hype and its planetary and social costs
💡 ICYMI
Algorithms as Social-Ecological-Technological Systems: an Environmental Justice lens on Algorithmic Audits
🚨 Malfunctioning NYC services chatbot - here’s our quick take on what happened recently.
Investigative work by The Markup, in partnership with a few other civil society organizations, unearthed some egregious errors from a chatbot deployed by the NYC government to aid citizens in finding answers to FAQs.
An example of a clearly problematic outcome was answering questions like “Can I take a cut of my workers’ tips?” with:
“Yes, you can take a cut of your worker's tips. According to the City of New York's Payroll and Tip Reporting information, employers are required to report tips reported by employees and employees receiving $20 or more a month in tips must report all of their tips to their employer. Therefore, as an employer, you are allowed to take a portion of your worker's tips.
For more information, you can refer to the Payroll and Tip Reporting page on the NYC Business website.”
(You can read the full report from The Markup for more examples.)
A whole slew of ethical issues arises from such an incident, including:
Erosion of trust in the reliability of information the chatbot provides in other instances, even when it might be right (a phenomenon also called algorithm aversion).
Legal consequences of wrong information: a person who acts on the provided information, e.g., in the workers’ tips case, may open themselves up to lawsuits.
Harm to individuals and businesses, e.g., the chatbot’s suggestion that landlords may refuse Section 8 vouchers or tenants who receive rental assistance; such discrimination is illegal for a landlord and would make finding housing difficult in an already strained rental market such as NYC.
Our quick take on small measures that could have proactively helped avoid this mishap:
Involve domain experts in the testing and validation phases of chatbot development; they could have spot-tested or smoke-tested responses to check whether they contradicted housing regulations, workers’ rights, etc.
Rigorous testing and validation, especially constraining the system with techniques like retrieval-augmented generation (RAG) so that answers are grounded in the actual text of city laws, which could also be fed into a fine-tuning step for the chatbot’s underlying model (see the sketch after this list).
Transparency and user education: proactively alert users that there is always an inherent risk of the chatbot providing responses that are inaccurate, or, as in this case, wrong to the point of being illegal, so they should proceed with caution and verify responses against second and third sources.
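To make the RAG suggestion above a bit more concrete, here is a minimal, purely illustrative sketch in Python of how retrieved policy text can be used to constrain a chatbot’s answer. The policy snippets, scoring function, and prompt template are hypothetical stand-ins, not NYC’s actual system, data, or prompts.

```python
# Illustrative sketch only: a toy retrieval-augmented generation (RAG) flow
# that grounds a chatbot answer in retrieved policy text instead of relying
# on the model's unconstrained generation. All snippets below are hypothetical
# stand-ins for an indexed corpus of city rules and guidance.

from collections import Counter

POLICY_SNIPPETS = {
    "tips": "Employers may not take any portion of tips earned by their workers.",
    "vouchers": (
        "It is illegal for landlords to refuse tenants because they use "
        "Section 8 vouchers or other lawful sources of income."
    ),
}

def score(query: str, passage: str) -> int:
    """Crude relevance score: number of overlapping lowercase tokens."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum(min(q[t], p[t]) for t in q)

def retrieve(query: str, k: int = 1) -> list:
    """Return the k passages most relevant to the query."""
    ranked = sorted(
        POLICY_SNIPPETS.values(),
        key=lambda passage: score(query, passage),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Constrain the model to answer only from the retrieved policy text."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using ONLY the policy excerpts below. If they do not cover "
        "the question, say you cannot answer and point the user to official "
        "sources.\n"
        f"Policy excerpts:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("Can I take a cut of my workers' tips?"))
```

A production system would use a proper vector index, curated legal sources, and carefully evaluated prompts, but even this toy flow shows the key idea: the model is told to answer only from retrieved, authoritative text and to abstain otherwise.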
While each of these steps requires an upfront investment before the system launches for public consumption, we strongly believe the cost is well worth it to avoid harm and build trust capital with citizens, especially for essential services.
Did we miss anything?
Sign up for the Responsible AI Bulletin, a bite-sized digest delivered via LinkedIn for those who are looking for research in Responsible AI that catches our attention at the Montreal AI Ethics Institute.
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in the upcoming editions.
Here are the results from the previous edition for this segment:
We definitely agree that setting out ethical guidelines should be the first step any organization takes when embarking on the Responsible AI journey. Yet, in our experience, we find that organizations sometimes spend too much time in this phase. How much time is enough, and to what extent can an organization borrow from existing work in the field to jumpstart its journey?
Since the last edition, we’ve had several questions come in from readers. This week, reader John A. asks, “How should a procurement officer think about using externally sourced AI systems?”
There are many considerations one can draw on for such an evaluation from a typical Responsible AI framework, e.g., bias and fairness, transparency, and accountability. The following, however, are the ones we’ve found to be the most impactful and unique, and they deserve extra attention precisely because existing procurement processes can create the impression that they are already covered, which is not always the case.
The following AI-specific considerations should be injected into the typical technology procurement process and workflow:
Due Diligence on AI Providers:
Ethical Standards: Investigate the provider’s commitment to ethical AI development, including their policies and past performance. Ensure they have clear guidelines on data handling, privacy, and transparency.
Reputation and Compliance: Assess their compliance with industry standards, legal regulations, and ethical norms. Check for any past breaches or ethical lapses.
Data Privacy and Security:
Data Handling Protocols: Ensure the AI provider adheres to stringent data privacy and security measures. Understand their data sourcing, storage, and processing practices.
Compliance with Regulations: Verify that the AI solution complies with relevant data protection laws (e.g., GDPR, HIPAA).
Exit Strategies and Contingencies:
Dependence and Vendor Lock-in: Consider the risks of dependency on the provider and have contingency plans in place for vendor lock-in scenarios.
Termination and Transition: Ensure there are clear terms for contract termination and data transition in case you decide to switch providers or discontinue use.
Which of the AI-specific considerations, (1) due diligence, (2) data privacy and security, or (3) exit strategies and contingencies, is the most important? Please let us know! Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Computer vision and sustainability
When thinking about the ethics of AI, one usually considers AI’s impact on individual freedom and autonomy or its impact on society and governance. These implications are important, of course, but AI’s impact on the environment is often overlooked. This fourth column on the ethics of computer vision, therefore, discusses computer vision in light of environmental sustainability.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
Something we see a tremendous dearth of in the Responsible AI ecosystem is published approaches (outside of workshops and Chatham House-style roundtables and discussions) describing what companies are doing to operationalize AI ethics. What might be some reasons for that, and how can we overcome them?
In addition, if you have seen some good case studies, please hit the button below and share those with the community!
We’d love to hear from you and share your thoughts with everyone in the next edition:
🪛 AI Ethics Praxis: From Rhetoric to Reality
Building on the discussion from the previous edition, the key considerations when building internal reporting tools and metrics for responsible AI include the following elements. These come from our applied experience in helping different public and private entities put these ideas into practice.
Define Clear Objectives and KPIs:
Establish specific, measurable objectives for what success looks like in terms of ethical AI.
Identify key performance indicators (KPIs) that align with your ethical AI goals, such as bias detection rates, fairness metrics, compliance with AI ethics guidelines, and employee engagement in ethical AI training.
Incorporate Comprehensive Metrics:
Include both quantitative and qualitative metrics to provide a fuller picture of AI ethics compliance.
Quantitative metrics might include algorithmic accuracy across different demographics (one such metric is sketched after this list), while qualitative metrics could assess user feedback on AI fairness and transparency.
Regular Reviews and Audits:
Establish regular schedules for reviewing AI performance and ethics compliance, such as quarterly or bi-annual audits.
Involve third parties or independent auditors for unbiased assessments when necessary.
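As one hedged illustration of how such a quantitative metric could feed an internal reporting tool, below is a minimal Python sketch that computes accuracy broken down by demographic group and flags a model when the gap between groups exceeds a threshold. The data, grouping, and threshold are hypothetical examples, not a recommended standard.

```python
# Illustrative sketch only: one way to turn an ethics KPI into a reportable
# number. The labels, predictions, groups, and threshold are hypothetical.

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy of a model's predictions, broken down by demographic group."""
    correct, total = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if truth == pred else 0)
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    # Hypothetical labels, predictions, and group membership for a handful of rows.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    acc = per_group_accuracy(y_true, y_pred, groups)
    gap = max(acc.values()) - min(acc.values())
    print(acc)                        # {'A': 0.75, 'B': 0.25}
    print(f"Accuracy gap: {gap:.2f}")
    # A reporting dashboard might flag the model when the gap exceeds an agreed threshold.
    THRESHOLD = 0.10  # hypothetical value; to be set by the organization
    print("Flag for review" if gap > THRESHOLD else "Within tolerance")
```

In practice, the choice of metric, demographic slices, and threshold should be made with domain experts and documented alongside the qualitative evidence described above.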
In addition, where possible, broader training and education of staff, beyond the direct stakeholders responsible for internal reporting, helps boost the adoption and utility of this effort.
You can either click the “Leave a comment” button below or send us an email! We’ll feature the best response next week in this section.
🔬 Research summaries:
A Holistic Assessment of the Reliability of Machine Learning Systems
Machine learning has seen vast progress but still grapples with reliability issues, limiting its use in safety-critical applications. This paper introduces a holistic methodology that identifies five essential properties for evaluating the reliability of ML systems and applies it to over 500 models across three real-world datasets. Key insights reveal that the five reliability metrics are largely independent, but some techniques can simultaneously improve all of them.
To delve deeper, read the full summary here.
“Customization is Key”: Four Characteristics of Textual Affordances for Accessible Data Visualization
Blind and low-vision people who use screen readers rely on textual descriptions to access data visualizations. While a visualization affords various data analysis tasks for sighted readers, a textual description presents a fixed set of affordances due to limited space and a linear reading order. To address this, we identified four key characteristics of customizable descriptions. We implemented them in a prototype to help screen reader users reconfigure text to suit their varied interests and needs.
To delve deeper, read the full summary here.
Participation and Division of Labor in User-Driven Algorithm Audits: How Do Everyday Users Work together to Surface Algorithmic Harms?
There is an emerging phenomenon of people organically coming together to audit AI systems for bias. This paper showcases four cases of user-driven AI audits to offer critical lessons about what type of labor might get authorities to address the risks of different AI systems. Equally, the paper calls on stakeholders to examine how users wish authorities to respond to their reports of societal harm.
To delve deeper, read the full summary here.
📰 Article summaries:
How much electricity do AI generators consume? - The Verge
What happened: Machine learning’s energy consumption is widely recognized, with AI features like email summarization and chatbots driving up significant server bills. However, precise figures on the energy cost remain elusive, as companies like Meta, Microsoft, and OpenAI have yet to disclose the relevant information. Estimates exist but remain incomplete, given the variability of machine learning models and the lack of transparency from major organizations in the field.
Why it matters: The energy disparity between training AI models and deploying them to users is substantial, with training being particularly energy-intensive. For instance, training large language models like GPT-3 consumes immense electricity, equivalent to the annual power consumption of numerous households. As AI models grow, the energy demand may escalate, although efforts to enhance energy efficiency could counteract this trend. However, the secrecy surrounding training methods for modern AI models complicates making accurate estimates, raising concerns about sustainability and resource allocation.
Between the lines: There's apprehension that the current trajectory of AI development favors larger models and more data, which inherently undermines efficiency gains. This trend creates an incentive to continuously increase computational resources, potentially offsetting any improvements in energy efficiency. While some companies tout AI's potential to address sustainability issues, such claims may only partially address the broader industry's energy demands. Suggestions range from implementing energy ratings for AI models to reevaluating the necessity of AI for certain tasks, emphasizing the need for a comprehensive approach to tackle the energy challenges associated with AI.
When Love and the Algorithm Don’t Mix | TIME
What happened: The experience of online dating for black women is distinctive, often involving a complex interplay of being over-targeted for exoticism while simultaneously feeling like the least desired demographic. Questions arise about how dating app algorithms perceive and position blackness in matching. Additionally, concerns emerge regarding the prevalence of singlehood despite extensive time spent on dating apps, especially among people of color and those open to diverse relationships. The discussion also delves into how historical ideologies about race influence contemporary notions of compatibility and physical similarity in dating.
Why it matters: Extensive research and interviews highlight the emotional toll and racial trauma experienced by people of color engaging in online dating. Algorithms used in dating apps are suspected to contribute significantly to these challenges, perpetuating biases rooted in historical anti-interracial mingling ideologies. Moreover, patents filed by Match Group reveal the deliberate inclusion of physical traits, including ethnicity, in their matching algorithms, reflecting and potentially amplifying societal biases. Cultural and societal norms deeply influence perceptions of attractiveness and desirability, shaping individual preferences and reinforcing racialized beliefs about romantic partners.
Between the lines: Dating apps' algorithms reflect and perpetuate societal norms and biases, potentially exacerbating racial inequalities in the dating landscape. The tech industry's struggle to address racial injustice in algorithmic design is evident across various domains, including online dating, where outdated notions of racial sameness persist. Failure to acknowledge and rectify racially biased algorithms risks alienating diverse user bases and perpetuating harmful cultural norms around dating preferences. To improve the online dating experience, companies must design algorithms that accommodate a broader spectrum of preferences and challenge ingrained biases within technology and society.
Can AI porn be ethical? - The Guardian
What happened: The porn industry is integrating AI into girlfriend simulations, capitalizing on the popularity of AI technology. However, this development raises concerns about potential misuse, including the creation of pornographic deepfakes and AI-generated content depicting child sexual abuse. To mitigate these risks, some developers are implementing ethical guardrails within their AI romance apps, utilizing a combination of human moderators and AI tools to prevent abusive behavior and ensure user safety.
Why it matters: MyPeach.ai, a pioneering AI romance app, employs advanced moderation techniques to safeguard users from abusive interactions. The platform's commitment to ethical practices extends to hosting adult content creators who consent to the parameters of their AI replicas. Through a combination of technical tools, including explicit instructions to AI algorithms and human moderation, MyPeach.ai strives to create a safer environment for virtual intimacy, setting a new standard within the industry for responsible AI usage in adult content.
Between the lines: The emergence of ethical AI porn reflects broader shifts in the porn industry toward producing more inclusive and less exploitative content. However, unlike traditional porn, AI-generated content raises complex questions about consent, as AI entities lack consciousness and agency. While developers argue that AI entities are akin to sex toys programmed for specific interactions, concerns persist about the simulation of consensual relationships and the implications for healthy relationship dynamics. The ongoing debate underscores the need for continued ethical considerations and regulation as AI technology intersects with intimate human experiences.
📖 From our Living Dictionary:
What is algorithmic pricing?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
The mechanisms of AI hype and its planetary and social costs
A paper recently co-authored by Connor Wright, our partnerships manager.
Our global landscape of emerging technologies is increasingly affected by artificial intelligence (AI) hype, a phenomenon with significant large-scale consequences for the global AI narratives being created today.
This paper aims to dissect the phenomenon of AI hype in light of its core mechanisms, drawing comparisons between the current wave and historical episodes of AI hype, concluding that the current hype is historically unmatched in terms of magnitude, scale and planetary and social costs. We identify and discuss socio-technical mechanisms fueling AI hype, including anthropomorphism, the proliferation of self-proclaimed AI “experts”, the geopolitical and private sector “fear of missing out” trends and the overuse and misappropriation of the term “AI” in emerging technologies.
The second part of the paper seeks to highlight the often-overlooked costs of the current AI hype. We examine its planetary costs as the AI hype exerts tremendous pressure on finite resources and energy consumption. Additionally, we focus on the connection between AI hype and socio-economic injustices, including perpetuation of social inequalities by the huge associated redistribution of wealth and costs to human intelligence. In the conclusion, we offer insights into the implications for how to mitigate AI hype moving forward.
We give recommendations of how developers, regulators, deployers and the public can navigate the relationship between AI hype, innovation, investment and scientific exploration, while addressing critical societal and environmental challenges.
To delve deeper, read more details here.
💡 In case you missed it:
Algorithms as Social-Ecological-Technological Systems: an Environmental Justice lens on Algorithmic Audits
In this paper, the authors argue that algorithmic systems are intimately connected to and part of social and ecological systems. Similarly, there are growing parallels between work on algorithmic justice and the scholarship and practice within environmental and climate justice work. The authors provide examples and draw on learnings from social-ecological-technological systems analysis to propose a first-of-its-kind methodology for environmental justice-oriented algorithmic audits.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.