Discover more from The AI Ethics Brief
AI Ethics Brief #146: LLMs threatening digital public goods, fair and open-market access, learning to prompt in the classroom, meaningful public participation, and more.
How will the differing regulatory landscapes and outlooks towards market competitiveness differentially shape the European vs. the American AI ecosystems?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week, we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🚨 Quick take on last week in Responsible AI:
Microsoft and Mistral AI
🙋 Ask an AI Ethicist:
How to achieve the right competitive balance in an AI ecosystem between large tech companies and startups?
✍️ What we’re thinking:
Asking Better Questions -- The Art and Science of Forecasting: A mechanism for truer answers to high-stakes questions
🤔 One question we’re pondering:
How will the differing regulatory landscapes and outlooks towards market competitiveness differentially shape the European vs. the American AI ecosystems?
🪛 AI Ethics Praxis: From Rhetoric to Reality
A fair and open-market access approach to maintaining the competitive posture of the AI ecosystem
🔬 Research summaries:
Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow
Learning to Prompt in the Classroom to Understand AI Limits: A pilot study
Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects
📰 Article summaries:
The FCC wants to criminalize AI robocall spam
Meaningful public participation and AI
Sora: OpenAI launches tool that instantly creates video from text
📖 Living Dictionary:
What is the relevance of AI companions to AI ethics?
🌐 From elsewhere on the web:
Bring Human Values to AI
💡 ICYMI
AI supply chains make it easy to disavow ethical accountability
🚨 Microsoft and Mistral AI - here’s our quick take on what happened last week.
What happened: There are concerns about Microsoft’s recent partnership with Mistral AI. This partnership includes a €15 million ($16.3 million) investment from Microsoft, which will convert to equity in Mistral’s next funding round. The deal aims to provide Mistral’s engineers with access to Microsoft’s Azure supercomputing infrastructure and offer Mistral’s large language models on Microsoft’s cloud platform. However, the European Commission (EC) is scrutinizing this investment, particularly in the context of agreements between large digital market players and generative AI developers. The investigation is part of a broader effort to ensure that such partnerships do not stifle competition within the rapidly evolving AI market.
Why it matters: These investigations are part of a larger trend of regulatory interest in the ethical and competitive impacts of large investments in AI. For instance, the UK's Competition and Markets Authority (CMA) and the U.S. Federal Trade Commission (FTC) are also reviewing Microsoft's investment in OpenAI, another significant player in the AI space. Additionally, concerns have been raised about the Artificial Intelligence Act (AIA) in Europe, which some believe could reduce the competitiveness of the European AI ecosystem by imposing additional costs and slowing down innovation.
Between the lines: Microsoft has responded to these challenges by articulating its AI Access Principles, which aim to promote innovation and competition in the AI economy. These principles include commitments to broad technology access, responsible AI adoption, cybersecurity protection, and engagement with regulators to address concerns.
Did we miss anything?
Sign up for the Responsible AI Bulletin, a bite-sized digest delivered via LinkedIn covering the Responsible AI research that catches our attention at the Montreal AI Ethics Institute.
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in the upcoming editions.
Here are the results from the previous edition for this segment:
Couldn’t agree more! Respect for the lived experiences and value that community stakeholders bring to the design, development, and deployment process of any technology, not just AI, is an important, if not the most important, facet of getting stakeholder engagement right. It encourages participation by showing stakeholders that their time and effort are valued; at the same time, it signals to the rest of the ecosystem of stakeholders, e.g., customers, that this process is integral to the organization’s product design and development efforts and isn’t just done as a form of tokenism.
Coming to the question for this week, reader A.K.M. asks how to achieve the right competitive balance between large corporations making major investments in AI and smaller upstarts trying to bring advances from the outside via open-source approaches, so that the overall ecosystem moves towards a more responsible posture.
Wow, this is a great question that fostered some deep discussions among the institute staff and encouraged us to reach out to some of our economist friends as well. Here’s what we think is a meaningful approach to achieving some sort of “balance” in the competitiveness of the AI ecosystem. Borrowing from well-established and researched economic literature, there are three key components to addressing this issue: (1) Regulatory oversight, (2) Support for startups and innovation, and (3) Fair and open-market access.
1. Regulatory Oversight
Enforce Antitrust Laws: Governments should rigorously enforce antitrust laws to prevent large companies from engaging in anti-competitive practices, such as predatory pricing or unfair market exclusions, ensuring a level playing field for startups.
Promote Open Standards: Encourage the development and use of open standards and interoperability among different technologies and platforms, reducing barriers to entry for startups and preventing lock-in by dominant firms.
2. Support for Startups and Innovation
Financial Incentives: Offer tax breaks, grants, and investment funding for startups, particularly those pursuing innovative solutions in areas underserved by large companies, to ease financial pressures and foster growth.
Access to Resources: Provide startups with access to essential resources, such as research facilities, business mentorship, and networking opportunities, enabling them to develop and scale their innovations more effectively.
3. Fair and Open-Market Access
Transparent Marketplaces: Establish and maintain transparent digital marketplaces and platforms where businesses of all sizes can compete on equal footing, ensuring visibility for startup products and services.
Procurement Policies: Implement government procurement policies that favor innovation and diversity in sourcing, giving startups fair opportunities to compete for contracts alongside established companies.
What are some approaches that you think will work best to keep the AI ecosystem competitive with a diversity of contributors? Please let us know! Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Without the ability to estimate and benchmark AI capability advancements, organizations are left to respond to each change reactively, impeding their ability to build viable mid- and long-term strategies. This paper explores the recent growth of forecasting, a political science tool that uses explicit assumptions and quantitative estimation to improve prediction accuracy. Done at the collective level, forecasting can identify and verify talent, enable leaders to build better models of AI advancements, and improve inputs into policy design. Successful approaches to forecasting and case studies are examined, revealing a subclass of "superforecasters" who outperform 98% of the population and whose insights will be most reliable. Finally, techniques behind successful forecasting are outlined, including Philip Tetlock's "Ten Commandments." To adapt to a quickly changing technology landscape, designers and policymakers should consider forecasting as a first line of defense.
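To make the scoring mechanism concrete, here is a minimal sketch in Python of the Brier score, the standard accuracy measure Tetlock's forecasting tournaments used to surface superforecasters. The forecasts and outcomes below are made-up numbers for illustration, not data from the paper:

```python
# Illustrative sketch: scoring probabilistic forecasts with the Brier score,
# the accuracy metric used in Tetlock's forecasting tournaments to surface
# "superforecasters". All numbers below are made-up examples.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary outcomes.
    0.0 is a perfect forecaster; always answering 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities assigned to "event happens" across five questions.
forecaster_a = [0.9, 0.2, 0.8, 0.1, 0.7]  # confident and well-calibrated
forecaster_b = [0.5, 0.5, 0.5, 0.5, 0.5]  # hedges everything at 50%
outcomes     = [1,   0,   1,   0,   1]    # what actually happened

print(f"A: {brier_score(forecaster_a, outcomes):.3f}")  # 0.038 -- lower is better
print(f"B: {brier_score(forecaster_b, outcomes):.3f}")  # 0.250

# Ranking participants by average Brier score over many resolved questions is
# one way a collective can identify its most reliable forecasters.
```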
To delve deeper, read the full article here.
🤔 One question we’re pondering:
Building on this week’s reader question, we’ve been thinking more about how the differing regulatory landscapes and outlooks towards market competitiveness will differentially shape the European vs. the American AI ecosystems, and whether it is even worth considering them as separate, given the strong interconnections between companies and talent across the pond.
We’d love to hear from you and share your thoughts with everyone in the next edition:
🪛 AI Ethics Praxis: From Rhetoric to Reality
Given the very interesting reader question this week, we wanted to dive deeper into the idea of fair and open-market access as an approach to maintaining the competitive posture of the AI ecosystem. In particular, drawing on the work we’ve done with several national governments in shaping their national AI policies, we’re happy to share some of our key ideas below:
Transparent Marketplaces
Standardization of Disclosure Requirements
Develop AI Transparency Standards: Establish standards for AI companies to disclose information about the datasets used, the design choices, and the limitations of their AI systems, promoting an understanding of how AI products work and their potential biases (see the sketch below for one illustration of what such a disclosure could look like).
Implement a Certification System: Create a certification system for AI products that meet these transparency standards, reassuring consumers and businesses about the ethical considerations and quality of the AI systems they are purchasing or using.
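To make this concrete, here is a minimal sketch of what a machine-readable disclosure under such a transparency standard might look like. The field names and layout are illustrative assumptions on our part, loosely inspired by model cards, not an existing schema or certification format:

```python
# Minimal sketch of a machine-readable disclosure record under a hypothetical
# AI transparency standard. All field names and values are illustrative
# assumptions (loosely inspired by model cards), not an existing schema.
import json

disclosure = {
    "system": "example-sentiment-classifier-v2",        # hypothetical product
    "datasets": ["public-reviews-corpus (CC-BY-4.0)"],  # training data provenance
    "design_choices": ["fine-tuned transformer", "English-only training data"],
    "known_limitations": ["reduced accuracy on code-switched or slang-heavy text"],
    "certification": {"standard": "AI-Transparency-v1", "status": "self-attested"},
}

# A regulator, marketplace, or certification body could require this record
# to be published alongside the product listing.
print(json.dumps(disclosure, indent=2))
```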
Encouragement of Open Data and Algorithms
Promote Open Data Initiatives: Support the sharing of high-quality, anonymized datasets to fuel AI innovation, especially in areas with significant social impact, ensuring startups have the data necessary to train competitive AI models.
Foster Open-Source AI Solutions: Encourage the development and use of open-source AI algorithms and tools, facilitating access for startups to state-of-the-art technologies without the high costs of proprietary solutions.
Consumer Education and Awareness Programs
Launch AI Literacy Campaigns: Implement educational programs aimed at increasing the AI literacy of consumers, enabling them to make informed choices about AI-enabled products and services.
Provide Comparative AI Product Platforms: Develop online platforms that offer comparative information on AI products and services, including performance benchmarks, ethical compliance ratings, and user reviews, aiding consumers in navigating the AI marketplace.
Procurement Policies
Innovation-Friendly Procurement Strategies
Set Aside Contracts for Startups: Allocate a certain percentage of government AI contracts for startups and SMEs, encouraging innovation and giving newer companies a fair chance to demonstrate their solutions.
Adopt Agile Procurement Processes: Simplify procurement processes for AI technologies, adopting more flexible and agile methodologies that can adapt to the rapid pace of AI innovation and accommodate smaller vendors.
Evaluation Criteria Emphasizing Ethical AI
Incorporate Ethical AI Criteria in Tenders: Define clear criteria for ethical AI practices within procurement tenders, including requirements for transparency, fairness, and accountability in AI systems, prioritizing vendors who adhere to these principles.
Reward Responsible Innovation: Offer incentives for companies that demonstrate responsible AI innovation, such as ethical use of data, inclusivity in design, and efforts to mitigate bias, through higher scores in procurement evaluations or financial incentives.
Partnerships and Collaborative Projects
Foster Public-Private Partnerships: Encourage collaborations between government agencies, academic institutions, and private sector startups focused on developing AI solutions for the public good, sharing both the risks and the rewards.
Support Collaborative R&D Projects: Provide funding and support for collaborative research and development projects in AI between startups, larger companies, and research institutions, promoting the exchange of ideas and fostering a more diverse AI ecosystem.
You can either click the “Leave a comment” button below or send us an email! We’ll feature the best response next week in this section.
🔬 Research summaries:
Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow
The widespread adoption of large language models can displace public knowledge sharing in online communities. This paper finds that the release of ChatGPT led to a significant decrease in content creation on Stack Overflow, the biggest question-and-answer (Q&A) community for computer programming. We argue that this increasingly displaced content is an important public good, which provides essential information for learners, both human and artificial.
To delve deeper, read the full summary here.
Learning to Prompt in the Classroom to Understand AI Limits: A pilot study
To be able to fully exploit the potential of Large Language Models (LLMs), it is crucial to acknowledge their fallibility and limitations. This makes a critical approach to their output possible and helps reduce fear and negative attitudes that may impair the societal benefits of LLMs and AI. A pilot educational intervention with high school students involving hands-on non-trivial interactions with ChatGPT showed promising results, including improved interaction, decreased negativity, and increased understanding.
To delve deeper, read the full summary here.
Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects
Many ethical principles for responsible AI have been proposed to allay concerns about the misuse and abuse of AI/ML systems, employing aspects such as privacy, accuracy, fairness, robustness, explainability, and transparency. However, tensions between these aspects pose difficulties for AI/ML developers seeking to follow these principles. As part of the ongoing effort to operationalize the principles into practice, we compile and discuss a catalog of 10 notable tensions, trade-offs, and other interactions between the underlying aspects in this paper. This catalog can help raise awareness of the possible interactions between aspects of ethics principles and facilitate well-supported judgments by the designers and developers of AI/ML systems.
To delve deeper, read the full summary here.
📰 Article summaries:
The FCC wants to criminalize AI robocall spam
What happened: The Federal Communications Commission (FCC) has declared its intention to outlaw AI-driven robocalls due to a surge in scams utilizing voice-cloning technology. Jessica Rosenworcel, head of the FCC, highlighted the risk of machine-learning software persuading individuals to take fraudulent actions, such as donating to fake causes. The concern extends to computers mimicking celebrities and others to execute large-scale spam calls with convincing bait, exemplified by a recent incident in New Hampshire where residents received a fake call mimicking US President Biden's voice.
Why it matters: Rosenworcel emphasized the escalating confusion caused by AI-generated voice cloning and images, which deceive consumers into believing scams and frauds are genuine. The FCC aims to classify this emerging technology as illegal under current law, granting State Attorneys General offices new enforcement tools to combat scams and safeguard consumers. Proposals to amend the 1991 Telephone Consumer Protection Act (TCPA) seek to criminalize AI voice cloning in robocall scams, aligning regulations for AI-generated robocalls with human-generated calls. Government officials, including Pennsylvania's Attorney General Michelle Henry and a bipartisan coalition of 25 Attorneys General, support the FCC's initiative.
Between the lines: Henry stressed the importance of preventing technology from being exploited to deceive or manipulate consumers, advocating for AI-generated voices to be classified as artificial voices under existing regulations. The bipartisan coalition underscores the urgency of addressing the potential harm posed by AI advancements in exacerbating the already prevalent issue of robocalls and text communications.
Meaningful public participation and AI
What happened: AI technologies are increasingly integrated into daily life, impacting areas like loan approvals, job hiring, and healthcare diagnoses. However, decision-making processes regarding AI often lack input from affected individuals, neglecting discussions on potential benefits and risks. Incorporating diverse perspectives ensures AI use aligns with societal values and needs, promoting justice, legitimacy, and accountability. As governments and organizations work to regulate AI, understanding effective public engagement becomes essential.
Why it matters: Recent events, such as the UK's Global Summit on AI Safety, underscore the dominance of the private sector in AI discourse, sidelining the perspectives of civil society. Calls for inclusive decision-making highlight the necessity of diverse public voices, stressing genuine deliberation and accountability. Concerns persist about the undemocratic nature of AI governance, emphasizing the importance of meaningful public involvement, particularly from marginalized groups. Complex AI-related issues demand comprehensive public participation to safeguard civil and human rights and address emerging ethical challenges.
Between the lines: A wealth of evidence suggests various methods for meaningful public engagement in policy-making, drawing from deliberative democracy, social movements, and civil society initiatives. Examples like the OECD's framework and models from Belgium, Paris, and Bogota showcase diverse approaches to continuous participation in decision-making. Considering global interdependencies in AI's impact highlights the need for varied engagement examples from different contexts, ensuring a comprehensive understanding of public involvement worldwide.
Sora: OpenAI launches tool that instantly creates video from text
What happened: OpenAI unveiled Sora, a tool capable of generating videos from text prompts. Named after the Japanese word for "sky," Sora creates realistic videos up to a minute long, following user instructions on subject matter and style. The model can also generate videos from still images or extend existing footage. OpenAI aims to train AI models that simulate the physical world to aid problem-solving that requires real-world interaction.
Why it matters: Access to Sora has been granted to selected researchers and video creators for testing, ensuring compliance with OpenAI's terms of service. Limited access aims to prevent misuse, with CEO Sam Altman sharing video clips on Twitter (X) bearing AI-generated watermarks. OpenAI's previous releases, like Dall-E and ChatGPT, gained widespread adoption, highlighting the significance of advanced AI tools. Despite other companies developing similar video generation tools, OpenAI's approach stands out for its capability and controlled release.
Between the lines: OpenAI's training methods for Sora remain undisclosed, raising questions about the origin and legality of training data. Past lawsuits alleging copyright infringement during AI training underline ethical concerns surrounding data usage. The company's reliance on publicly available and licensed videos underscores the complexity of copyright issues in AI development, particularly with models trained on extensive internet datasets.
📖 From our Living Dictionary:
What is the relevance of AI companions to AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
When it launched GPT-4 in March 2023, OpenAI touted its superiority to its already impressive predecessor, saying the new version was better in terms of accuracy, reasoning ability, and test scores, all of which are AI-performance metrics that have been used for some time. However, most striking was OpenAI’s characterization of GPT-4 as “more aligned,” perhaps the first time that an AI product or service has been marketed in terms of its alignment with human values.
In this article, a team of five experts offers a framework for thinking through the development challenges of creating AI-enabled products and services that are safe to use and robustly aligned with generally accepted and company-specific values. The challenges fall into five categories, corresponding to the key stages of a typical innovation process, from design to development, deployment, and usage monitoring. For each set of challenges, the authors present an overview of the frameworks, practices, and tools that executives can leverage to face them.
To delve deeper, read more details here.
💡 In case you missed it:
AI supply chains make it easy to disavow ethical accountability
AI system components are often produced in different organizational contexts. For example, we might not know how upstream datasets were collected or how downstream users may use our system. In our recent article, Dawn Nafus and I show how software supply chains complicate the work of AI ethics and how they lead people to disavow accountability for ethical questions that require scrutiny of upstream components or downstream use.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.