AI Ethics Brief #123: Military AI, agents for social interactions and wellbeing, automated local news story generation, and more.
Will China overtake the US in AI development?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
What are some of the best frameworks available (publicly) today that can help govern AI systems that are backed by LLMs?
✍️ What we’re thinking:
Building communities: FOSSY Day 3
How to invest in Data and AI companies responsibly
🤔 One question we’re pondering:
Are there any good resources that distill the system-level view of LLM-based AI systems that you are aware of?
🔬 Research summaries:
The Impact of Artificial Intelligence on Military Defence and Security
AI agents for facilitating social interactions and wellbeing
Unsolved Problems in ML Safety
📰 Article summaries:
News Corp using AI to produce 3,000 Australian local news stories a week - Guardian
Will China overtake the U.S. on AI? Probably not. Here’s why. - The Washington Post
AI tools spark anxiety among Philippines' call center workers
📖 Living Dictionary:
What is a diffusion model?
🌐 From elsewhere on the web:
These are the 3 biggest fears about AI and here's how worried you should be about them
💡 ICYMI
Justice in Misinformation Detection Systems
🤝 You can now refer your friends to The AI Ethics Brief!
Thank you for reading The AI Ethics Brief — your support allows us to keep doing this work. If you enjoy The AI Ethics Brief, it would mean the world to us if you invited friends to subscribe and read with us. If you refer friends, you will receive benefits that give you special access to The AI Ethics Brief.
How to participate
1. Share The AI Ethics Brief. When you use the referral link below, or the “Share” button on any post, you'll get credit for any new subscribers. Simply send the link in a text, email, or share it on social media with friends.
2. Earn benefits. When more friends use your referral link to subscribe (free or paid), you’ll receive special benefits.
Get a 3-month comp for 25 referrals
Get a 6-month comp for 75 referrals
Get a 12-month comp for 150 referrals
🤗 Thank you for helping get the word out about The AI Ethics Brief!
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours and we’ll answer it in an upcoming edition.
Several readers wrote to us last week asking, “What are some of the best frameworks available (publicly) today that can help govern AI systems that are backed by LLMs?”
At MAIEI, we appreciate the depth of the NIST AI RMF, and in particular, the specificity it offers practitioners in implementing controls and policies to better govern AI systems within different organizational contexts. When it comes to modern LLM-based AI systems, the following characteristics of the NIST AI RMF prove quite useful:
Provides a risk-based approach suited for complex AI systems
Supports ethical principles through the selection of appropriate controls
Enables ongoing governance through continuous monitoring
Allows customization to the organization's unique risks and objectives
In our work, we’ve found the last bullet to be critical to the success of Responsible AI program adoption. Overloaded program implementations ultimately lead to premature termination due to inefficiencies, cost overruns, and burnout among the staff tasked with putting the program in place.
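To make that customization point concrete, here is a minimal sketch (ours, not NIST’s) of how an organization might encode an LLM-specific risk register keyed to the RMF’s four functions: Govern, Map, Measure, and Manage. The field names, severity scale, review cadence, and example risks below are all illustrative assumptions, not anything prescribed by the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

# The four NIST AI RMF functions; everything else in this sketch is an
# illustrative assumption about how one organization might track risks.
class RmfFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    """One line item in a hypothetical, organization-specific risk register."""
    description: str
    rmf_function: RmfFunction
    severity: int                 # 1 (low) to 5 (high), an assumed local convention
    control: str                  # the mitigating control or policy the org selects
    review_cadence_days: int = 90 # supports continuous monitoring

# Example register for a hypothetical LLM-backed customer support assistant.
register = [
    RiskEntry("Prompt injection exposes internal data",
              RmfFunction.MEASURE, severity=4,
              control="Red-team prompts before each release",
              review_cadence_days=30),
    RiskEntry("No named owner for model incident response",
              RmfFunction.GOVERN, severity=3,
              control="Assign an accountable executive and escalation path"),
]

# A simple continuous-monitoring hook: flag high-severity risks reviewed too slowly.
for entry in register:
    if entry.severity >= 4 and entry.review_cadence_days > 30:
        print(f"Review cadence too slow for: {entry.description}")
```

The point is not the code itself but the shape: each risk maps to an RMF function, an organization-chosen control, and a monitoring cadence, which keeps the program lightweight enough to survive adoption.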
What are some other frameworks that you use at your organizations to address AI ethics issues, especially those related to LLMs? Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Building communities: FOSSY Day 3
On the other side of collective intelligence (CI) lies conflict. Managing decentralized groups can be thankless, even brutal, when users test the boundaries of a project’s code of conduct.
Sometimes, projects fall victim to their own success. An unexpected cash windfall or influx of users can strain a collective’s social and technical infrastructure. Anyone trying to shepherd a decentralized, collaborative project might feel overwhelmed sometimes. But help is out there. “When I was a boy,” Mr. Rogers memorably said, “and I would see scary things in the news, my mother would tell me, ‘Look for the helpers. You will always find people who are helping.’” Today at FOSSY 2023, we’re considering how to design inclusive, sustainable spaces that encourage the diverse contributions that drive CI.
To delve deeper, read the full article series here.
How to invest in Data and AI companies responsibly
Investors in data and AI companies should care about AI ethics because doing so improves both financial performance and social impact. In this article, the author recommends a due diligence workflow for investors evaluating such companies, describing each step of the workflow, presenting a sample scorecard, and discussing potential next steps for investors.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
There are large gaps in public understanding of how large language model (LLM)-based AI systems are built and operated. What is missing is not just the theoretical basis for LLMs but a system-level understanding of all the components that enable something like ChatGPT to function as well as it does, and this gap is hindering discussions on meaningful regulation and measures to address ethical issues. Are there any good resources you are aware of that distill this system-level view of LLM-based AI systems?
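While we keep looking for good references, here is our own highly simplified sketch of the kind of system-level view we mean. Every component below is a stub we invented for illustration; production systems involve far more pieces (caching, rate limiting, human feedback loops, evaluation harnesses), but the shape of the pipeline around the model is the point.

```python
# Purely illustrative stubs for the components that typically sit around the
# model in an LLM-based product; none of this reflects any particular vendor.

def moderate(text: str) -> bool:
    """Input filter: block obviously disallowed requests (stubbed)."""
    return "forbidden" not in text.lower()

def retrieve_context(query: str) -> list[str]:
    """Retrieval layer: fetch grounding documents (stubbed)."""
    return [f"Background note related to: {query}"]

def build_prompt(query: str, context: list[str]) -> str:
    """Prompt assembly: system instructions + retrieved context + user query."""
    return "You are a helpful assistant.\n" + "\n".join(context) + f"\nUser: {query}"

def call_model(prompt: str) -> str:
    """Model inference: in production this is an API or hosted-model call."""
    return f"(model output for a {len(prompt)}-character prompt)"

def postprocess(output: str) -> str:
    """Output filtering, formatting, and safety checks (stubbed)."""
    return output.strip()

def answer(query: str) -> str:
    if not moderate(query):
        return "Request declined by the input filter."
    context = retrieve_context(query)
    prompt = build_prompt(query, context)
    raw = call_model(prompt)
    result = postprocess(raw)
    # Monitoring and audit logging layer, essential for governance discussions.
    print(f"audit log: query={query!r}, context_docs={len(context)}")
    return result

print(answer("How do LLM-based systems work end to end?"))
```

Regulation and ethics measures aimed only at the model miss most of these layers, which is precisely why a system-level distillation would be so useful.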
We’d love to hear from you and share your thoughts back with everyone in the next edition:
🔬 Research summaries:
The Impact of Artificial Intelligence on Military Defence and Security
Today’s world order is being heavily influenced by emerging and disruptive technologies, creating an urgency for international cooperation to ensure peace and security amid such rapid change. This paper explores the role of technology, specifically artificial intelligence (AI), in defense and security and potential opportunities for multilateral engagement and governance to guide its responsible development and use.
To delve deeper, read the full summary here.
AI agents for facilitating social interactions and wellbeing
AI agents are increasingly used to serve human beings, but their current applications largely target individual users. A relatively unexplored area is AI agents that mediate social interactions to promote well-being. This paper explores the possibilities of AI agents acting as social mediators for group well-being, along with the social impact and the new ethical issues that may emerge.
To delve deeper, read the full summary here.
Unsolved Problems in ML Safety
As ML systems become more capable and integrated into society, the safety of such systems becomes increasingly important. This paper presents four broad areas in ML Safety: Robustness, Monitoring, Alignment, and Systemic Safety. We explore each area’s motivations and provide concrete research directions.
To delve deeper, read the full summary here.
📰 Article summaries:
News Corp using AI to produce 3,000 Australian local news stories a week - Guardian
What happened: News Corp Australia is utilizing generative AI to produce around 3,000 articles per week. A team of four staff members at Data Local, led by News Corp's data journalism editor Peter Judd, uses AI to generate local stories on topics such as weather, fuel prices, and traffic conditions. The generated articles often carry Judd's byline, and the content primarily provides service information like fuel prices, court lists, traffic, weather, and death and funeral notices.
Why it matters: The move to produce thousands of AI-generated articles comes as local news is crucial in attracting new subscribers, with many staying for national, world news, and lifestyle content. Hyperlocal mastheads have been driving 55% of all News Corp's subscriptions, leading the company to launch several digital-only local titles to cater to such interests. This shift towards AI-generated content and local digital publications signifies an industry-wide exploration of AI's potential in enhancing content accessibility and personalization.
Between the lines: The adoption of AI-generated content is a trend seen across newsrooms in Australia as media outlets explore various AI applications to improve content accessibility and recommendation systems. While News Corp embraces AI for local content, other media organizations like ABC focus on AI applications that could enhance content accessibility through transcription, text-to-speech delivery, translation, and personalization. However, integrating AI in newsrooms also raises questions about the impact on traditional journalism jobs and the need for ethical considerations in AI-generated content.
Will China overtake the U.S. on AI? Probably not. Here’s why. - The Washington Post
What happened: The competition between the United States and China in the realm of AI has intensified. While China's AI development lags behind its Western counterparts in many areas, it has taken the lead in implementing regulations on the AI industry. Chinese authorities have been particularly proactive in regulating AI uses that allow the public to create content, leading to compliance challenges for Chinese companies, and some have implemented their own rules.
Why it matters: Chinese companies have invested heavily in AI for various applications, particularly surveillance technology, giving them an edge in this area. However, they have fallen behind in other types of AI due, in part, to strict government control over information and communication. This has led Chinese companies to follow the trajectory set by US companies in certain areas of AI development, such as large language models (LLMs). Despite these challenges, Chinese AI companies have made significant strides in facial recognition and virtual reality technologies, which have fueled China's surveillance industry and enabled tech giants to expand their influence globally.
Between the lines: Beijing's regulatory approach to AI has constrained Chinese firms' innovation and brought attention to key principles that Washington can learn from. While compliance with regulations has burdened Chinese AI companies, the regulations have focused on important aspects like protecting personal information, labeling AI-generated content, and addressing dangerous AI capabilities. In the US, there is a need for AI regulation that strikes a balance between preventing discrimination, protecting individuals' rights, and adhering to existing laws, without resorting to a heavy-handed approach.
AI tools spark anxiety among Philippines' call center workers
What happened: The Philippines, known for its large outsourcing industry with around 1.6 million workers, is facing the rapid emergence of generative AI, posing a threat to these jobs. Some workers have already started using AI tools like ChatGPT and Bing in the background to handle customer queries more efficiently, but their employers often forbid such practices. The country's leaders are grappling with balancing adopting generative AI to stay competitive and the risk of massive job losses in the outsourcing sector, which accounts for a significant portion of the Philippine economy.
Why it matters: The Philippine outsourcing industry, responsible for business process outsourcing (BPO) on a massive scale, is under pressure due to the rise of generative AI. With millions of jobs at stake, there is a crucial need to manage the technological shift to prevent significant job losses and retain clients who may prefer cheaper AI-powered services from other outsourcing centers. The country's political and business leaders are at odds over how to address this challenge, and the threat of AI-induced unemployment looms large, prompting calls for proactive action and collaboration to navigate the impending "technological tsunami."
Between the lines: While some argue that adopting AI could lead to the displacement of human workers, others believe that it could result in the creation of new opportunities. The transition to AI is seen as a survival necessity for Filipino workers in the competitive outsourcing industry, and it's emphasized that failing to embrace AI could lead to losing business to rivals who do. The decision to incorporate AI into the sector is unavoidable, even though it may lead to job losses. Adapting and finding ways to use AI while mitigating its impact on the workforce is imperative.
📖 From our Living Dictionary:
What is a diffusion model?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
These are the 3 biggest fears about AI and here's how worried you should be about them
There's a growing consensus that AI is a threat to some jobs.
Abhishek Gupta, founder of the Montreal AI Ethics Institute, said the prospect of AI-induced job losses was the most "realistic, immediate, and perhaps pressing" existential threat.
"We need to look at the lack of purpose that people would feel at the loss of jobs en masse," he told Insider. "The existential part of it is what are people going to do and where are they going get their purpose from?"
"That is not to say that work is everything, but it is quite a bit of our lives," he added.
CEOs are starting to be upfront about their plans to leverage AI. IBM CEO Arvind Krishna, for example, recently announced the company would slow hiring for roles that could be replaced with AI.
"Four or five years ago, nobody would have said anything like that statement and be taken seriously," Gupta said of IBM.
To delve deeper, read the full article here.
💡 In case you missed it:
Justice in Misinformation Detection Systems
Despite their adoption on several global social media platforms, the ethical and societal risks associated with algorithmic misinformation detection are poorly understood. In this paper, we consider the key stakeholders that are implicated in and affected by misinformation detection systems. We use and expand upon the theoretical framework of informational justice to explain issues of justice pertinent to these stakeholders within the misinformation detection pipeline.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.