AI Ethics Brief #74: Challenges to AI development in Vietnam, Good AI Society, revolution in warfare, and more ...
What happens if we're able to clone our voice using AI?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~18-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Challenges of AI Development in Vietnam: Funding, Talent and Ethics
📅 Event summaries:
Our Top-5 takeaways from our meetup “Protecting the Ecosystem: AI, Data and Algorithms”
🔬 Research summaries:
Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US
📰 Article summaries:
The Third Revolution in Warfare
Everyone will be able to clone their voice in the future
Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows
How new regulation is driving the AI governance market
📖 Living Dictionary:
Proxy variables
🌐 From elsewhere on the web:
The Imperative for Sustainable AI Systems
These leading activists are promoting ethical AI
💡 ICYMI
Toward Fairness in AI for People with Disabilities: A Research Roadmap
But first, our call-to-action this week:
The MAIEI Learning Community cohort report is live now!
This report is a labor of the Learning Community cohort convened by MAIEI in Winter 2021 to work through and discuss important research issues in the field of AI ethics from a multidisciplinary lens. The community came together, supported by facilitators from the MAIEI staff, to vigorously debate and explore the nuances of issues like bias, privacy, disinformation, accountability, and more, especially examining them from the perspectives of industry, civil society, academia, and government.
The outcome of these discussions is reflected in this report – an exploration of a variety of issues with deep-dive, critical commentary on what has been done, what worked and what didn’t, and what remains to be done so that we can meaningfully move forward in addressing the societal challenges posed by the deployment of AI systems.
The chapters titled
“Design and Techno-isolationism”,
“Facebook and the Digital Divide: Perspectives from Myanmar, Mexico, and India”,
“Future of Work”, and
“Media & Communications & Ethical Foresight”
will hopefully provide you with novel lenses for exploring this domain beyond the usual tropes covered in AI ethics.
✍️ What we’re thinking:
AI and Sociology
Challenges of AI Development in Vietnam: Funding, Talent and Ethics
In 2020, Vietnam overtook Singapore in gross domestic product (GDP) to become the third-largest economy in ASEAN, the Association of Southeast Asian Nations. Immediately after the new national leadership was elected at the Communist Party of Vietnam’s Congress in January 2021, President Nguyen Xuan Phuc signed an important document entitled National Strategy on R&D and Application of Artificial Intelligence, or the Strategy Document. The 14-page document outlines plans and initiatives for Vietnam to “promote research, development and application of AI, making it an important technology of Vietnam in the Fourth Industrial Revolution.” Vietnam aims to become “a center for innovation, development of AI solutions and applications in ASEAN and over the world” by 2030.
With its ambitious goals, the strategy document provides some direction on where Vietnam should go in the next decade. It shows Vietnam following in the footsteps of China and other Asian countries in becoming a techno-developmental state, one that takes advantage of technological change to drive economic development. Yet while the document outlines what 16 ministries and the Vietnam Academy of Science and Technology need to do over the next 10 years, it does not address what other players in Vietnam’s AI economy, such as startup founders, civil society, and the beneficiaries of AI (everyday users), should do. It also makes no mention of the role of AI ethics in this development. Without consideration of important ethical issues such as privacy and surveillance, bias and discrimination, and the role of human judgment, AI development in the country might only benefit a small group of people, and possibly bring harm to others.
In this op-ed, we examine three key issues that any country joining the global AI race would have to tackle: Funding, Talent and Ethics.
To delve deeper, read the full article here.
🔬 Research summaries:
Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US
Governments around the world are formulating different strategies to tackle the risks and benefits of AI technologies. These strategies reflect the normative commitments highlighted in high-level documents from bodies such as the EU High-Level Expert Group on AI and the IEEE, among others. The paper “Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US” compares the strategies and progress made in the EU versus the US. The paper concludes by highlighting areas where improvement is still needed to reach a “Good AI Society”.
To delve deeper, read the full summary here.
📰 Article summaries:
The Third Revolution in Warfare
What happened: On the 20th anniversary of the 9/11 attacks, there has been a lot of reflection on warfare and terrorism. With AI pushing into all facets of our lives, it is natural to examine where we will end up with AI-enabled weapons systems. In this article, author and VC Kai-Fu Lee talks about some of the challenges, technical and moral, in the use of autonomous weapons systems. In particular, he highlights the moral dilemmas that arise when we lack clear chains of accountability and transparency in how the systems operate. He also points to potential solutions, ranging from protocols of engagement to outright bans, each of which has a different likelihood of success. There are some potential benefits to the use of AI in warfare, notably the potential to save lives and reduce collateral damage, but they come at a cost.
Why it matters: The current state of the ecosystem is that we have an arms-race atmosphere where it appears that AI-enabled weapons are inevitable and countries are rushing to try out the technology to ensure that they don’t get left behind. The article mentions the Harpy drone from Israel as an example. Of course, some hypothetical scenarios, like the Slaughterbots from a fictional short film, point to a possible future where such capabilities are in the hands of malicious actors who don’t need a lot of resources to execute fairly sophisticated and damaging attacks.
Between the lines: Ultimately, the biggest disruption from the use of AI in warfare will be the degree of leverage it creates for non-state and small actors. Using open-source designs and software with cheap off-the-shelf hardware, they can assemble and deploy weapons that wreak havoc, at least at a moderate scale, harming people while remaining difficult to deter because of the nimbleness of such systems. At the moment, I don’t believe that non-state and low-resourced actors will be able to use such systems to rival large militaries, but these systems definitely give them a leg up in small-scale combat by lowering the costs and collateral they have to put up to engage.
Everyone will be able to clone their voice in the future
What happened: The ability to clone voices has existed for some time now, but the new crop of tools is faster, easier, and more realistic to boot. A simple web search yields results pointing to companies like Respeecher, Resemble.ai, Veritone, and Descript, all of which have product offerings that can create a clone of your voice for various purposes. One of the promising avenues advertised by firms like Veritone is that it allows creative talent, like influencers, to scale their impact by “loaning” out their likeness to advertisers without them needing to be present. The article points out, though, that the results still have a weird warble and lack the emotion and intonation that a real actor can bring, but they are definitely realistic enough to be spooky.
Why it matters: The recent debacle with cloning Anthony Bourdain’s voice showed that even potentially positive uses of such technologies can have an uncanny-valley effect. In other cases, like the use of this technology to revive the voice of Val Kilmer, who suffered voice loss after a tracheotomy, the results were perceived in a much more positive light. The technology can definitely be put to positive use, but this requires a careful consideration of pros and cons, as is the case with all dual-use technology.
Between the lines: Some interesting applications mentioned in the article include using voice clones to make games more personalized: a player’s voice clone could deliver all of the protagonist’s in-game dialogue, making the game more immersive. Another involves using parents’ voice clones to read bedtime stories to children when the parents are away. As long as we can prevent the theft of our voice likenesses, which could be used to automate fraud, such applications definitely have the potential to bring about some useful capabilities.
Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows
What happened: In a perhaps not-so-shocking internal report, a former senior-level data scientist revealed that troll farms continue to command significant, deeply engaged audiences on Facebook. The report highlighted three key shortcomings in the existing platform design that allow pages run by these troll farms, which have never engaged with nor have any knowledge of the communities they influence, to shape those communities’ thinking. First, Facebook doesn’t penalize pages that post unoriginal content, allowing previously viral content to merely be copied and go viral again, perpetuating disinformation. Second, engaging content from pages that users don’t even follow can still show up in their feeds when a friend interacts with that piece of content. And finally, more engaging content is pushed higher in the newsfeed regardless of its type or source, which incentivizes politically divisive and clickbait content to rise to the top.
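To make these shortcomings concrete, here is a minimal, hypothetical sketch of an engagement-only feed ranker. None of the names, numbers, or function signatures come from Facebook’s actual systems; they only illustrate the incentive the report describes.

```python
from dataclasses import dataclass

@dataclass
class Post:
    page: str                    # page that posted the content
    predicted_engagement: float  # likes/comments/shares a model expects
    is_original: bool            # ignored by the naive ranker below

def naive_feed(posts, followed_pages):
    # Shortcoming 1: copied content scores the same as the original
    #   (is_original is never consulted).
    # Shortcoming 2: posts from unfollowed pages stay eligible, e.g.
    #   because a friend interacted with them (followed_pages is unused).
    # Shortcoming 3: ordering is engagement only, regardless of source.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("local-news", 120.0, True),
    Post("troll-farm", 9800.0, False),  # recycled viral outrage post
]
for post in naive_feed(posts, followed_pages={"local-news"}):
    print(post.page, post.predicted_engagement)
# The recycled troll-farm post tops the feed even though the user never
# followed the page and the content is unoriginal.
```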
Why it matters: In an ecosystem where a large number of people get their news from social media platforms rather than traditional media outlets, the combination of the above three forces poses a significant problem for our ability to maintain a healthy information ecosystem. Moreover, by disregarding the type of content and distributing it based purely on its engagement rates, the platform specifically encourages the worst kind of behaviour, which troll farms in places like Kosovo and Macedonia are able to leverage for financial rather than political gains.
Between the lines: The report also provides some suggestions on how to combat this scourge: using something called Graph Authority, one can gauge how authentic and relevant a piece of content is based on the number of reputable inbound and outbound links, something Google has done for several years already. Yet, as per the report, such efforts within Facebook have largely been ignored, and the platform continues to prioritize content with the highest likelihood of driving engagement and usage rather than the quality of the content itself.
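The details of Graph Authority aren’t public, so the following is only a minimal PageRank-style sketch of the general idea the summary points to: a page’s authority derives from the authority of the reputable pages linking to it. The toy graph and page names below are made up for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each node to the list of nodes it links out to."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node keeps a small baseline rank (the teleport term).
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for source, outlinks in links.items():
            for target in outlinks:
                # Each node passes a share of its rank to its outlinks.
                new_rank[target] += damping * rank[source] / len(outlinks)
        rank = new_rank
    return rank

graph = {
    "reuters": ["ap", "local-news"],
    "ap": ["reuters"],
    "local-news": ["reuters", "ap"],
    "troll-page": ["reuters"],  # links out, but earns no reputable inbound links
}
print(pagerank(graph))
# "troll-page" ends up with the lowest authority: nothing reputable links
# to it, however engaging its posts are.
```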
How new regulation is driving the AI governance market
What happened: The article highlights a trend in the current market towards greater adoption of AI governance solutions (frameworks and tools), which is projected to multiply the market value of such solutions to almost 10x the current amount over the next 6 years. This is being driven by incoming regulations, mostly from Europe with burgeoning efforts in the US, combined with increasing consumer savvy around data privacy and other harms, like bias in algorithmic systems, as consumers make purchase and use decisions about the products and services around them.
Why it matters: As highlighted in a report from the Berkeley Center for Long-Term Cybersecurity, AI governance has gone through three major stages since 2016: the development of high-level principles, consensus on those principle sets, and the translation of principles into practice. The market-value trend observed here is a natural extension of that final stage, in which organizations seek solutions that can put the principles into practice.
Between the lines: I believe this trend is taking us into yet another era: first, immature solutions claiming to solve AI governance problems proliferate in the market (the current phase); then comes a culling of players who can’t deliver on their lofty promises; and finally, more mature companies cement their positions in various niches of the AI governance landscape, selling more battle-tested solutions.
📅 Event summaries:
Our Top-5 takeaways from our meetup “Protecting the Ecosystem: AI, Data and Algorithms”
In our meetup with AI Policy Labs, we discussed AI’s role in addressing climate change. From the need for corporate buy-in to the footprint of data centres, AI’s role in the fight is often muddled. Grounding the conversation in what is factual will give us the best chance of using AI well.
To delve deeper, read the full article here.
From our Living Dictionary:
‘Proxy variables’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
From elsewhere on the web:
The Imperative for Sustainable AI Systems
AI systems have a massive carbon footprint. As we explore in this article, this carbon footprint also has consequences for social justice. Here, we use sustainability to refer not just to environmental impact, but also to social justice implications and impacts on society. Note that, though applying AI to solve environmental issues is an important area, that is not what we mean by sustainable AI here. Instead, our focus is a critical examination of the impacts of AI on the physical and social environment.
To delve deeper, read the full article here.
These leading activists are promoting ethical AI
Our founder, Abhishek Gupta, was featured in this article highlighting the work of some of the top activists in the world who are leading the charge for incorporating ethical AI practices in organizations.
To delve deeper, read the full article here.
In case you missed it:
Toward Fairness in AI for People with Disabilities: A Research Roadmap
In this position paper, the authors identify potential areas where Artificial Intelligence (AI) may impact people with disabilities (PWD). Although AI can be extremely beneficial to these populations (the paper provides several examples of such benefits), there is a risk of these systems not working properly for PWD or even discriminating against them. This paper is an effort towards identifying how inclusion issues for PWD may impact AI, which is only a part of the authors’ broader research agenda.
To delve deeper, read the full summary here.
Take Action:
The MAIEI Learning Community cohort report is live now!
This report is a labor of the Learning Community cohort convened by MAIEI in Winter 2021 to work through and discuss important research issues in the field of AI ethics from a multidisciplinary lens. The community came together, supported by facilitators from the MAIEI staff, to vigorously debate and explore the nuances of issues like bias, privacy, disinformation, accountability, and more, especially examining them from the perspectives of industry, civil society, academia, and government.
The outcome of these discussions is reflected in this report – an exploration of a variety of issues with deep-dive, critical commentary on what has been done, what worked and what didn’t, and what remains to be done so that we can meaningfully move forward in addressing the societal challenges posed by the deployment of AI systems.
The chapters titled
“Design and Techno-isolationism”,
“Facebook and the Digital Divide: Perspectives from Myanmar, Mexico, and India”,
“Future of Work”, and
“Media & Communications & Ethical Foresight”
will hopefully provide you with novel lenses for exploring this domain beyond the usual tropes covered in AI ethics.