AI Ethics Brief #91: Decolonizing AI, GDPR, Google's FLoC, responsibility assignment and moral issues, and more ...
Could there be good news about the carbon footprint of machine learning training?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~24-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The “Stanislavsky projects” approach to teaching technology ethics
HAI Weekly Seminar Series: Decolonizing AI with Sabelo Mhlambi
🔬 Research summaries:
The Impact of the GDPR on Artificial Intelligence
Privacy Limitations Of Interest-based Advertising On The Web: A Post-mortem Empirical Analysis Of Google’s FLoC
Responsibility assignment won’t solve the moral issues of artificial intelligence
📰 Article summaries:
Cow, Bull, and the Meaning of AI Essays
Dementia content gets billions of views on TikTok. Whose story does it tell?
Good News About the Carbon Footprint of Machine Learning Training
📖 Living Dictionary:
Internet of Things
🌐 From elsewhere on the web:
Artificial Intelligence Interview with Connor Wright
US Congress takes another run at AI accountability
💡 ICYMI
Reliabilism and the Testimony of Robots
But first, our call-to-action this week:
State of AI Ethics Report - Volume 6 - February 2022
If you haven’t had a chance to catch the latest edition of the report yet, we encourage you to grab a copy. It is our most comprehensive report yet, spanning nearly 300 pages and covering:
(1) What we’re thinking
(2) Analysis of the AI Ecosystem
(3) Privacy
(4) Bias
(5) Social Media and Problematic Information
(6) AI Design and Governance
(7) Laws and Regulations
(8) Trends and
(9) Outside the Boxes.
Our goal with these chapters is to provide both an in-depth analysis of each of these areas (though by no means an exhaustive one, given the richness of each subdomain) and a breadth of coverage for those looking to save hundreds of hours parsing through the latest research and reporting in the domain.
Our ask of you this week: if you know of any media outlets that might want to do a feature on our report, please help us out by making an introduction!
✍️ What we’re thinking:
The “Stanislavsky projects” approach to teaching technology ethics
Join us again for some exciting new ideas on how to shape curriculum design in the tech ethics space. This month, Enrico Panai shares his experience as a Data & AI Ethicist and Human Information Interaction Specialist. Following his studies in philosophy and several years as a consultant in Italy, he taught for seven years as an adjunct professor of Digital Humanities in the Department of Philosophy at the University of Sassari. And as always, please get in touch if you want to share your opinions and insights on this fast-developing field.
To delve deeper, read the full article here.
HAI Weekly Seminar Series: Decolonizing AI with Sabelo Mhlambi
Are current data practices taking away more than they are giving? In this webinar, Sabelosethu Mhlambi touches upon our colonial past and compares it with the modern day. A more interrelated approach is required, he argues, and it must start with nations achieving economic independence.
To delve deeper, read the full article here.
🔬 Research summaries:
The Impact of the GDPR on Artificial Intelligence
The report addresses the relationship between the General Data Protection Regulation (GDPR) and Artificial Intelligence (AI). It analyzes how AI is regulated in the GDPR and the extent to which AI fits into the GDPR framework, and discusses the tensions and proximities between AI and data protection principles, particularly purpose limitation and data minimization. The report also conducts an in-depth analysis of automated decision-making, the safeguards to be adopted, and whether data subjects have a right to individual explanations.
To delve deeper, read the full summary here.
Privacy Limitations Of Interest-based Advertising On The Web: A Post-mortem Empirical Analysis Of Google’s FLoC
FLoC was a new approach to keeping the current internet ad ecosystem profitable without third-party cookies while protecting user privacy. Researchers quickly raised alarm bells about potential privacy issues, few of which were addressed or explored by researchers or by Google before the trial. In this paper, the authors empirically examine the privacy risks raised about FLoC, finding that it would have allowed individuals to be tracked across the web, contrary to its core aims.
To delve deeper, read the full summary here.
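For intuition on why cohort-based advertising can still enable tracking, here is a minimal sketch in Python. It is illustrative only: the users, cohort IDs, and matching logic below are hypothetical toys, not Chrome’s actual SimHash-based cohort assignment or the paper’s methodology. It shows how a tracker that logs a browser’s weekly cohort IDs can intersect the sets of users sharing each ID until the anonymity set shrinks to a single person.

```python
# Toy illustration of the FLoC re-identification risk described in the paper.
# Assumption: a tracker observes a browser's cohort ID each week. Users who
# share one week's cohort are indistinguishable that week, but the *sequence*
# of cohort IDs across weeks acts like a fingerprint.

# Hypothetical population: user -> cohort IDs observed in weeks 1..3
weekly_cohorts = {
    "alice": [1054, 2211, 877],
    "bob":   [1054, 1989, 877],
    "carol": [1054, 2211, 432],
    "dave":  [3020, 2211, 877],
}

def anonymity_set(observed_sequence):
    """Return users whose cohort history matches every observed weekly ID."""
    candidates = set(weekly_cohorts)
    for week, cohort_id in enumerate(observed_sequence):
        candidates &= {u for u, seq in weekly_cohorts.items()
                       if seq[week] == cohort_id}
    return candidates

# A tracker watching one browser report cohorts 1054, 2211, 877 over 3 weeks:
print(anonymity_set([1054]))             # {'alice', 'bob', 'carol'} -> 3 users
print(anonymity_set([1054, 2211]))       # {'alice', 'carol'}        -> 2 users
print(anonymity_set([1054, 2211, 877]))  # {'alice'}                 -> unique
```

Each weekly cohort is k-anonymous on its own; it is the sequence of cohorts over time, combinable with other fingerprinting signals, that can re-identify a browser.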
Responsibility assignment won’t solve the moral issues of artificial intelligence
The multitude of AI ethics guidelines published over the last 10 years take certain buzzwords, such as ‘responsibility’, for granted, without due inspection of their philosophical foundations or of whether they are actually fit for purpose. This paper challenges the notion that ‘responsibility’ is a suitable and sufficient concept for AI ethics work.
To delve deeper, read the full summary here.
📰 Article summaries:
Cow, Bull, and the Meaning of AI Essays
What happened: The article covers a company called The Good AI that offers an AI-enabled essay generator, and discusses the broader implications of more realistic text-generation capabilities along with the limitations of these technologies today. Diving into the specifics of a piece of text generated on political trends in West Virginia, the author shows that the output leaves much to be desired in both relevance and factual grounding; even workaday auto-correct errors were left unchecked in the text. But the productization of such a capability beyond limited API access (as had been the case until recently for large language models from OpenAI, for example) does change the dynamic for how these capabilities might be used in the wild.
Why it matters: What’s shocking is that the caveats of such a tool, including the risk of plagiarism among others, aren’t well articulated to its target audience, especially students who may not fully understand the modus operandi of such a tool. Given that the academic world now employs automated defenses against plagiarism through software like Turnitin, this seems like a gross oversight. More broadly, the democratization of advanced text-generation technologies has the potential to ratchet up the dissemination and efficacy of problematic information (mis-, dis-, and mal-information). Not only does it allow small actors to achieve scale, it also makes each of those pieces of text more believable.
Between the lines: We don’t have much to worry about just yet from the tool discussed in the article, because it is at best an incoherent and poor facsimile of what humans can produce, but it does point to a worrying trend. We have already struggled with problematic information on social media and its harms to the fundamental democratic institutions we hold dear; we will need stronger regulations and governance mechanisms for the market availability of tools that have a strong potential to be misused by malicious actors to undermine the health of our information ecosystem.
Dementia content gets billions of views on TikTok. Whose story does it tell?
What happened: The topic #dementia has been garnering billions of views on TikTok, and not all of that content is educational material that helps care partners of those affected by dementia. Some of it centers on conflicts and negative patterns of interaction between the two parties, which has raised concerns from both activists and organizations that work with people who have dementia and their care partners. In documenting the story of one TikTok influencer, the article describes how the care partner altered her posting pattern, holding on to a piece of content for 24 hours before publishing it to make sure it still jibed with her values and with how she wanted to portray the relationship. It is a sign of the times, when every human interaction is up for showcasing, that we must pause and consider why, and for whom, we engage in this dance in the first place.
Why it matters: People with dementia lose their ability to provide informed consent as symptoms intensify. Documenting their lives and sharing them on social media while they are still able to consent might be fine, but it is not at all clear that such consent retains its relevance as the condition progresses. While those who have been granted power of attorney do have legal rights to act on behalf of that person, it is not clear whether that legal right should extend into the person’s digital affairs. Social media platforms do offer controls to manage one’s digital presence in the event of one’s passing, but there is no guidance, and there are no controls, for when someone is still alive yet unable to manage their own identity, say due to something like dementia. This is only exacerbated by the fact that the issue is rarely discussed in those individuals’ care plans, even with their family members.
Between the lines: Social media is reshaping society in the image of whatever works best to keep eyeballs glued to a screen, with a view to extracting every last ounce of our attention and eking out profit from that attention grab. Dynamics within households, and how we interact with each other, are rapidly evolving as we tailor our real-world activities to what is expected of our digital selves. How care partners interact with those in their care is not fodder for entertainment; it is a wholly private matter that has unfortunately been gamified by these social media dynamics, to the detriment of both those with dementia and their care partners. Educational videos that inform are fine, but the virality arising from the vulgarization of this relationship on social media is certainly very damaging.
Good News About the Carbon Footprint of Machine Learning Training
What happened: Google researchers put forth the 4M framework to mitigate the carbon footprint of ML systems: selecting efficient models, using optimized machines (hardware), leveraging mechanization (that is, using cloud computing to lean on efficiencies of scale), and finally map optimization (allowing users to pick regions with carbon-efficient energy sources). They walk through an example showcasing how they were able to achieve a 747x (yes, you read that right!) improvement. Following this framework can set the AI ecosystem on a path where sustainability really does become a first-class citizen in the AI lifecycle.
Why it matters: The clarification of the underlying mechanics at the software level (with details on neural architecture search) and the hardware level (with the operating characteristics of TPUs) helps correct errors in previous research that over-reported the scale of computation consumed in producing the open-sourced Evolved Transformer model. What really stands out is the importance of more transparent access to what actually goes on when large-scale models are open-sourced, and of guiding concrete technical and policy research based on that. The UMass study referenced in this work made some assumptions that were not clearly flagged as such in later reporting on that research, leading to claims more sensationalist than the facts warranted.
Between the lines: AI systems have a massive carbon footprint, and that footprint is intimately linked to the societal impacts of these systems as well. This research presents a concrete approach to mitigating the environmental impacts, but hopefully it doesn’t divert our attention from the risks of indiscriminately deploying AI systems just because they become cheaper, financially and environmentally. These systems still have broad-scale societal implications, and, as with the democratization of any general-purpose technology, more widespread use comes with both positive and negative consequences.
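To make the multiplicative logic of 4M concrete, here is a back-of-envelope sketch in Python. The per-factor numbers below are invented placeholders, not the figures Google reports; only the structure, improvements compounding multiplicatively across model, machine, mechanization, and map, reflects the framework described in the article.

```python
# Illustrative only: how independent efficiency gains along the 4M axes
# multiply into a large overall reduction. These factors are hypothetical
# placeholders, NOT the actual numbers reported by the Google researchers.
improvement_factors = {
    "model (sparse/efficient architecture)":        10.0,
    "machine (ML-optimized hardware, e.g. TPUs)":    5.0,
    "mechanization (efficient cloud datacenter)":    1.4,
    "map (low-carbon-intensity region)":             9.0,
}

overall = 1.0
for axis, factor in improvement_factors.items():
    overall *= factor
    print(f"{axis}: x{factor} (cumulative: x{overall:.0f})")

# Emissions scale roughly as energy x PUE x grid carbon intensity, so a
# hypothetical baseline of 1000 tCO2e would shrink by the combined factor.
baseline_tco2e = 1000.0
print(f"Estimated footprint: {baseline_tco2e / overall:.2f} tCO2e")
```

Because the factors multiply, modest-looking gains at each of the four layers can compound into the kind of triple-digit overall reduction the researchers report.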
📖 From our Living Dictionary:
“Internet of Things”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Artificial Intelligence Interview with Connor Wright
Our Partnerships Manager Connor Wright was interviewed in this series by Machine Learning Africa, which regularly holds conversations on Artificial Intelligence with thought leaders in Africa.
US Congress takes another run at AI accountability
The Montreal AI Ethics Institute is proud to endorse the Algorithmic Accountability Act of 2022 alongside other organizations including Accountable Tech, Aerica Shimizu Banks, the Center for Democracy and Technology, Color of Change, Consumer Reports, Credo AI, the Electronic Privacy Information Center, Fight for the Future, IEEE, JustFix, OpenMined, and Parity AI.
💡 In case you missed it:
Reliabilism and the Testimony of Robots
In this paper, the author Billy Wheeler asks whether we should treat the knowledge gained from robots as a form of testimonial versus instrument-based knowledge. In other words, should we consider robots as able to offer testimony, or are they simply instruments similar to calculators or thermostats? Seeing robots as a source of testimony could shape the epistemic and social relations we have with them. The author’s main suggestion in this paper is that some robots can be seen as capable of testimony because they share the following human-like characteristic: their ability to be a source of epistemic trust.
To delve deeper, read the full summary here.
Take Action:
State of AI Ethics Report - Volume 6 - February 2022
If you haven’t had a chance to catch the latest edition of the report yet, we encourage you to grab a copy. It is our most comprehensive report yet, spanning nearly 300 pages and covering:
(1) What we’re thinking
(2) Analysis of the AI Ecosystem
(3) Privacy
(4) Bias
(5) Social Media and Problematic Information
(6) AI Design and Governance
(7) Laws and Regulations
(8) Trends and
(9) Outside the Boxes.
Our goal with these chapters is to provide both an in-depth analysis of each of these areas (though by no means an exhaustive one, given the richness of each subdomain) and a breadth of coverage for those looking to save hundreds of hours parsing through the latest research and reporting in the domain.
Our ask of you this week: if you know of any media outlets that might want to do a feature on our report, please help us out by making an introduction!