AI Ethics Brief #103: Ethical AI startups, de-platforming disinformation, computational reflexivity, how GDPR is failing, and more ...
What is the role of Ubuntu in global AI inclusion discourse?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~37-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page; if you’re not already signed in, you’ll be asked to enter your email address again. Once you do, you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
An Overview of Ethical AI Startups
Connor's Review of "A Lesson From AI: Ethics Is Not an Imitation Game"
🔬 Research summaries:
A Lesson From AI: Ethics Is Not an Imitation Game
Positive AI Economic Futures: Insight Report
De-platforming disinformation: conspiracy theories and their control
Regional Differences in Information Privacy Concerns After the Facebook-Cambridge Analytica Data Scandal
Epistemic fragmentation poses a threat to the governance of online targeting
Model Positionality and Computational Reflexivity: Promoting Reflexivity in Data Science
The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective
📰 Article summaries:
How 'Zuck Bucks' saved the 2020 election — and fueled the Big Lie
'The Next Pandemic': Recycled Conspiracies Spread Rapidly Amid Monkeypox Outbreak
How GDPR Is Failing
📖 Living Dictionary:
What is an example of algorithmic pricing?
🌐 From elsewhere on the web:
Emerging Problems: New Challenges in FAccT from Research to Practice to Policy
💡 ICYMI
The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior
But first, our call-to-action this week:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.
✍️ What we’re thinking:
An Overview of Ethical AI Startups
There’s no questioning the ubiquity of artificial intelligence (AI). It sits at the core of Western society, woven into the fabric of our everyday lives.
Over the past three decades, a greater emphasis has been placed on growth, on scale, on the positive potential of AI. It is now applied constantly, in every context, as a solution to every problem. But times have changed. Only recently have thought leaders in the space shifted their thinking away from unparalleled growth (AI has grown enough) and toward controlling AI risk: the negative potential of AI to perpetuate bias, generate disinformation, and much more. When AI fails, it fails explosively.
Solutions came in droves as soon as it became clear to investors, governments, and business owners that AI risk is dangerous and costly, and that consumer trust (which directly translates to profit) is increasingly hard-earned in a world with so many cases of AI run rampant. Many ethical AI vendors are in their infancy: startups attempting to combat the wide world of irresponsible AI. The ethical AI space itself is a relatively small blip on the funding world’s radar: underrepresented, underfunded, and underrated.
This column is a comprehensive guide to the five different categories of “Ethical AI” startups and the dynamics between them. We will analyze trends, make predictions, and identify the strengths and weaknesses of each category of this fascinating and critical startup ecosystem.
To delve deeper, read the full article here.
Connor's Review of "A Lesson From AI: Ethics Is Not an Imitation Game"
🔬 Research summaries:
A Lesson From AI: Ethics Is Not an Imitation Game
While the Turing test significantly influenced the field of machine intelligence, it gave little consideration to ethics. With this in mind, we must be careful not to treat ethics as a neatly packaged set of rules we can input into a machine.
To delve deeper, read the full summary here.
Positive AI Economic Futures: Insight Report
It is time to think about positive futures. If leading computer scientists are correct that machines may outperform human beings at every task within 45 years, then we must consider how we will earn a living, what we will do for fun, and how we will interact with each other in that future society. At this point, there is a troubling lack of vision for what the future should look like. Unless we wish to drift without a compass into an unknown future, we need to engage in interdisciplinary discussions with leading thinkers and scholars, and decide.
To delve deeper, read the full summary here.
De-platforming disinformation: conspiracy theories and their control
Widespread COVID-19 conspiracies and political disinformation prompted Facebook (which now operates under the name Meta) to ramp up countermeasures in 2020. In this paper, crime and security researchers from Cardiff University evaluate the impacts of actions the company took against two prominent COVID-19 conspiracy theorists. Along with assessing the effectiveness of the interventions, the researchers explore how this mode of social control can produce unintended consequences.
To delve deeper, read the full summary here.
Regional Differences in Information Privacy Concerns After the Facebook-Cambridge Analytica Data Scandal
While there is increasing global attention to data privacy, most of our understanding is based on research conducted in a handful of countries in North America and Europe. This paper proposes an approach to studying data privacy over a larger geographical scope. By analyzing Twitter content about the #CambridgeAnalytica scandal, we observe language and regional differences in privacy concerns that hint at a need to extend current information privacy frameworks.
To delve deeper, read the full summary here.
Epistemic fragmentation poses a threat to the governance of online targeting
This paper argues that online targeted advertising (OTA) is not a benign application of machine learning. It creates a phenomenon the authors call epistemic fragmentation, in which users lose contact with their peers, making it impossible to assess whether content is good or harmful beyond one’s own individual experience.
To delve deeper, read the full summary here.
Model Positionality and Computational Reflexivity: Promoting Reflexivity in Data Science
Computational methods and qualitative thinking are not obvious partners, but while the former helps us automate simple reasoning to quantify phenomena in our data, the latter is essential for framing, defining, and understanding what it is that we quantify. This research draws from feminist qualitative research pedagogy to inform new techniques and data representations that help us go beyond evaluating models in terms of accuracy to evaluating them in terms of who they are accurate for.
To delve deeper, read the full summary here.
The role of the African value of Ubuntu in global AI inclusion discourse: A normative ethics perspective
African populations have largely been excluded from the benefits of previous industrial revolutions. In this paper, we explore how the global AI ethics community can include views from Sub-Saharan Africa, improving the terms on which African populations and subpopulations, and their concerns, are included in global AI ethics discourses so that they are not excluded from the current fourth industrial revolution as well. Specifically, we argue that Ubuntu could be of immense use in AI applied normative ethics, particularly toward an inclusive approach to implementing the universal AI ethics principles and guidelines.
To delve deeper, read the full summary here.
📰 Article summaries:
How 'Zuck Bucks' saved the 2020 election — and fueled the Big Lie
What happened: Back in 2020, Mark Zuckerberg offered a total of $419 million in grants to any election official who wanted one, as long as they spent it in ways that made it easier and safer for everyone to vote (e.g., ballot sorters, drop boxes, or poll workers). Much of the money came directly from Chan and Zuckerberg’s personal funds, rather than the Chan Zuckerberg Initiative (CZI), and it was routed through the Silicon Valley Community Foundation. This particular article focuses on the money that the Center for Election Innovation & Research (CEIR) and the Center for Tech and Civic Life (CTCL) received from CZI.
Why it matters: It is ironic that even Zuckerberg himself agreed that individual donors shouldn’t be the ones funding elections. In his October 2020 Facebook post about the grants, he stated that the “government should have provided these funds, not private citizens.” The “chronic starvation of election officials” is a well-known problem; what makes the Zuck Bucks story different is that this unprecedented funding of an election can be traced back to a single Big Tech billionaire.
Between the lines: On one hand, some people have argued that Zuckerberg alone created an imbalance or distortion of the electoral system. To counter this, others have pointed out that, since elections are run and financed at the local level, there’s never been balance to begin with. Studies show that this is partly why districts with more minority voters often have fewer voting machines, leading to longer lines. Underfunding elections in minority districts tends to hurt Democratic turnout. As much of Zuckerberg’s funds went to those districts, the question of how his involvement affected the final results still remains.
'The Next Pandemic': Recycled Conspiracies Spread Rapidly Amid Monkeypox Outbreak
What happened: There has recently been a spike in Monkeypox-related conspiracy content, which uses the same narrative playbook as past anti-vaccine campaigns across various social media platforms. For example, YouTube videos blaming Bill Gates for the outbreak have gained hundreds of thousands of views, and a TikTok video segment likewise portrays Gates as the one responsible. Misinformation targeting LGBTQ communities and the convergence of antivax and Russian state-sponsored content on fringe platforms have also been on the rise.
Why it matters: This article highlights the vulnerabilities in our information ecosystem, which are increasingly being exploited by conspiracy theorists. Firstly, the ecosystem is built for pre-made content: it is primed to receive an injection of new, prepared narratives, allowing conspiracy theorists to adapt quickly to rapid developments. Secondly, the conspiracy narratives that perform best are built around true events that conspiracy theorists then extrapolate. Lastly, although social media platforms have taken several steps to limit the spread of vaccine conspiracy theories, access through third-party platforms and new influencers means that conspiratorial narratives are still being amplified.
Between the lines: There are several very real costs of disinformation that were dismissed years ago but are coming to the forefront today. Take, for example, the decline in the quality of information in our public sphere and the amplification of divisions in our society, which have created an environment where polarization and tribalism can quickly emerge. One key question in the spotlight recently: how has the design of platforms contributed to the problems of mis- and disinformation?
How GDPR Is Failing
What happened: The General Data Protection Regulation (GDPR) is built around seven principles that guide how your data can be handled, stored, and used. These principles apply equally to charities, governments, pharmaceutical companies, and Big Tech firms. However, since the regulation came into effect in 2018, data regulators have struggled to enforce complaints against Big Tech firms in a timely manner. It has reached the point where civil society groups have grown frustrated with GDPR’s limitations, and some countries’ regulators complain that the system for handling international complaints slows down proper enforcement.
Why it matters: Over the last year, there has been increasing pressure to change how GDPR works. There is no question that GDPR has improved the privacy rights of millions inside and outside of Europe, but major problems remain. Data brokers can still stockpile your information and sell it, and the online advertising industry remains rife with risky practices. At this rate, GDPR could fail to prevent the worst practices of Big Tech companies. To help solve this problem, there have been suggestions to centralize enforcement for big cases in an effort to reduce the lag in holding Big Tech to account.
Between the lines: It is worth taking a closer look at GDPR’s impact, since it has also improved company behavior. There is much more awareness around cybersecurity, data protection, and privacy, and the fact that businesses are increasingly allocating significant budgets to data protection compliance is a positive sign. Unfortunately, that does not offset the disappointing enforcement of GDPR against Big Tech companies, where the volumes of data at stake are greatest.
📖 From our Living Dictionary:
What is an example of algorithmic pricing?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
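For a taste of what the entry covers, here is a minimal, hypothetical sketch of algorithmic pricing in Python: a seller’s price adjusts automatically based on observed demand and a competitor’s price. Every number, threshold, and name below is invented for illustration, not drawn from any real marketplace.

```python
# Hypothetical sketch of algorithmic pricing: a rule, not a human,
# sets the price from observed demand signals. All figures invented.

BASE_PRICE = 20.00  # seller's list price, in dollars

def algorithmic_price(base_price: float, views_last_hour: int,
                      competitor_price: float) -> float:
    """Raise the price when demand is high, discount it when low."""
    if views_last_hour > 100:      # demand surge -> mark up 15%
        price = base_price * 1.15
    elif views_last_hour < 10:     # weak demand -> 10% discount
        price = base_price * 0.90
    else:
        price = base_price
    # never price more than one cent above the competitor
    return round(min(price, competitor_price + 0.01), 2)

print(algorithmic_price(BASE_PRICE, views_last_hour=150,
                        competitor_price=24.50))  # -> 23.0
```

Real-world systems (airline fares, ride-hailing surge pricing, Amazon marketplace repricers) use far richer signals, but the core idea is the same: software sets and continually revises the price.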
🌐 From elsewhere on the web:
Emerging Problems: New Challenges in FAccT from Research to Practice to Policy
Our founder, Abhishek Gupta, will be speaking at the Emerging Problems: New Challenges in FAccT from Research to Practice to Policy workshop on the subject of “Organizational Approaches to FAccT and Cultural Change”.
💡 In case you missed it:
The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior
Can robots impact human risk-taking behavior? In this study, the authors use the balloon analogue risk task (BART) to measure participants’ risk-taking behavior when they were 1) alone, 2) in the presence of a silent robot, and 3) in the presence of a robot that encouraged risky behavior. The results show that risk-taking did increase among participants when they were encouraged by the robot.
To delve deeper, read the full article here.
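For readers unfamiliar with the BART, the sketch below simulates its core mechanic under simplifying assumptions: a fixed pop probability and fixed earnings per pump, with both parameters invented rather than taken from the study. Attempting more pumps per balloon means taking more risk, which is the behavior the robot’s encouragement shifted.

```python
import random

# Minimal sketch of the Balloon Analogue Risk Task (BART) mechanic.
# Each pump adds to a temporary bank but risks popping the balloon,
# which forfeits that trial's earnings. Parameters are illustrative.

def bart_trial(pumps_attempted: int, pop_prob: float = 0.05,
               cents_per_pump: int = 5) -> int:
    """Return cents earned on one trial; 0 if the balloon pops."""
    bank = 0
    for _ in range(pumps_attempted):
        if random.random() < pop_prob:
            return 0          # balloon popped; earnings lost
        bank += cents_per_pump
    return bank               # participant banked the money

# A riskier strategy attempts more pumps per balloon; average pumps
# attempted is the task's standard risk-taking measure.
random.seed(0)
cautious = sum(bart_trial(5) for _ in range(1000)) / 1000
risky = sum(bart_trial(20) for _ in range(1000)) / 1000
print(f"cautious strategy: {cautious:.1f} cents/trial")
print(f"risky strategy:    {risky:.1f} cents/trial")
```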
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.