The AI Ethics Brief #54: Slow AI, Shadow bans, hard choices in using AI in medicine, and more ...
Can we get along with robots?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
From the Founder’s Desk: The importance of systems adaptability for meaningful Responsible AI deployment
The Sociology of AI Ethics: Slow AI and the Culture of Speed
🔬 Research summaries:
Making Kin with Machines
📰 Article summaries:
Shadow Bans, Dopamine Hits, and Viral Videos, All in the Life of TikTok Creators (The Markup)
Want to Get Along With Robots? Pretend They’re Animals (Wired)
Hard Choices: AI in healthcare (Yale Medicine)
Google Promised Its Contact Tracing App Was Completely Private—But It Wasn’t (The Markup)
But first, our call-to-action this week:
Register for our next meetup: Self-disclosure for an AI product: A practical workshop and feedback session
We’re partnering with Open Ethics to host a discussion on self-disclosure, where participants can learn about the practice and companies can demonstrate their AI product self-disclosure process. This includes looking at how an AI product was built and the downstream effects it may have on the people using it.
📅 May 13th (Thursday)
🕛 1:00PM – 2:30PM EST
🎫 Get free tickets
✍️ What we’re thinking:
From the Founder’s Desk:
The importance of systems adaptability for meaningful Responsible AI deployment by Abhishek Gupta
So far we've covered a few ideas on how to deploy Responsible AI, including "The importance of goal setting in product development to achieve Responsible AI", "Tradeoff determination for ethics, safety, and inclusivity in AI systems", and "Systems Design Thinking for Responsible AI". These pieces have shown us that borrowing ideas from adjacent and related domains can help make Responsible AI a reality rather than a pipe dream.
Let's use this post to explore the idea of Systems Adaptability as another tenet that can help us achieve Responsible AI in practice.
To delve deeper, read the full article here.
The Sociology of AI Ethics:
Slow AI and the Culture of Speed
The sociology of speed considers how people experience temporality in different contexts and how humans make meaning out of the social construct of time. John Tomlinson argues that we’ve moved from a culture of “mechanical speed” that dominated the 19th century as the Western world industrialized to “telemediated speed” at the turn of this century. This “immediacy” is an accelerated experience of time where information, goods, and people can be accessed immediately and effortlessly anywhere, anytime. While this is not categorically undesirable, Tomlinson considers how this imperative limits our imaginaries of the good life.
To delve deeper, read the full article here.
🔬 Research summaries:
Making Kin with Machines
Indigenous epistemologies offer ethical frameworks and principles for understanding how we build, and how we relate to, AI and machines. The essay discusses how we can understand creating kinship with AI through Indigenous epistemologies grounded in respect and reciprocity. The authors draw upon Cree, Lakota, and Hawaiian cultural knowledge to acknowledge a responsibility to include computational creations in the circle of relationships.
To delve deeper, read the full summary here.
📰 Article summaries:
Shadow Bans, Dopamine Hits, and Viral Videos, All in the Life of TikTok Creators (The Markup)
What happened: TikTok has been accused in the past of taking a heavy-handed approach to content moderation. As noted in the article, the platform employs a mix of human and machine moderation (common to a lot of platforms), but whereas platforms like Facebook embody a “keep” approach, TikTok has been known to be swift with content takedowns. There are also other, less visible ways to deploy content moderation, notably shadow bans, whereby content is silently demoted so that it no longer appears in consumers’ feeds. Such bans are often predicated on characteristics of the content creators, like their looks and skin colour, which leads to discrimination.
Why it matters: The opacity of how the TikTok algorithm operates (as is also the case on other platforms) has led to a lot of speculation and unwanted behaviour from content creators, who are under constant stress to maintain earnings that depend on the number of views they get on the platform.
Between the lines: As more self-help groups emerge to address content creators’ woes, such as those on Reddit, the playing field between corporations and the people using these platforms might start to level. Until then, informal fora and shared tips & tricks may be creators’ best bet for pushing back against the unfairness of the system.
Want to Get Along With Robots? Pretend They’re Animals (Wired)
What happened: Kate Darling, a robot ethicist at MIT, talks about how we can reframe the conversation on human-machine interaction through the lens of the relationship between humans and animals. Drawing on historical accounts of animals being tried in courts for wrongdoing, Darling argues that comparing robots to animals gives us a more nuanced perspective: animals have acted alongside humans as co-workers, companions, soldiers, and more, bringing complementary skills to the fore.
Why it matters: This reframing is important because it helps us better understand not only how we interact with robots but also how we think about issues like accountability when technological failures harm humans. It also helps us move away from the usual dystopian narratives towards a more realistic view of the role that robots play, and will play, in our society.
Between the lines: Discussions like these help move the field forward, especially at a time when frothy debates about robot rights grapple with problems that aren’t relevant now and may not be relevant in the near future. Meanwhile, real AI systems are deployed in the world today, interacting with humans and manipulating them to different ends. Finding new ways to have these discussions, and inviting people from a variety of backgrounds into them, will help elevate the level of the conversation.
Hard Choices: AI in healthcare (Yale Medicine)
What happened: In a discussion with researchers from the renowned Yale Interdisciplinary Center for Bioethics, this article highlights how doctors’ agency and the amplification of bias are at the heart of using AI in healthcare. Readers of the AI Ethics Brief are already familiar with bias concerns in AI systems, but the erosion of doctors’ agency as they come to rely more on machines is a grave problem. This can happen either intentionally (for example, when doctors defer to a system that has been right many times in the past in order to avoid litigation in a place like the US) or unintentionally (a sort of learned behaviour akin to automation bias).
Why it matters: This is critical to examine because of the direct implications for human life. Doctors hold a significant role in society: we place our lives in their hands in our most vulnerable moments. If they in turn place that trust into the digital hands of machines, we risk being subject to inexplicable decisions from systems built by corporations that may have optimized them for goals orthogonal to patient welfare.
Between the lines: Retraining doctors, and educating medical students, on the problems such automated systems bring and how to avoid those pitfalls will be essential. To that end, AI researchers and developers should help bring that knowledge into medical schools, and in turn learn from doctors so they can incorporate those lessons into their own system design, if we are to make meaningful progress on building more ethical, safe, and inclusive AI systems for healthcare.
Google Promised Its Contact Tracing App Was Completely Private—But It Wasn’t (The Markup)
What happened: The Exposure Notification Framework, rolled out jointly by Google and Apple last year to enable digital contact tracing, has been found to have flaws in its privacy claims. While both companies had said that no private data left the device unless the user elected to share it or identified themselves as COVID-positive, researchers have found otherwise. Bluetooth identifiers and other sensitive information are logged temporarily in system logs on the phone; these logs are accessible to apps pre-installed by the phone manufacturer and could theoretically be exfiltrated to their servers as part of the usage and crash report analytics performed by the device.
Why it matters: Given the large number of people around the world who have downloaded apps built on this Framework, the potential for leaking sensitive information is significant. Device identifiers and other information can be correlated to track people, though both the researchers and Google point out that there is no evidence so far that this has happened.
Between the lines: The fix, as described by the researchers, is simple to implement and would not change the core functioning of the Framework or the apps built on it. Yet Google repeatedly downplayed the severity of the issue, and only took the security researchers’ concerns seriously after The Markup reached out. It is another incident in which Google has shown that it responds only when there is a threat of a PR fiasco.
From our Living Dictionary:
‘Monte Carlo methods’
Monte Carlo methods are a class of techniques that use repeated random sampling from a probability distribution to estimate quantities that are difficult to compute analytically.
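To make the idea concrete, here is a minimal illustrative sketch in Python (our own example, not part of the dictionary entry): it estimates π by drawing random points in the unit square and counting how many fall inside the quarter circle of radius 1.

```python
import random


def estimate_pi(num_samples: int = 1_000_000, seed: int = 0) -> float:
    """Monte Carlo estimate of pi: sample points uniformly in the unit square
    and count the fraction that land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()  # uniform samples in [0, 1)
        if x * x + y * y <= 1.0:           # point lies inside the quarter circle
            inside += 1
    # (area of quarter circle) / (area of square) = pi / 4,
    # so the observed fraction times 4 approximates pi.
    return 4 * inside / num_samples


if __name__ == "__main__":
    print(estimate_pi())  # approaches 3.14159... as num_samples grows
```

The error of such an estimate shrinks roughly in proportion to 1/√N, which is why Monte Carlo methods are attractive where exact computation is intractable but cheap random sampling is available.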
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
Explore the Living Dictionary!
From elsewhere on the web:
A (more) visual guide to the proposed EU Artificial Intelligence Act
This post aims to clarify the main components of, and aid in understanding, the Proposal for a Regulation on Artificial Intelligence published by the European Commission.
csv,conf,v6 event (community-driven data conference)
It’s an event that’s not literally about the CSV file format, but rather about what CSV represents with regard to wider community ideals (data interoperability, hackability, simplicity, etc.).
Our founder Abhishek Gupta will take a look at why you should keep a lab notebook for all your data science work, how you should go about maintaining it, and what you should (and should NOT) include in it.
In case you missed it:
The State of AI Ethics Report (Volume 4) captures the most relevant developments in AI Ethics since January 2021.
To save you time and quickly get you up to speed on what happened in the past quarter, we’ve distilled the research & reporting around 4 key themes:
Ethical AI
Fairness & Justice
Humans & Tech
Privacy
To delve deeper, read the full report here.
Take Action:
Events:
Self-disclosure for an AI product: A practical workshop and feedback session
We’re partnering with Open Ethics to host a discussion on self-disclosure, where participants can learn about the practice and companies can demonstrate their AI product self-disclosure process. This includes looking at how an AI product was built and the downstream effects it may have on the people using it.
📅 May 13th (Thursday)
🕛 1:00PM – 2:30PM EST
🎫 Get free tickets