AI Ethics Brief #114: Sociotechnical specs for AVs, privacy in drone delivery, bothersome bot behaviour in Chile referendum, and more ...
Do you believe that social media is dead?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~36-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Once you do, you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Unstable Diffusion: Ethical challenges and some ways forward
Now I’m Seen: An AI Ethics Discussion Across the Globe
🔬 Research summaries:
Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions
Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles
GAM(e) changer or not? An evaluation of interpretable machine learning models
Routing with Privacy for Drone Package Delivery Systems
Bots don’t Vote, but They Surely Bother! A Study of Anomalous Accounts in a National Referendum
📰 Article summaries:
I tested an AI app for visually impaired people – and this is what I found
Social media is dead
The lawsuit that could rewrite the rules of AI copyright
📖 Living Dictionary:
What is a proxy variable?
🌐 From elsewhere on the web:
L’art de copier sans payer (The art of copying without paying)
Why Your Organization Needs Sustainable Software
💡 ICYMI
Consent as a Foundation for Responsible Autonomy
But first, our call-to-action this week:
Join us for an AI Ethics meetup in Montreal!
Do you miss the days when the Montreal AI Ethics Institute hosted meetups in Montreal, engaging with our local community on AI ethics at community partner venues? We’re bringing them back! Let us know in a quick survey here if that’s something you’d be interested in, and we’ll put one on to bring us all together.
✍️ What we’re thinking:
Unstable Diffusion: Ethical challenges and some ways forward
In a short-lived (?) Discord server called Unstable Diffusion, Internet users inclined to use the open-source version of Stable Diffusion for nefarious purposes wreaked havoc generating NSFW images, feeding the system prompts that produced material that would be instantly banned on platforms such as Instagram and Reddit. Such servers become hotbeds that accumulate problematic content in a single place, demonstrating both these systems’ capability to generate this type of content and their role in connecting malicious users who help one another sharpen their “skills” at generating it.
Users crafted prompts with the explicit goal of triggering NSFW outputs, exchanging tips with each other on subreddits on how to (a) get higher-quality images aligned with their malicious goals, (b) use different prompting strategies to evade safety controls, and (c) preserve outputs and model artifacts in case the server was banned.
My examination of this egregious behavior stems from studying Reddit communities whose broader membership is interested in the safe and ethical use of such systems, rather than the Discord server itself. Three key issues stood out to me: consent and intellectual property rights, trauma, and the broader state of society. At the end of the article, I also offer suggestions for preventing and mitigating negative outcomes from such systems, aimed at those building or using them: paying attention to combinatorial outputs, guarding against subversion of safety controls, and investing in channels for community feedback.
To delve deeper, read the full article here.
Now I’m Seen: An AI Ethics Discussion Across the Globe
A summary of our panel discussion “Now I’m Seen: An AI Ethics Discussion Across the Globe” with Claudia May del Pozo and Khoa Lam. In touching on the differences in perspective between the Mexican and Vietnamese contexts, the panel made the importance of cross-cultural discussion and understanding evident for all to see.
To delve deeper, read the full article here.
🔬 Research summaries:
Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions
To ensure that fundamental principles such as beneficence, respect for human autonomy, prevention of harm, justice, privacy, and transparency are respected, medical machine learning systems must be developed responsibly. This survey provides an overview of the technical and procedural challenges that arise when creating medical machine learning systems responsibly and in conformance with existing regulations, as well as possible solutions to address these challenges.
To delve deeper, read the full summary here.
Sociotechnical Specification for the Broader Impacts of Autonomous Vehicles
As autonomous vehicle (AV) fleets increase in size and capability, they will fundamentally transform how people interact with transportation systems. These effects may include regulating traffic flow, changing how easily different locations can be reached, and reshaping the relationship between pedestrians, cyclists, and cars on the roadway. This paper highlights the technical and social problems that arise as more and more features of the transportation system are placed under the control of AV designers.
To delve deeper, read the full summary here.
GAM(e) changer or not? An evaluation of interpretable machine learning models
Generalized additive models (GAMs) are a promising class of interpretable machine learning models. In a GAM, each predictor variable is modeled independently in a non-linear way to produce a so-called shape function. Shape functions can capture arbitrary patterns while remaining fully interpretable. As the community has brought forth several advancements in the field, we investigate a series of innovative GAM extensions and assess their suitability for various regression and classification tasks. More specifically, we compare the predictive performance of five different GAM variants against six traditional machine learning models, and we demonstrate that there is no strict trade-off between model interpretability and model accuracy for prediction tasks on tabular data.
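To make the additive structure concrete, here is a minimal sketch of fitting a GAM and inspecting one of its shape functions. It assumes the pygam package and a scikit-learn toy dataset, neither of which the paper necessarily uses; it illustrates the general idea rather than the specific GAM variants evaluated in the study.

```python
# Minimal GAM sketch (assumes `pip install pygam scikit-learn`).
from pygam import LogisticGAM, s
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

# Each s(i) fits an independent non-linear spline (a "shape function")
# for feature i; the model's prediction is the sum of these functions.
gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X[:, :3], y)

# A fitted shape function can be inspected feature by feature,
# which is the property that makes GAMs fully interpretable.
grid = gam.generate_X_grid(term=0)
print(gam.partial_dependence(term=0, X=grid)[:5])
```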
To delve deeper, read the full summary here.
Routing with Privacy for Drone Package Delivery Systems
Drone package delivery systems may lead to loss of consumer privacy, given current safety regulations. We formalize and analyze these privacy concerns and propose strategies to mitigate these issues. We also examine possible trade-offs between privacy and delivery time.
To delve deeper, read the full summary here.
Bots don’t Vote, but They Surely Bother! A Study of Anomalous Accounts in a National Referendum
In 2020, Chile held a national referendum in the wake of the social uprising that erupted in 2019. This paper analyzes the political discussion on Twitter in the three months preceding the referendum. Using machine learning methods, we estimate the distribution of referendum stances on Twitter and quantify the influence of bots on the discussion.
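As a purely hypothetical illustration of what stance estimation can look like, the sketch below trains a bag-of-words classifier on a handful of invented tweets using scikit-learn. It is not the authors’ pipeline, whose actual methods this summary does not describe.

```python
# Toy stance classifier (assumes scikit-learn); the tweets and labels
# below are invented for illustration, not data from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["apruebo una nueva constitucion", "rechazo este proceso",
          "vota apruebo", "yo rechazo"]
stances = [1, 0, 1, 0]  # 1 = approve, 0 = reject (hypothetical labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, stances)

# Applying the classifier to unlabelled tweets gives an estimate of the
# stance distribution, analogous in spirit to what the paper reports.
preds = model.predict(["el apruebo va a ganar", "rechazo total"])
print(preds.mean())  # estimated share of "approve" tweets
```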
To delve deeper, read the full summary here.
📰 Article summaries:
I tested an AI app for visually impaired people – and this is what I found
What happened: Seeing AI is a Microsoft app designed to assist visually impaired individuals by narrating the world around them. The app is a collaboration between Microsoft and Haleon, a company that owns healthcare brands such as Sensodyne and Centrum. Using the phone’s camera, the app can read out snippets of text and identify brand names as it is scanned across shelves. Once it detects and locks onto a product, the product’s information is available almost instantly.
Why it matters: Information such as the product description, ingredient list, and usage instructions is presented in one place. Seeing AI can also read documents out loud and identify money. This is one example of “tech for independence” that is making daily tasks easier and improving accessibility for visually impaired people.
Between the lines: One of the taglines on the homepage of Seeing AI is “turning the visual world into an audible experience.” The added element of accessibility that assistive technology brings to the table is empowering the lives of those who are visually impaired. The author of this article states: “vision impaired people deserve autonomy in the choices we make, and it’s apps such as Seeing AI that make it a reality.”
Social media is dead
What happened: The platforms that originally defined "social media," such as Facebook, Instagram, and Twitter, are being replaced by other platforms and new models of online interaction. Streaming platforms such as YouTube, TikTok, and Twitch are swapping out poly-directional conversation for a unidirectional broadcast model that centers on a creator and their audience.
Why it matters: Rather than merely analyzing this shift, the article arrives at a radical conclusion: true social media doesn’t exist and never did. It emphasizes that these platforms are a “series of communication networks” rather than networks centered on bonds and groups that go beyond content or consumption. For years now, our feeds have included not only posts from friends and family but also content selected by algorithms and promoted by advertisers to increase engagement.
Between the lines: The connections we form on these platforms generate valuable data that other businesses pay for and use to offer more relevant goods, services, and experiences. An often-overlooked fact is that social media is privately owned: the data we generate and the algorithms that process it belong to tech companies or other concentrations of capital. “The solution isn’t to simply collectively choose another alternative but to mobilize to undermine the current system, which, again, is hostile at every level to alternatives.”
The lawsuit that could rewrite the rules of AI copyright
What happened: “Microsoft, its subsidiary GitHub, and its business partner OpenAI have been targeted in a proposed class action lawsuit alleging that the companies’ creation of AI-powered coding assistant GitHub Copilot relies on ‘software piracy on an unprecedented scale.’” Copilot is trained on public repositories of code from the web, many of which are published with licenses that require anyone reusing the code to credit its creators. It has recently been found to reproduce long sections of licensed code without providing the required credit.
Why it matters: One main argument in this article is that Microsoft uses OpenAI as a shield to avoid liability by filtering the research through this nonprofit to make it fair use. “This is a collective scheme between Microsoft, OpenAI, and GitHub that is not as beneficial or as altruistic as they might have us believe.” This lawsuit is quite significant because it could be the end of open-source licenses.
Between the lines: AI systems are not exempt from the law, and the owners of these systems must remain accountable if we are to work towards fair and ethical AI. This principle applies to many kinds of products, not only AI. The hope is that companies will make licensing deals, bring in content legitimately, and train AI in a manner that respects those licenses.
📖 From our Living Dictionary:
What is a proxy variable?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Our founder, Abhishek Gupta, was featured by La Presse on the impact that AI-generated art will have on the work of artists and on the field in general.
If a picture is worth a thousand words, with artificial intelligence it now takes only a short sentence typed into a dialogue box to generate an endless stream of stunning images. La Presse met with Quebec artists whose works were used, without their knowledge and without their consent, to train software that “learns” a little too quickly to imitate them.
Why Your Organization Needs Sustainable Software
Our founder, Abhishek Gupta, wrote an article for BCG on how our digital infrastructure probably generates more carbon emissions than you think—and AI may make it worse. It’s time for sustainable software.
💡 In case you missed it:
Consent as a Foundation for Responsible Autonomy
Consent is a central idea in how autonomous parties can achieve ethical interactions with each other. This paper posits that a thorough understanding of consent in AI is needed to achieve responsible autonomy.
To delve deeper, read the full summary here.
Take Action:
We’d love to hear from you, our readers, about which recent research papers caught your attention. We’re looking for papers published in journals or as part of conference proceedings.