AI Ethics Brief #66: AI in different national contexts, legal implications of GitHub Copilot, weaponization of app data, and more ...
Would you trust your life to an untested gunshot-detection AI algorithm?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~10-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
A Social and Environmental Certificate for AI Systems published in Branch Magazine by the Climate Action Tech group
📅 Event summaries:
Top 5 takeaways from our conversation with I2AI on AI in different national contexts
📰 Article summaries:
Analyzing the Legal Implications of GitHub Copilot
Facebook Tells Biden: ‘Facebook Is Not the Reason’ Vaccination Goal Was Missed
The Inevitable Weaponization of App Data Is Here
Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI
But first, our call-to-action this week:
This week’s newsletter is a shorter edition as our staff is all hands on deck preparing the State of AI Ethics Report Volume 5, which will be released next week! If you haven’t yet, we strongly encourage you to explore the previous editions of the report to catch up with what has happened in the field of AI ethics since we started publishing these reports.
✍️ What we’re thinking:
From the Founder’s Desk:
A Social and Environmental Certificate for AI Systems published in Branch Magazine by the Climate Action Tech group
As more countries realize the potential AI has to offer in terms of economic opportunities, large societal problems have also been lumped under the category of things that can be “solved” using AI. This is reflected in the national AI strategies of various countries, where grandiose claims are made that if only we throw enough computation, data, and the pixie dust of AI at the problem, we will be able to solve, among other things, the climate crisis that looms large over our heads.
AI systems are not without their flaws. There are many ethical issues to consider when thinking about deploying AI systems into society, particularly their environmental impacts.
Of course, readers of this publication are no strangers to such grandiose claims every time there is a new policy or technical instrument that is proposed and ends up falling short of meaningfully addressing the climate crisis. There is no silver bullet!
To delve deeper, read the full article here.
📰 Article summaries:
Analyzing the Legal Implications of GitHub Copilot
What happened: With the recent release of GitHub’s AI-powered pair-programming / code-completion tool, many have raised concerns about whether the system’s outputs infringe copyright. This article adds some nuance, arguing that regardless of the licenses on specific repositories, the terms of service for hosting code on GitHub mean there wouldn’t be a strict violation per se. Furthermore, given the length of the code the system outputs, smaller snippets may not even constitute copyrightable material; the interviewee describes them as Lego blocks that are common everywhere in the programming ecosystem. Finally, from a legal standpoint, there are arguments analogous to how Google Books used copyrighted book material as part of a service that let people search books: this was acceptable because it was a “transformative” use that created new value distinct from the original text of the books themselves.
Why it matters: As tools like this become more common and more powerful, especially as they become able to produce longer segments of working, coherent code, the legal implications of such code generation will become more relevant. Precedents like Google’s use of copyrighted book material serve only as a very loose analogue, and we will need more scholarship and legal precedent before we can better understand the implications of tools like Copilot.
Between the lines: A lot of the heated discussion on Twitter and elsewhere has focused on the surface-level argument that many of the longer code snippets are just reproductions of snippets from the training corpus. The paper published by OpenAI estimates the probability of that happening at approximately 0.1%. Moreover, many of the longer generated snippets are what is called boilerplate code (common, low-effort code rather than a direct copy-paste), which diminishes the risk of copyright infringement since such boilerplate is freely available on tutorial pages for packages and libraries; a rough illustration of what such boilerplate looks like follows this summary. We need a lot more nuance in the discussion before we can say anything definitive about the legal and other implications of tools like Copilot.
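To make the boilerplate point concrete, here is a hypothetical illustration (not actual Copilot output): the kind of argparse command-line scaffolding that appears nearly verbatim across countless Python tutorials and package READMEs, which is why a code-completion tool reproducing something like it carries little copyright weight.

```python
# Hypothetical example of "boilerplate" code: this argparse scaffolding appears
# nearly verbatim in countless tutorials and package READMEs, which is why a
# code-completion tool reproducing it is unlikely to raise copyright concerns.
import argparse


def main():
    parser = argparse.ArgumentParser(description="Process an input file.")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt", help="path to the output file")
    parser.add_argument("-v", "--verbose", action="store_true", help="enable verbose logging")
    args = parser.parse_args()

    if args.verbose:
        print(f"Reading from {args.input}, writing to {args.output}")


if __name__ == "__main__":
    main()
```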
Facebook Tells Biden: ‘Facebook Is Not the Reason’ Vaccination Goal Was Missed
What happened: The US had planned to have 70% of its population vaccinated by July 4, but it fell short of the target, and the Biden administration laid some of the blame for vaccine hesitancy on misinformation spreading on Facebook. The platform responded by saying that it has undertaken many measures to inform its users about vaccination, such as informational notices and the removal of anti-vaccination ads. An adversarial dynamic is emerging between the administration and the social media platform, as each is frustrated with the other’s understanding of the efforts being made.
Why it matters: While it is not uncommon for such a divergence to emerge, the lack of transparency about the impact of these efforts is troubling, especially given continued concerns that misinformation is still spreading rapidly on the platform through groups. This is an ongoing problem: people who already believe in conspiracy theories and other false content are recommended anti-vaccination groups because those groups align with their existing interests, which only exacerbates the problem. A lot of engagement happens in these groups, and until the platform dramatically reduces these recommendations in addition to its other efforts, we will continue to see the problem prevail.
Between the lines: We need to find better ways of engaging the technology and government stakeholders in our information ecosystem. The stronger the adversarial dynamic, the greater the risk of irreconcilable differences and non-resolution of the core issues. More structured experiments and transparency around the results of the efforts undertaken by the platform will help us build a better understanding of what actions are going to be effective in our fight against the infodemic, which will ultimately help us fight the pandemic.
The Inevitable Weaponization of App Data Is Here
What happened: A Substack publication called The Pillar bought location data from a data broker and, by combining it with data from the Grindr app, outed a priest as potentially gay, which led to his resignation. Even anonymized data, with no names attached, can be used to learn things about a specific person. Only a very small number of location points is needed to uniquely identify someone because of the patterns we all follow in the places we visit and where we spend most of our time: our homes and offices (a minimal sketch of this re-identification logic follows this summary).
Why it matters: While apps like Grindr defend themselves by saying that what the article describes, and what led to the priest’s ouster, is “technically infeasible”, the problem is that plenty of companies offer data consulting services for “identity resolution” as a way of unearthing information about specific individuals from the troves of data sold by data brokers.
Between the lines: What was previously the domain of highly resourced organizations is now something anyone with a little money and motivation can execute with ease. Data brokers collect large amounts of data from any app with in-app advertising and then package and sell that data to anyone willing to fork over a few dollars. This is still a largely unregulated industry, and calls from senators like Ron Wyden in the US to bring the force of the FTC to bear on this domain are essential if we want to rid ourselves of the scourge of sensitive data exposing intimate details of our lives.
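To illustrate why so few location points suffice, here is a minimal sketch with entirely made-up data and hypothetical names (not The Pillar’s actual method): once an “anonymized” advertising ID is repeatedly seen at one location overnight and at another during working hours, matching that home/work pair against any directory linking people to addresses is often enough to single out one individual.

```python
# Minimal sketch with fabricated data: re-identifying an "anonymous" device ID
# from just two recurring location clusters (home at night, office by day).
# This illustrates the general technique, not any company's actual pipeline.
from collections import Counter

# "Anonymized" ad-tech pings: (device_id, rounded lat/lon, hour of day)
pings = [
    ("ad-id-7f3", (45.501, -73.567), 2),   # night -> likely home
    ("ad-id-7f3", (45.501, -73.567), 23),
    ("ad-id-7f3", (45.508, -73.554), 10),  # day -> likely workplace
    ("ad-id-7f3", (45.508, -73.554), 15),
]

# Hypothetical directory linking real people to home and work addresses
directory = {
    "A. Smith": {"home": (45.501, -73.567), "work": (45.508, -73.554)},
    "B. Jones": {"home": (45.530, -73.600), "work": (45.495, -73.570)},
}


def infer_anchors(pings, device_id):
    """Guess home (most common night location) and work (most common day location)."""
    night = Counter(loc for d, loc, h in pings if d == device_id and (h >= 20 or h <= 6))
    day = Counter(loc for d, loc, h in pings if d == device_id and 9 <= h <= 17)
    return night.most_common(1)[0][0], day.most_common(1)[0][0]


home, work = infer_anchors(pings, "ad-id-7f3")
matches = [name for name, addr in directory.items()
           if addr["home"] == home and addr["work"] == work]
print(matches)  # ['A. Smith'] -- two anchor points were enough to single out one person
```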
Police Are Telling ShotSpotter to Alter Evidence From Gunshot-Detecting AI
What happened: An AI-powered tool used to detect whether shots were fired in a neighborhood was used as evidence in a case in Chicago, but the accused was acquitted when cross-examination and deeper investigation revealed that the system’s alerts had been modified to better align with the narrative presented by the prosecution. As the article goes on to show, this wasn’t the first time this had happened, and trust in the system has been declining over time. In particular, the system issues many false alerts, and, more worryingly, the “humans in the loop” who work for the company receive requests from law enforcement to dig deeper and have modified actual alerts to bolster the case being presented against the defendant.
Why it matters: When someone’s life is at stake, flimsy evidence, especially evidence that has not been subjected to rigorous testing and that is vouched for only by studies funded by the very company selling the tool, should be taken with more than a grain of salt. Just as we wouldn’t trust a DNA test that hadn’t been thoroughly validated as evidence in court, digital forensics tools should face similar scrutiny. As the article mentions, the city of Chicago is the company’s second-largest client, and with its contract coming up for renewal, a more unbiased, scientifically grounded analysis should be conducted before engaging the company’s services again.
Between the lines: Something that really jumped out in the article was how unevenly such systems are deployed across the city’s neighborhoods, with Latinx, Black, and Brown neighborhoods bearing the brunt of this form of policing while the systems are notably absent from more affluent, White neighborhoods. Moreover, residents of these heavily policed neighborhoods raise a very pertinent point: if law enforcement simply asked them whether a shot had been fired, as responsible neighbors they would share that information, rather than police having to rely on flimsy technology.
📅 Event summaries:
Top 5 takeaways from our conversation with I2AI on AI in different national contexts
Can the world unite under a global AI regulatory framework? Are different cultural interpretations of key terms a sticking point? These questions and more formed the basis of our top 5 takeaways from our meetup with I2AI. With such a variety of nations represented, the discussion showed that while we hold different views on various issues, this is not a bad thing at all.
To delve deeper, read the full summary here.
From our Living Dictionary:
‘Differential privacy’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
From elsewhere on the web:
Our founder Abhishek Gupta is now serving as the Chair of the Standards Working Group at the Green Software Foundation where he will be working on creating industry-wide consensus on methodology and tooling to assess the environmental impacts of software systems.
To learn more and get involved, feel free to message Abhishek here.
In case you missed it:
Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation
As social media platforms allow political actors not only to reach massive audiences but also to fine-tune target audiences by location or demographic characteristics, they are becoming increasingly popular venues for carrying out political agendas. Governments across the world—democratic and authoritarian alike—are expanding the capacity and sophistication of their “cyber troops” operations to capitalize on this medium of communication. In this report, Samantha Bradshaw and Philip N. Howard document the characteristics of 48 countries’ computational propaganda campaigns. While the size, funding, and coordination capacities of each country’s online operations vary, one thing remains clear, regardless of location: social media platforms face an increased risk of artificial amplification, content suppression, and media manipulation.
To delve deeper, read the full summary here.
Take Action:
This week’s newsletter is a shorter edition as our staff is all hands on deck preparing the State of AI Ethics Report Volume 5, which will be released next week! If you haven’t yet, we strongly encourage you to explore the previous editions of the report to catch up with what has happened in the field of AI ethics since we started publishing these reports.