AI Ethics Brief #62: Patterns of practice, the AI Act, doubts on ethical AI design adoption, and more ...
What are some of the most pressing questions in AI governance today?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~11-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on a Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Patterns of Practice will be fundamental to the success of AI governance
🔬 Research summaries:
The European Commission’s Artificial Intelligence Act (Stanford HAI Policy Brief)
Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade
📰 Article summaries:
Five Recommendations For Creating More Ethical AI
The Limits of Differential Privacy (and Its Misuse in Data Release and Machine Learning)
The World Needs Deepfake Experts to Stem This Chaos
After Repeatedly Promising Not to, Facebook Keeps Recommending Political Groups to Its Users
But first, our call-to-action this week:
The Challenges of AI Ethics in Different National Contexts
The Montreal AI Ethics Institute is partnering with I2AI to host a discussion about the different challenges that national contexts can present when trying to implement AI ethics: for example, different cultural views, differing levels of AI literacy, and differing levels of resources available to dedicate to such implementation.
📅 June 30th (Wednesday)
🕛 11 AM – 12:30 PM EST
✍️ What we’re thinking:
From the Founder’s Desk:
Patterns of Practice will be fundamental to the success of AI governance
(Featured in the AI Governance Report by the Shanghai Institute for Science of Science)
From a practitioner’s perspective, numerous challenges arise when abstract principles collide with business pressures and deadlines to deliver products and services on time and at high quality. It is at these points that the actual operationalization of AI governance mechanisms breaks down, and this is what needs to be fixed.
To delve deeper, read the full article in the report here.
🔬 Research summaries:
The European Commission’s Artificial Intelligence Act (Stanford HAI Policy Brief)
With the recently released Artificial Intelligence Act in the EU, a lively debate has erupted around what this means for different AI applications, the companies building these systems, and more broadly, the future of innovation and regulation. Schaake provides an excellent overview of the Act, along with an analysis of its implications and the sentiments around it, including prospects for global cooperation between regions like the US and the EU.
To delve deeper, read the full summary here.
Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade
How would you answer the following question: “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?” An overwhelming majority (68%) say no, and there are more than just ethical reasons why this is the case.
To delve deeper, read the full summary here.
📰 Article summaries:
Five Recommendations For Creating More Ethical AI
What happened: The article proposes some foundational steps that will be important in creating more ethical AI systems and making ethical AI a normalized practice. Leaders should avoid hiding behind technicalities and shirking moral obligations as if they were not part of the leadership role. Instead, the author asks companies to make long-term research commitments and to work with monied partners who understand this approach rather than pushing them to take shortcuts. Empowering employees, not only so that they can raise issues as they arise but also so that they can propose innovative solutions and see them implemented, is another crucial step. Finally, being transparent about one’s approach to AI ethics and holding oneself accountable for following that approach will also help build public trust in one’s work.
Why it matters: Having more actionable approaches to AI ethics, especially guidance for leadership, will be essential for the actual implementation of these ideas in practice. The shortlist provided here serves as a reminder to practitioners and researchers in AI ethics that the organizational challenges are just as significant as the technical and socio-technical challenges in building more ethical, safe, and inclusive AI systems.
Between the lines: I’ve found the approach undertaken at Microsoft, as outlined in this WEF Case Study, to effectively marry the organizational, technical, and socio-technical methods to achieve Responsible AI objectives. We need more examples where Responsible AI methods and recommendations such as the ones highlighted in this article are trialed, and we need to analyze those results to learn what works and what doesn’t.
The Limits of Differential Privacy (and Its Misuse in Data Release and Machine Learning)
What happened: Differential privacy is often pitched as a silver bullet that resolves the tension between wanting to share data (in the interest of building publicly beneficial technologies) and the desire for strong privacy protections. Yet, there is no free lunch. Differential privacy has shortcomings: the stronger the privacy protections, the less utility we get from the data, as tuned by the epsilon parameter in the differentially private analysis (see the brief sketch after this summary). As per the original paper, privacy protections are only meaningful when epsilon stays below 1 (lower values are better for privacy), yet many current uses of differential privacy rely on values as high as 30. In addition, the fundamental formulation of differential privacy protects individual records in a pool of records drawn from many individuals. We violate this basic notion when we apply differential privacy to healthcare data from devices in conjunction with federated learning, because all the records coming from a single device belong to the same person.
Why it matters: This sort of in-depth technical analysis that challenges dominant assumptions in the field is crucial if we want to achieve responsible AI in practice rather than just pay lip service to it by articulating a set of principles.
Between the lines: Even though there is a somewhat valid diatribe against technical practitioners proposing solutions to address ethical challenges in AI, we cannot work without their expertise and help. Without it, we risk creating requirements and legislation with a limited understanding of the limits of proposed solutions, leading to more harm in the long run.
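To make the epsilon trade-off above concrete, here is a minimal, hypothetical Python sketch of the classic Laplace mechanism (an illustration of the general technique, not code from the paper): noise is drawn with scale sensitivity/epsilon, so a small epsilon like 0.1 drowns a simple count in noise, while an epsilon of 30 barely perturbs it.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy version of `true_value` with epsilon-differential privacy.

    The noise scale is sensitivity / epsilon: a smaller epsilon (stronger
    privacy) means more noise and therefore less utility in the released
    statistic.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Counting query over a dataset: adding or removing one person changes the
# count by at most 1, so the sensitivity is 1.
true_count = 120
for eps in [0.1, 1.0, 30.0]:
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps)
    print(f"epsilon={eps:>5}: noisy count ~ {noisy:.1f}")
```

Running this a few times shows why epsilon values around 30 offer little practical protection: the released count is essentially the true count.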
The World Needs Deepfake Experts to Stem This Chaos
What happened: In Myanmar, a video made claims that amplified corruption charges against Aung San Suu Kyi, but because of its grainy quality and general distrust in government, people decried it as a deepfake. People used free online deepfake detectors to judge the video’s authenticity, and this opinion quickly spread on social media. The author of this article raises concerns about how malicious agents can easily manipulate untrained everyday citizens into believing whatever they want as the quality of deepfakes increases.
Why it matters: While the risk from deepfakes remains highest for unwanted, nonconsensual sexual images, their use for political manipulation is on the rise. Everyday citizens unaware of the limitations of deepfake detection run the risk of being misled by counter-forensic techniques that inject artifacts into videos to confound these free, online tools. Encouraging amateur forensics online can lead people down conspiracy rabbit holes, exacerbating the problem of misinformation online.
Between the lines: Sam rightly points out that more advanced capabilities are limited to elite circles of academia, government, and industry in Europe and North America. We need more funding and sharing of knowledge and tools with other parts of the world, especially those vulnerable to such attacks. Inequity in the distribution of these capabilities will deepen the digital divide across regions.
After Repeatedly Promising Not to, Facebook Keeps Recommending Political Groups to Its Users
What happened: In The Markup’s Citizen Browser project, which tracks the Facebook feeds of users paid by The Markup to send them data, researchers discovered that despite Facebook’s promises to stop recommending political groups to users, it has not yet done so. In several responses to government agencies and in public, Facebook has claimed that it has applied measures to eliminate such recommendations, but has let slip on occasion that it cannot do so entirely.
Why it matters: As documented in the article, about two-thirds of the people landing on politically motivated Facebook groups arrive there through the recommendations made by the platform to its users. If changes don’t materialize even after public commitments to remedy this, that is cause for serious concern.
Between the lines: As we’ve mentioned in this newsletter before, work from organizations like The Markup can help hold companies like Facebook accountable. But this requires funding and innovative research methods, especially when there aren’t broad-access APIs available to researchers to scrutinize activity on the platform.
From elsewhere on the web:
AI Governance In 2020: A year in review: observations from 52 global experts
Our founder, Abhishek Gupta, was amongst a group of experts from around the world invited to contribute to this report on AI governance. His piece is titled: Patterns of Practice will be fundamental to the success of AI governance.
To delve deeper, read the full article in the report here.
In case you missed it:
The Epistemological View: Data Ethics, Privacy & Trust on Digital Platform
Understanding the implications of employing data ethics in the design and practice of algorithms is one mechanism for tackling privacy issues. This paper addresses privacy, or a lack thereof, as a breach of consumer trust. The authors examine how data ethics can be applied and understood depending on whom the application is used for, and how this can build different variations of trust.
To delve deeper, read the full report here.
Take Action:
Events:
The Challenges of AI Ethics in Different National Contexts
The Montreal AI Ethics Institute is partnering with I2AI to host a discussion about the different challenges that national contexts can present when trying to implement AI ethics: for example, different cultural views, differing levels of AI literacy, and differing levels of resources available to dedicate to such implementation.
📅 June 30th (Wednesday)
🕛 11 AM – 12:30 PM EST