AI Ethics Brief #90: Analysis of European AI Act, reputation laundering, robot trust, AI making moral choices for us, and more ...
Facial Recognition – Can It Evolve From A “Source of Bias” to A “Tool Against Bias”?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~25-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Applying the TAII Framework on Tesla Bot
Europe: Analysis of the Proposal for an AI Regulation
Facial Recognition – Can It Evolve From A “Source of Bias” to A “Tool Against Bias”?
🔬 Research summaries:
Can we trust robots?
📰 Article summaries:
American Spy Agencies Are Struggling in the Age of Data
Exposed documents reveal how the powerful clean up their digital past using a reputation laundering firm
AI Is Already Making Moral Choices for Us. Now What?
📖 Living Dictionary:
AI Consciousness
🌐 From elsewhere on the web:
The Montreal AI Ethics Institute staff was a proud contributor to the draft of the Algorithmic Accountability Act of 2022 from Senators Wyden and Booker and Representative Clarke!
The state of AI ethics: The principles, the tools, the regulations
The State of AI Ethics in 2022: From principles to tools via regulation. Featuring Montreal AI Ethics Institute Founder / Principal Researcher Abhishek Gupta
Beyond single-dimensional metrics for digital sustainability
💡 ICYMI
Disaster City Digital Twin: A Vision for Integrating Artificial and Human Intelligence for Disaster Management
But first, our call-to-action this week:
State of AI Ethics Report - Volume 6 - February 2022
If you haven’t had a chance to catch the latest edition of the report yet, we encourage you to grab a copy. It is our most comprehensive report yet, running nearly 300 pages and covering:
(1) What we’re thinking
(2) Analysis of the AI Ecosystem
(3) Privacy
(4) Bias
(5) Social Media and Problematic Information
(6) AI Design and Governance
(7) Laws and Regulations
(8) Trends
(9) Outside the Boxes.
Our goal with these chapters is to provide an in-depth (though by no means exhaustive, given the richness of each of these subdomains) analysis of each of those areas, along with a breadth of coverage for those looking to save hundreds of hours parsing through the latest research and reporting in the domain.
Our ask of you this week: if you know of any media outlets that would want to do a feature on our report, please help us out by making an introduction!
✍️ What we’re thinking:
Applying the TAII Framework on Tesla Bot
Could Tesla implement trustworthy AI? This article discusses a possible implementation of the TAII Framework for the Tesla Bot, the humanoid robot prototype presented by Elon Musk and slated for 2022. The intent is to merge technological innovation with trustworthy AI guidelines in order to incorporate ethical principles into AI systems.
To delve deeper, read the full article here.
Europe: Analysis of the Proposal for an AI Regulation
Artificial intelligence (AI) is one of the technologies that will be able to optimize existing activities (improving purchase predictions, for instance) or create new opportunities (autonomous cars). However, the development of AI is not without risk; far from it. As part of its Digital Agenda, the European Commission proposed, on 21 April 2021, a Proposal for a Regulation to harmonize the rules surrounding the use of AI-based applications (89 explanatory recitals and 85 articles). The aim is to create an environment of trust for the development and use of AI systems in Europe.
To delve deeper, read the full article here.
Facial Recognition – Can It Evolve From A “Source of Bias” to A “Tool Against Bias”?
Meta’s recent announcement that it is shutting down Facebook’s face recognition system drew worldwide attention. It marks a new reality for many Facebook users, who had grown accustomed over the years to the automatic recognition of people in Facebook photos and videos.
For as long as humans have existed, the face has been our most common identifier. Facial recognition is now a dominant technology used in numerous applications, and its benefits have been remarkable. At the same time, it has been one of the most debated topics, particularly from an ethical perspective.
To delve deeper, read the full article here.
🔬 Research summaries:
Can we trust robots?
Machines form an increasingly significant part of our social and political existence, proving to be more than mere tools. To manage this encroachment, a deeper understanding of trust as a concept is the first step.
To delve deeper, read the full summary here.
A message from our sponsor this week:
All datasets are biased.
This reduces your AI model's accuracy and exposes you to legal risk.
Fairgen's platform solves both by augmenting your dataset, rebalancing its classes, and removing its discriminatory patterns.
Increase your revenues by improving model performance, avoid regulatory fines, and become a pro-fairness company.
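For readers curious what “rebalancing classes” means in practice, here is a minimal, generic sketch of one common approach: oversampling under-represented groups until all classes are the same size. This is only an illustration of the general technique (the `rebalance` helper and toy data are our own), not Fairgen's actual method:

```python
# A generic illustration of class rebalancing (not Fairgen's method):
# resample every class (with replacement) up to the size of the
# largest class so the label distribution becomes uniform.
import pandas as pd

def rebalance(df: pd.DataFrame, label_col: str, seed: int = 0) -> pd.DataFrame:
    target = df[label_col].value_counts().max()
    parts = [
        group.sample(n=target, replace=True, random_state=seed)
        for _, group in df.groupby(label_col)
    ]
    # Shuffle so the resampled rows aren't grouped by class
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Toy example: class "a" outnumbers class "b" four to one
df = pd.DataFrame({"x": range(10), "label": ["a"] * 8 + ["b"] * 2})
print(rebalance(df, "label")["label"].value_counts())  # a: 8, b: 8
```

Note that naive oversampling simply duplicates rows, which risks overfitting; the dataset augmentation the ad describes would typically generate new synthetic records instead, a harder problem than this sketch suggests.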
Want to be featured in next week's edition of the AI Ethics Brief? Reach out to masa@montrealethics.ai to learn about the available options!
📰 Article summaries:
American Spy Agencies Are Struggling in the Age of Data
What happened: The world of intelligence gathering and analysis has changed significantly since the intelligence agencies were conceived: we now live in an always-on, always-connected world where every consumer device collects some sort of data that can feed intelligence analysis. New threats have emerged, such as smaller non-state actors with the potential to disrupt democratic processes by flooding social media with disinformation. Satellite imagery, hitherto the domain of governments, now has many commercial players collecting and processing views from the sky that can both serve and hinder intelligence operations around the world. The landscape has evolved dramatically, and the work the intelligence agencies must do is reminiscent of the Red Queen problem: running ever faster just to stay in place.
Why it matters: Geographic borders provide no barriers to the flow of information, and the volume of our data exhaust is increasing exponentially, both of which make it harder for intelligence agencies to keep up. Private companies have also become huge repositories of data, which poses challenges for intelligence agencies that might want to examine this data but must be careful not to overstep the bounds set by law. At times, even the tools used in the domain come from outside, such as contracts with firms like Palantir for advanced data processing and insight generation capabilities.
Between the lines: When it comes to matters of national security, we would like our intelligence agencies to be up with the times. But given the entanglement of how data is generated (by everyone walking around with a smartphone), where it is stored (often on the private servers of private corporations), and its limitless movement between jurisdictions, the challenges will only compound as the intelligence community tries to walk the fine line between overreach and reasonable processing and analysis.
Exposed documents reveal how the powerful clean up their digital past using a reputation laundering firm
What happened: Diving into the reputation management industry, we learn that powerful actors who want to gloss over past misdeeds are using firms like Eliminalia to get websites to take down content, leaning on provisions of the DMCA (US legislation designed to prevent copyright infringement) that make takedowns effective, easy, and low-cost. The company has clients ranging from crypto-scammers to government ministers, all of whom presumably want to paper over their corrupt deeds and escape (to whatever extent possible) the negative consequences of their actions. Such companies have seen a gold rush (with each URL takedown request billed to customers at USD 2,800) since the advent of the “right to be forgotten” in the GDPR, preying on the limited resources and high legal risk that server hosts and media companies face under the DMCA. The fallout from the Panama Papers only bolstered the volume of requests to these reputation management companies.
Why it matters: This is yet another tool in the arsenal of the corrupt and those who run afoul of the law. Particularly problematic is the case of government officials who use these services to conceal misdeeds and keep the electorate from learning truths that might bar them from public office, an office they could then use to perpetrate even more illegal activity (perhaps the very kind they are trying to conceal).
Between the lines: While such activity only skews the balance of power further towards those who already have resources, the structure of the law today doesn’t do much to alleviate the situation either. As documented in the article, companies on the receiving end of these requests face significant legal risk if a request is valid and they choose not to act on it, so they are more than likely to honor requests and take down content. The dirty tactics of reputation management companies, which buy up other web assets, copy-paste segments of the content they want taken down, and back-date them to fortify their claims of DMCA violations, only add to the problem. And digital forensics isn’t yet effective at sussing out web assets created solely to support dubious claims rather than to serve as legitimate websites.
AI Is Already Making Moral Choices for Us. Now What?
What happened: “Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations,” reads the website description of the AI system covered in this article. The page carries a few stark warnings about the system’s limitations; upon entering a statement, the system “makes a moral judgment” and replies. In some cases the answers are satisfactory; in others they aren’t. But compared to a large language model like GPT-3, Delphi’s judgments were rated almost 40% higher by human scorers. When the authors of the paper tweeted out their work, a very spirited debate ensued online: outsourcing moral judgment to a machine is not something people take lightly.
Why it matters: The system is built on responses to common situations from crowdsourced workers in the US, a dataset called the Commonsense Norm Bank. A clear limitation is geographic, but there is also a temporal one: the answers reflect the state of morality in the US at the time of collection and may become stale as the world changes. The system is also biased in drawing mostly on Western philosophies. More data might improve the system, but the article notes that it could even make things worse: the people included in the dataset creation may have skewed left-leaning, and a more realistic distribution might be quite different.
Between the lines: Such a system is by no means a replacement for making judgments about actual right and wrong (to the extent that even the most well-studied of us can). Moreover, given the diversity of ethical and moral thought, we don’t yet have a good grasp on how a single system is supposed to cater to those differences, especially when frameworks can contradict one another (say, deontology, virtue ethics, and utilitarianism). Above all, such a system has inherent limitations in the length of text it can parse and in capturing all the nuance and context needed to make a well-informed decision. When humans make such judgments, we draw on a plethora of data points and background knowledge in arriving at our conclusions, something that certainly can’t be captured in a text box on a website.
📖 From our Living Dictionary:
“AI Consciousness”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Wyden, Booker and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency And Accountability For Automated Decision Systems
Legislation Requires Assessment of Critical Algorithms and New Public Disclosures; Bill Endorsed by AI Experts and Advocates; Bill Will Set the Stage For Future Oversight by Agencies and Lawmakers
Washington, D.C. – U.S. Senator Ron Wyden, D-Ore., with Senator Cory Booker, D-N.J., and Representative Yvette Clarke, D-N.Y., today introduced the Algorithmic Accountability Act of 2022, a landmark bill to bring new transparency and oversight of software, algorithms and other automated systems that are used to make critical decisions about nearly every aspect of Americans’ lives.
The state of AI ethics: The principles, the tools, the regulations
Our founder, Abhishek Gupta, was featured in this exclusive article by VentureBeat talking about his work in AI ethics and the recently published State of AI Ethics Report - Volume 6 - February 2022. This is a great way to quickly learn about the highlights of the report!
The State of AI Ethics in 2022: From principles to tools via regulation
In this exhaustive interview, our founder, Abhishek Gupta, dives into the domain of AI ethics, discussing the state of the ecosystem and where it is heading in 2022.
What do we talk about when we talk about AI ethics? Just like AI itself, definitions of AI ethics seem to abound. From algorithmic and dataset bias, to the use of AI in asymmetrical or unlawful ways, to privacy and environmental impact: all of that, and more, could potentially fall under the AI ethics umbrella. We try to navigate this domain and lay out some concrete definitions and actions that could help move it forward.
Beyond single-dimensional metrics for digital sustainability
Multi-dimensional, rich metadata-supplemented metrics are our best shot at implementing actions that actually make software greener.
💡 In case you missed it:
Disaster City Digital Twin: A Vision for Integrating Artificial and Human Intelligence for Disaster Management
With the potential for a climate crisis looming larger every day, the search for solutions is essential. One such solution could be the construction of a disaster city digital twin, allowing governments to simulate the potential effects of different disasters on physical cities by running simulations on their digital replicas. Doing so can make the effects of a climate crisis, whether flooding or unusual weather, more tangible. AI has a key part to play in making this a reality.
To delve deeper, read the full summary here.
Take Action:
State of AI Ethics Report - Volume 6 - February 2022
If you haven’t had a chance to catch the latest edition of the report yet, we encourage you to grab a copy. It is our most comprehensive report yet, running nearly 300 pages and covering:
(1) What we’re thinking
(2) Analysis of the AI Ecosystem
(3) Privacy
(4) Bias
(5) Social Media and Problematic Information
(6) AI Design and Governance
(7) Laws and Regulations
(8) Trends
(9) Outside the Boxes.
Our goal with these chapters is to provide an in-depth (though by no means exhaustive, given the richness of each of these subdomains) analysis of each of those areas, along with a breadth of coverage for those looking to save hundreds of hours parsing through the latest research and reporting in the domain.
Our ask of you this week: if you know of any media outlets that would want to do a feature on our report, please help us out by making an introduction!