AI Ethics Brief #78: Animism, avoiding oppressive ML futures, trust in medical AI, and more ...
Do Americans need a bill of rights in an AI-powered world?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is an ~18-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The machine’s rage against the planet
🔬 Research summaries:
Governance of Artificial Intelligence
Animism, Rinri, Modernization; the Base of Japanese Robotics
Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI
Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants
📰 Article summaries:
AI fake-face generators can be rewound to reveal the real faces they trained on
Three predictions for the future of responsible technology
Americans Need a Bill of Rights for an AI-Powered World
Applying arms-control frameworks to autonomous weapons
📖 Living Dictionary:
Algorithmic Pricing
🌐 From elsewhere on the web:
The Invisible Elephant in the Room: The Trustworthy ML Un-Symposium
💡 ICYMI
Language (Technology) is Power: A Critical Survey of “Bias” in NLP
But first, our call-to-action this week:
The Montreal AI Ethics Institute is committed to democratizing AI Ethics literacy. But we can’t do it alone.
Every dollar you donate helps us pay for our staff and tech stack, which make everything we do possible.
With your support, we’ll be able to:
Run more events and create more content
Use software that respects our readers’ data privacy
Build the most engaged AI Ethics community in the world
Please make a donation today.
✍️ What we’re thinking:
The machine’s rage against the planet
Our founder, Abhishek Gupta, shared his thoughts in the Expert Speak series of the Observer Research Foundation (ORF) on how AI systems impact the environment and how we can make their design, development, and deployment more sustainable.
“Marc Andreessen famously said that software is ‘eating’ the world, and now we have AI eating up software. However, in this original formulation, the ‘world’ represented the economic slice of the world: How businesses operated and the profits they made were the core concern. With the push towards the triple bottom line, where all three Ps—profit, people and the planet—are taken into consideration, we must re-examine how AI is eating our planet!”
To delve deeper, read the full article here.
🔬 Research summaries:
Governance of Artificial Intelligence
The various applications of AI offer opportunities for increasing economic efficiency and cutting costs, but they also present new forms of risk. To maximize the benefits derived from AI while minimizing its threats, governments worldwide need to understand the scope and depth of the hazards it poses and develop regulatory processes to address these challenges. This paper describes why the governance of AI should receive more attention, considering the myriad challenges it presents.
To delve deeper, read the full summary here.
Animism, Rinri, Modernization; the Base of Japanese Robotics
Technology is not going anywhere anytime soon, so why not respect it for what it is? The approach adopted by Japanese culture is to recognize that natural and technological phenomena alike have a soul that intertwines with ours. The result is a beautiful vision of human-technology relations indeed.
To delve deeper, read the full summary here.
Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI
The use of AI in medicine promises to advance the field and help practitioners make faster, more accurate diagnoses and reach more effective decisions about patients’ care. Unfortunately, this technology also comes with a specific set of ethical and epistemological challenges. This paper aims to shed light on these issues and provide solutions to the problems connected with using AI in clinical practice. We ultimately concur with the authors of the paper that medical AI cannot and should not replace physicians. We also add that a trustworthy AI will probably lead to more trust among humans and increase our reliance on experts. Thus, we propose starting with the question: under what conditions is an AI system conducive to more human-to-human trust?
To delve deeper, read the full summary here.
Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants
Broad adoption of machine learning systems could usher in an era of ubiquitous data collection and behaviour control. However, this is only one potential path for the technology, argue Gerald C. Kane et al. Drawing on emancipatory pedagogy, this paper presents design principles for a new type of machine learning system that acts on behalf of individuals within an oppressive environment.
To delve deeper, read the full summary here.
📰 Article summaries:
AI fake-face generators can be rewound to reveal the real faces they trained on
What happened: The article covers a recent paper that used membership inference attacks to determine which face images might have been used to train a face-generating AI system. Many websites, like This Person Does Not Exist, offer AI-generated faces produced by GANs, but some of those faces resemble real people too closely. The paper demonstrated this by generating faces from the GAN and then using a separate facial recognition system to check whether any of them matched real faces.
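For readers curious about the mechanics, here is a minimal sketch of the matching step in Python. Everything in it is an illustrative stand-in rather than the paper’s actual code: generate_faces stands in for sampling from the target GAN, embed (with its fixed projection W) stands in for an off-the-shelf face-recognition embedder, and the similarity threshold is an assumed value that a real attack would calibrate empirically.

```python
# Hedged sketch of a membership-inference-style check against a GAN:
# sample many generated faces, embed them alongside a probe photo,
# and flag suspiciously close matches. All models here are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
IMG_SIDE = 64   # toy image size (assumption)
EMB_DIM = 128   # typical face-embedding dimensionality (assumption)

# Fixed random projection standing in for a real face-recognition
# embedding network; a real attack would use a pretrained model.
W = rng.normal(size=(IMG_SIDE * IMG_SIDE, EMB_DIM))

def generate_faces(n: int) -> np.ndarray:
    """Stand-in for sampling n face images from the target GAN."""
    return rng.normal(size=(n, IMG_SIDE, IMG_SIDE))

def embed(faces: np.ndarray) -> np.ndarray:
    """Map images to unit-norm embeddings for cosine-similarity search."""
    flat = faces.reshape(len(faces), -1)
    emb = flat @ W
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

# 1. Sample many candidate faces from the generator.
candidates = generate_faces(10_000)
cand_emb = embed(candidates)

# 2. Embed the probe photo (the real person checking whether a
#    lookalike was memorized); here it is just another random image.
probe_emb = embed(generate_faces(1))[0]

# 3. Nearest-neighbour search in embedding space: a cosine similarity
#    above a calibrated threshold flags a possible training-set match.
sims = cand_emb @ probe_emb
best = int(np.argmax(sims))
THRESHOLD = 0.9  # illustrative; would be calibrated empirically
print(f"best match: candidate {best}, cosine similarity {sims[best]:.3f}")
if sims[best] > THRESHOLD:
    print("possible training-set membership")
```

The key idea is simply nearest-neighbour search in embedding space: if a generated face sits unusually close to a real probe photo, that is evidence the GAN may have memorized a training image.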
Why it matters: Such a technique could allow people to check whether their image was used to train an AI system. But it also exposes latent vulnerabilities in these systems, which can leak information about their training data, especially when pre-trained models are re-used downstream by other developers. Other techniques, like model inversion and model stealing, fall into the broad category of machine learning security; they demonstrate similar weaknesses in today’s AI systems and point us toward building more robust ones.
Between the lines: Machine learning security remains highly under-explored, with most of the field’s focus on issues like fairness and privacy, which, while important, don’t cover the full gamut of ethical issues with AI systems. We need to ensure that these systems are robust as well; machine learning attacks and defenses are today at the early stage that cybersecurity for more traditional software systems was at a few years ago.
Three predictions for the future of responsible technology
What happened: Providing a quick overview of the work on responsible technology taking place at the World Economic Forum (WEF), the article lays out three trends the authors believe will come to pass in the field. First, investors will take up responsible development as a pillar in assessing the quality of investments, just as ESG became such a criterion. Second, we are only at the beginning of targeted regulations, which will be adopted in more countries around the world. Finally, higher education will make tech ethics a mandatory part of various curricula.
Why it matters: Broad adoption of responsible practices in technology design, development, and deployment is what is missing today, and the three pillars identified by the WEF provide a good high-level roadmap for attacking this problem on multiple fronts. What we also need to think about is which incentive structures will actually enable these trends to come to pass, so that they become self-sustaining rather than requiring a constant push, only to evaporate without support.
Between the lines: On teaching AI ethics in higher education alone, many courses are being developed and offered at universities, each pursuing this in its own manner. We have done a deep dive into this area through our series “Office Hours,” which highlights how some educators are going about it. As for making responsible development a criterion in investment assessments, I think we are still a long way off: there aren’t yet enough market forces calling for such evaluations, and the make-up of most investment shops is not yet diverse enough to acknowledge these problems in the first place. But, just as with ESG criteria, I firmly believe that monetary incentives channelled through investments will push the industry toward responsible technology practices faster than it would move without them.
Americans Need a Bill of Rights for an AI-Powered World
What happened: Coming from the Office of Science and Technology Policy in the US, this article makes a strong case for a bill of rights that accounts for how technology, especially AI, affects people’s ability to enjoy their freedoms and exercise their rights. The authors argue that codifying the requirement that technology respect fundamental democratic values will help secure the rights and freedoms people are entitled to, rather than leaving it to market forces and the goodwill of private interests. There is precedent: the Bill of Rights has been reinterpreted, reaffirmed, and expanded to keep up with the times as society changed, powered by technology and otherwise.
Why it matters: What is different about the current wave of technology, in particular AI, is the scale and pace of its impact. It used to take a while for technology to move from labs to products, but with research labs integrated into industry firms, that timeline has shrunk to a few months. The internet and smartphones, with ample compute and storage, have become ready vectors for disseminating these advances far more rapidly than ever before, giving us little chance to grasp their impact before they embed themselves into all facets of our lives.
Between the lines: It is great to see leaders of government institutions at the highest levels taking a deep interest in how technology is shaping our society and seeking to make fundamental changes to the operating system of our democracies. Doing so lets us take a more active role in addressing the impacts this technology has on us, rather than succumbing to fatalism and treating the march of AI into all parts of our lives as an inevitability.
Applying arms-control frameworks to autonomous weapons
What happened: With Israel’s recent use of an autonomous weapon to assassinate Iran’s top nuclear scientist, and Turkey’s use of an automated drone last year to target members of the Libyan National Army, discussions of the capabilities and limits of autonomous weapons are gaining steam. The article makes the case that leaning on existing arms-control treaties as models can help regulate this field. Namely, it points out how the Ottawa Convention on anti-personnel landmines provided a good starting point for bringing actors in the space together for more fruitful discussions later. Building consensus and momentum through targeted treaties, which can separate the weapons that militaries are reluctant to give up from those whose proliferation we absolutely want to stop, would be a meaningful outcome of such an approach.
Why it matters: Autonomy in a weapons system can be as simple as a sensor that detects changes in the environment, some computing capability that acts on the sensor’s signals, and a mechanism that dispenses the weapon’s payload based on that computation. This spans the gamut from simple pressure-triggered landmines to the more sophisticated swarm drones being created by national militaries in their pursuit of dominance on the battlefield. The big concern raised by anyone participating in the domain comes down to how much autonomy is acceptable and what meaningful human control looks like in these scenarios; we don’t yet have concrete answers to these questions.
Between the lines: The problem with such approaches to regulation always comes down to how strictly they can be enforced, and whether all countries that sign on will uphold the same high standards of robustness and verification required for safe operation. There are calls to ban such weapons outright, but resistance comes from countries arguing that while they might halt such work, others won’t, and that unilateral restraint might cause more net harm. This dynamic ultimately fuels an arms race in which each side develops the technology defensively but, in the process, advances the state of the art. Hopefully, the efforts still being pursued are aimed at making these systems safer rather than more lethal and less ethical.
From our Living Dictionary:
‘Algorithmic Pricing’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
From elsewhere on the web:
The Invisible Elephant in the Room: The Trustworthy ML Un-Symposium
The Trustworthy ML Initiative celebrates its one-year anniversary with this special event, held jointly with the Montreal AI Ethics Institute.
To achieve the promise of AI as a tool for societal impact, black-box models must not only be “accurate” but also satisfy trustworthiness properties that facilitate open collaboration and ensure ethical and safe outcomes. The purpose of this un-symposium is to discuss the interdisciplinary topics of robustness, explainability, fairness, privacy, and ethics of AI tools. In particular, we want to highlight the significant gap in deploying these models in high-stakes commercial applications of AI, where millions of human lives are at risk. While these challenges look critical, we believe they can be overcome by developing trustworthy models through the collective effort of researchers, stakeholders, and domain experts, all of whom we welcome to this un-symposium.
In case you missed it:
Language (Technology) is Power: A Critical Survey of “Bias” in NLP
With the recent boom in scholarship on Fairness and Bias in Machine Learning, several competing notions of bias and different approaches to mitigating their impact have emerged. This incisive meta-review from Blodgett et al. dissects 146 papers on Bias in Natural Language Processing (NLP) and identifies critical discrepancies in motivation, normative reasoning, and suggested approaches. Key findings from this study include mismatched motivations and interventions, a lack of engagement with relevant literature outside of NLP, and a tendency to overlook the underlying power dynamics that inform language.
To delve deeper, read the full summary here.
Take Action:
The Montreal AI Ethics Institute is committed to democratizing AI Ethics literacy. But we can’t do it alone.
Every dollar you donate helps us pay for our staff and tech stack, which make everything we do possible.
With your support, we’ll be able to:
Run more events and create more content
Use software that respects our readers’ data privacy
Build the most engaged AI Ethics community in the world
Please make a donation today.