The AI Ethics Brief #178: How AI Regulation Can Both Harm and Foster Social Justice
We further explore our State of AI Ethics Report with Part II: Social Justice & Equity, while highlighting the latest addition to our AI Policy Corner on Ukraine's AI regulation whitepaper
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
Credit: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
In this Edition (TL;DR)
Social Justice & Equity: This week, we dig deeper into Part II of the SAIER, Volume 7, featuring insights from experts across Canada, the US and the UK. We explore how nefarious actors exploit AI to spread disinformation, how pockets of resistance are forming to fight for algorithmic justice, and how AI infrastructure is being developed to the detriment of the natural environment.
Ukraine’s AI regulation proposal: This week, our AI Policy Corner with GRAIL at Purdue University analyzes the two-stage approach to AI regulation and development outlined in Ukraine’s 2024 whitepaper, explaining how these two stages fit into the broader geopolitical context.
What connects these stories:
From the internet and GPS to Apple’s AI assistant Siri, WD-40, drones and more, many commonplace technologies and products today have their origins in military settings. It is no surprise, then, that the ongoing war between Russia and Ukraine has informed the Ukrainian regulatory stance on AI: restricting the use of AI in the defence sector would only put Ukraine in a “less favourable position compared to the aggressor [Russia],” which plans to continue using AI in its military applications.
The above example, and Ukraine’s AI whitepaper in general, emphasize the global context of top-down AI governance initiatives, which are pulled in different directions by different parties’ interests. In recent weeks, we have seen this unfold in global policy, as the EU has been criticized for undermining the AI Act and leaning towards corporate interests.
Nevertheless, a swathe of grassroots resistance initiatives is emerging in response to these top-down governance mechanisms. These initiatives vary in nature, reflecting the wide influence of AI systems across sectors such as surveillance, health and education. They ask why these AI applications have become so widespread and so readily lauded as optimization tools that streamline otherwise slow and inefficient systems and institutions. Part II of the SAIER examines these applications through just such a critical lens, highlighting the diversity of responses to AI-enabled injustices.
🎉 SAIER is back - Part II: Social Justice & Equity
On November 4th, 2025, we launched SAIER Volume 7, a snapshot of the global AI ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. Of course, not all voices could be captured, but SAIER Volume 7 presents a grounded outlook on the current state of play in the world of AI ethics.
Part II tackles the enormous topic of social justice, reminding us that AI technologies are neither created nor used in a vacuum: they are laden with human values that lead to tangible social consequences. AI technologies thus embody a range of threats to, and opportunities for, social institutions. Just this month, we have seen the culmination of an electoral process steeped in “AI slop”: the election of New York City’s mayor, Zohran Mamdani. Indeed, two other contenders for the position had used generative AI tools in different ways, from writing up housing plans, to calling potential voters in different languages in the contender’s own voice, to producing racist videos targeting Mamdani and people of colour.
In light of this event, Rachel Adams from the University of Cambridge explains at the start of Chapter 4 that democracy is under attack. Specifically, Adams describes attacks on election processes enabled by “AI slop” across the Global South, and a response from Big Tech that culminated in the 2024 AI Elections Accord, which critics called “voluntary and uneven.” In response to Big Tech’s inaction, fact-checking networks have emerged, with Africa Check leading the way. Other initiatives harness technology to the same end: Linda Solomon Wood, from Canada’s National Observer, introduces us to “Civic Searchlight,” a tool that analyzes transcripts of municipal meetings across Canada and uncovers patterns. The tool has proven valuable both for identifying misinformation campaigns and for surfacing how different municipalities tackle similar issues, thereby enabling potential new partnerships and collaborations. Mozilla Foundation’s Seher Shafiq subsequently offers a word of caution against automating electoral processes, explaining how generative AI tools disengage marginalised voters.
Chapter 5 digs deeper into three areas of AI-enabled injustice: state policy, surveillance and education. In each of these areas, pockets of resistance are forming. Blair Attard-Frost, from the University of Alberta & Alberta Machine Intelligence Institute, challenges state-led, top-down “governance fixes.” Indeed, unfolding stories across governments, such as the UK’s “AI growth zones” and the EU’s simplification of digital rules, highlight how traditional governance processes protect corporate interests. Attard-Frost introduces decentralized approaches that counteract this direction of travel across policing, the arts and labour. Jess Reia, of the Digital Technology for Democracy Lab (University of Virginia), follows by arguing against traditional top-down governance mechanisms, which have come to target transgender individuals and activism specifically via facial recognition systems, content moderation policies and surveillance technologies. Adnan Akbar, from tekniti.ai, then offers an example from the education sector with an essay on students’ experiences of AI solutions at university. As it turns out, large language models may play only a limited role in supporting students, who ultimately benefit most from conversations with tutors. Akbar provides a valuable backdrop for last week’s story about a UK government-funded apprenticeship designed to train students as cybersecurity experts or software engineers, which left participants frustrated by constant exposure to AI-generated materials.
Further case studies are presented by the contributors to Chapter 6 on AI surveillance, privacy, and human rights. Maria Lungu, from the University of Virginia, shares grassroots initiatives led by citizens who have opposed, or sought to inform, AI applications in US public services. Roxana Akhmetova, from the University of Oxford, offers a glimpse of the global scale of the problem, listing digital surveillance systems implemented in 2025 across Australia, China, Costa Rica, the EU, Kenya, Mexico, Nigeria, the UK, the US, Vietnam and Zimbabwe. Jake Wildman-Sisk, a lawyer, then provides a case study from Canada, where a ruling against facial recognition tool developer Clearview has been interpreted differently by different bodies, creating an uneven national policy landscape.
Underpinning all of these discussions around AI technologies is their very material reality. Chapter 7 accordingly tackles the crucial issue of the environmental impacts of AI technologies and infrastructures. Burkhard Mausberg and Shay Kennedy, from Small Change Fund, provide data on the electricity and water usage and CO2 emissions of data centres, which give AI models the infrastructure they need to run. Novel hyperscale data centres, they suggest, may even reshape rural landscapes. Among other figures, they highlight the International Energy Agency’s projection that data centres will account for 1 to 1.4% of global CO2 emissions within the next decade. Trisha Ray, from the Atlantic Council, widens the lens to the broader ecosystem affected by AI, highlighting its upstream supply chain, which relies on the mining of critical minerals and the transportation infrastructure that moves them. As a result, instances of local resistance are on the rise. Toxic waste is added to the mix of environmental impacts in Priscila Chaves Martínez’s analysis: health studies have shown a link between e-waste sites and greater infant mortality in Sub-Saharan Africa, whilst journalists have denounced the dynamics of power and corruption that let hyperscalers Microsoft, Amazon and Google skip environmental impact reports during droughts in Chile, Mexico and Spain.
In sum, Part II of the SAIER challenges the disconnect between how AI is perceived and what AI is. The power dynamics and narratives that fuel AI investment and excitement are coming under scrutiny, and decentralized, grassroots movements are forming to challenge instances of AI-mediated injustice. Whether they focus on trans rights, the spread of misinformation, impacts on workers, the transformation of rural landscapes or the destruction of natural ecosystems, these movements are central to advancing AI technologies that protect human rights and human flourishing rather than contributing to their erosion.
💭 Insights & Perspectives:
AI Policy Corner: Reviewing Ukraine’s Whitepaper on Artificial Intelligence Regulation
This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyzes the two-stage approach to AI regulation and development outlined in Ukraine’s White Paper on Artificial Intelligence Regulation (Version for Consultation), explaining how these two stages fit into the nation’s broader geopolitical context.
The whitepaper is intended to inform policies that support Ukraine’s goals of business competitiveness, human rights protection and European integration while shielding its defence AI sector from regulation. It proposes a bottom-up approach encompassing two stages: a preparatory stage that allows for industry and state planning, followed by a second stage that introduces binding statutes aiming to gradually align Ukraine with the EU’s AI Act.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai.
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!
The focus on grassroots movements forming in response to top-down governance is particularly timely. Seeing initiatives like Africa Check and Civic Searchlight emerge suggests that the narrative around AI is shifting from pure techno-optimism to something more grounded in community needs. The environmental impact data you included about data centres is sobering; it makes the material costs of these systems impossible to ignore.
Thanks for an informative read. I recently co-authored a research paper that I think would be of interest to this newsletter and its readers: https://www.researchgate.net/publication/396328369_The_Quiet_Displacement_of_Social_Values_in_AI_Policy#fullTextFileContent