The AI Ethics Brief #172: The State of AI Ethics in 2025: What's Actually Working
SAIER Volume 7 Returns in November 2025 with Practical Solutions from Communities Leading the Way
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
In this Edition (TL;DR)
One Question We're Pondering: What does it actually take to move responsible AI from theory to practice, and who is doing that work when no one is watching? We explore the quiet, persistent efforts happening in classrooms, healthcare systems, cities, courtrooms, union halls, and communities that rarely make headlines but are shaping AI's real-world impact.
SAIER Volume 7 Returns: We officially announce the State of AI Ethics Report (SAIER) Volume 7: "AI at the Crossroads: A Practitioner's Guide to Community-Centred Solutions," scheduled for November 4, 2025. After a pause since February 2022, this report focuses on practical, replicable examples of responsible AI implementation grounded in real-world experience rather than aspirational principles.
The Alan Turing Institute Crisis: We examine the UK's premier AI research institute facing potential funding withdrawal unless it pivots to a national security focus, representing a significant loss for independent non-governmental AI research and raising questions about accountability in publicly funded research institutions.
U.S. AI Education Push: Our AI Policy Corner with GRAIL at Purdue University analyzes the April 23rd Executive Order on Advancing AI Education for American Youth, which emphasizes adoption and implementation over risk mitigation, sitting within the broader Trump administration's AI policy framework.
Canadian AI Governance Insights: From the Victoria Forum 2025 and new public opinion data showing Canadians deeply divided on AI (34% beneficial vs. 36% harmful), we explore Canada's unique position in developing democratic AI governance that moves beyond consultation toward co-creation.
What connects these stories: The recognition that responsible AI implementation and AI ethics occur not in boardrooms or policy papers, but through the daily work of people building civic competence and practical solutions at the intersection of technology and community needs.
🔎 One Question We’re Pondering:
What does it actually take to move responsible AI from theory to practice, and who is doing that work when no one is watching?
As conversations around AI governance grow louder (see Brief #170: How the US and China Are Reshaping AI Geopolitics), what we're hearing behind the scenes is quieter, more persistent, and perhaps more urgent. Colleagues across sectors are asking the same thing in different languages: What's working, what isn’t, and what can we learn from both?
Not in theory or in press releases, but in classrooms trying to preserve academic integrity, in healthcare systems navigating algorithmic risk, in cities designing procurement standards, in community-led efforts resisting surveillance they never consented to, and more.
Over the past year, this question has resurfaced repeatedly for us at MAIEI: at Zurich’s Point Zero Forum, the recently held Victoria Forum in British Columbia (more on this below in Insights & Perspectives), in guest lectures at universities, and through hundreds of emails and conversations. It has also been nearly a year since the passing of our dear friend and collaborator, Abhishek Gupta, founder and principal researcher at MAIEI. In that time, this question has become increasingly persistent, and his reminder to keep moving "onwards and upwards" has guided our search for answers as we rebuild and reimagine MAIEI's role in the community.
And yet, the answers are rarely loud. They appear in quiet experiments, shared reflections, and the daily work of people operating at the edges of institutions and at the centre of communities. This persistent need for connection and practical guidance is why we're bringing the State of AI Ethics Report (SAIER) back, and are committed to doing so on an annual basis. Returning to our roots of building civic competence and shaping public understanding on the societal impacts of AI, the SAIER represents both a tribute to Abhishek's legacy and a cornerstone of MAIEI's path forward.
The State of AI Ethics Report Returns

After a pause since February 2022, we’re officially announcing SAIER Volume 7: AI at the Crossroads: A Practitioner's Guide to Community-Centred Solutions, scheduled for release on November 4, 2025.
Following hundreds of conversations and a close review of over 800 pieces published on the MAIEI website since 2018, one insight stood out: the field needs connection and interpretation. There's a growing recognition that isolated efforts across sectors contain valuable knowledge that rarely gets shared or built upon, including lessons from quiet failures that never made headlines.
The world also looks fundamentally different than when Volume 6 was published in February 2022. The ChatGPT paradigm now dominates (see Brief #171: The Contradictions Defining AI's Future for our commentary on GPT-5 and GPT-OSS), reshaping everything from student homework to healthcare diagnostics, from corporate decision-making to creative industries. In an era where foundation models are deployed before safety frameworks are in place, where open-source agents outperform flagship releases, and where community groups write their own rules amid policy gaps, the demand for practical, replicable examples for communities to adapt and adopt has become urgent.
Volume 7 is built on a simple premise: responsible AI has always been as much about capacity as it is about commitment. The gap between theoretical principles and practical implementation rarely reflects a lack of intent, but rather missing infrastructure, institutional inertia, unclear mandates, and poorly designed incentives. The hard work often falls to those without formal authority, including local organizers, frontline workers, junior engineers, and researchers who work across silos.
We're asking: What does responsible AI implementation look like when it's grounded rather than aspirational? What happens when AI ethics is shaped in classrooms, courtrooms, hospitals, union halls, and local governments? Who is doing the work of making responsible AI stick through innovation, repair, adaptation, and institutional resilience?
Most importantly: What are we willing to let go of to make room for what actually works?
Building Civic Competence Together
Volume 7 represents the MAIEI global community coming together to build civic competence by showcasing practical solutions. We're deeply grateful to all of you, our 17,500+ AI Ethics Brief subscribers, who have made this community possible. Your engagement, questions, and shared insights continue to shape how we approach these critical conversations about AI's role in society.
We hope this report will serve as both a practitioner's guide for policymakers, educators, community organizers, and researchers, and an entry point for anyone seeking to understand the broader landscape of AI ethics and responsible AI implementation in 2025. It's designed to help readers see the forest for the trees, offering both tactical guidance and strategic perspectives on where responsible AI stands today, while serving as a historical artifact for future generations to understand this pivotal moment.
As MAIEI transitions to becoming a financially sustainable organization (see Open Letter: Moving Forward Together – MAIEI’s Next Chapter, December 2024), we're expanding our impact while keeping our work open access, because building public understanding of AI's societal impacts shouldn't be behind paywalls.
Consider joining the SAIER Champion’s Circle:
Paid subscribers to The AI Ethics Brief will be highlighted in the Acknowledgment page of SAIER Volume 7, unless you indicate otherwise. If you're already a subscriber and enjoy reading this newsletter, consider upgrading to directly support this work, be recognized, and help us build the civic infrastructure for long-term impact.
Corporate Partnership Opportunities:
For organizations committed to advancing responsible AI implementation, we're exploring strategic partnerships for SAIER Volume 7. These collaborations allow companies and philanthropic foundations to support independent, community-centred knowledge sharing, while demonstrating a genuine commitment to AI ethics beyond corporate statements. Partnership opportunities include report sponsorship, case study collaboration, and community engagement initiatives. If your organization is interested in supporting this work, please reach out at support@montrealethics.ai.
How You Can Contribute to SAIER Volume 7
If you have case studies, policy examples, or practical insights that have been successful (or unsuccessful) in real-world applications, please reach out by responding directly to this newsletter or emailing us at support@montrealethics.ai.
We’re particularly interested in:
Implementation stories that moved beyond paper to practice
Community-led initiatives that addressed AI challenges without formal authority
Institutional experiments that navigated AI adoption under constraints
Quiet failures and the lessons learned from them
We want honest accounts of what it takes to do this work when no one is watching: the blueprints being built quietly, rigorously, and in lockstep with and for the community. We recognize that no single report can fully capture the scope of this field. That’s why we’re actively seeking diverse perspectives for Volume 7: to document what’s working, what isn’t, what often goes unseen, and where the state of AI ethics stands in 2025.
Please share your thoughts with the MAIEI community.
🚨 Here’s Our Take on What Happened Recently
The Alan Turing Institute: Change Needed for Survival
The Alan Turing Institute is facing significant pressure from the UK Government to pivot its focus or risk losing funding. At the end of 2024, 93 workers signed a letter expressing a lack of confidence in the leadership team. In April 2025, the charity announced "Turing 2.0," a pivot focusing on environmental sustainability, health, and national security, which would involve cutting up to a quarter of current research projects.
Following the Strategic Defence Review in June 2025, UK Secretary of State for Science, Innovation and Technology Peter Kyle sent a letter to the institute in July stating that it must focus on national security or face funding withdrawal. This month, workers launched a whistleblowing complaint accusing leadership of misusing public funds, overseeing a “toxic internal culture,” and failing to deliver on the charity's mission. The institute has also seen high-profile departures, including former Chief Technology Officer Jonathan Starck in May 2025, amid reports that recommendations for modernization from current Chief Executive Jean Innes have not been implemented.
📌 MAIEI’s Take and Why It Matters:
The situation at the Alan Turing Institute represents a missed opportunity. While the institute has produced valuable research on important topics, including children and AI, its current predicament raises serious questions about accountability in publicly funded research institutions.
The broader issue concerns how non-governmental citizen representation in AI research can be better protected to avoid a similar situation. From our perspective, and as reflected in this analysis of the institute, accountability is key. The institute's governance structure across multiple founding universities created challenges in establishing a unified research agenda and central operational responsibility. What has transpired transforms the institute from an independent third-party charity into, in effect, an arm of the UK government, representing a significant loss for non-governmental AI research and independent oversight in the field.
Did we miss anything? Let us know in the comments below.
💭 Insights & Perspectives:
AI Policy Corner: U.S. Executive Order on Advancing AI Education for American Youth
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines the April 23rd Executive Order on Advancing Artificial Intelligence Education for American Youth, which focuses on integrating AI education into K-12 learning environments. The Executive Order establishes a framework that promotes the benefits of long-term AI usage in education through five key strategies: creating an AI Education Task Force, launching the Presidential Artificial Intelligence Challenge, fostering public-private partnerships to improve education, training educators on AI applications, and expanding registered apprenticeships in AI-related fields.
While the Order emphasizes workforce development and preparing students for an AI-driven economy, it takes a notably different approach from previous federal AI education initiatives by focusing primarily on adoption and implementation rather than addressing potential risks or safeguards. This education-focused directive sits within the broader context of the Trump administration's AI policy framework, as outlined in the July 2025 AI Action Plan (covered in Brief #170), though it predates that comprehensive strategy by several months.
As schools begin implementing these directives, questions remain about how this approach will address concerns around equitable access, student privacy, and the potential for AI systems to perpetuate educational inequities, issues that become more pressing as AI tools become even more embedded in fundamental learning environments.
To dive deeper, read the full article here.
Victoria Forum 2025 - Beyond Consultation: Building Inclusive AI Governance for Canada’s Democratic Future
At the Victoria Forum 2025, co-hosted by the University of Victoria and the Senate of Canada from August 24-26 in Victoria, BC, MAIEI joined lawmakers, scholars and civic leaders to examine how Canada can shape AI governance rooted in both global competitiveness and democratic values. On a panel moderated by Senator Rosemary Moodie, MAIEI emphasized the need to move beyond consultation toward co-creation, embedding diverse public perspectives into every stage of AI system design. Drawing from MAIEI’s work on building civic competence and shaping public understanding, we framed AI as a socio-technical system, where governance must address both technical and societal impacts of AI. Key insights included Canada’s unique position between global models, the importance of inclusive policymaking that reflects lived experience, and the risks of relying on voluntary standards. The conversation highlighted that truly democratic AI governance demands more than technical fixes. It requires public participation, meaningful inclusion, and policy frameworks that reflect Canada’s social complexity.
To dive deeper, read the full article here.
Canadian Public Opinion: Divided Views on AI's Societal Impact
A comprehensive survey by Leger, reported by Hessie Jones for Forbes, reveals that Canadians remain deeply divided on artificial intelligence, with 34% viewing AI as beneficial for society while 36% consider it harmful. The study, which tracked AI adoption from February 2023 to August 2025, shows usage has more than doubled from 25% to 57%, driven primarily by younger adults aged 18-34 (83% usage) compared to just 34% among those 55 and older.
While chatbots dominate usage at 73%, they also generate the highest concerns, with 73% of Canadians believing AI chatbots should be prohibited from children's games and websites. The survey highlights significant privacy concerns (83%) and worries about societal dependence (83%), with Canadians primarily holding AI companies responsible for potential harms (57%) rather than users (18%) or government (11%). Notably, 46% of users worry that frequent AI use might make them "intellectually lazy or lead to a decline in cognitive skills."
Further, Renjie Butalid of the Montreal AI Ethics Institute notes that the survey findings on privacy (83% concerned) and job displacement (78% see AI as a threat to human jobs) reveal where government leadership is most needed. “These aren’t just individual consumer choices, they’re systemic issues that require coordinated policy responses. When Canadians say they want companies to regulate AI systems more, they’re really asking government to set the rules of the game. Privacy protection and workforce transition support are exactly the kind of challenges where government tone-setting through clear standards, regulations, and investment priorities can make the difference between AI serving Canadian interests or leaving communities behind.”
These insights highlight the pressing need for comprehensive governance frameworks that address both the technical and societal dimensions of AI deployment, particularly as Canada continues to develop its regulatory approach in this rapidly evolving landscape.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai.
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!