The AI Ethics Brief #177: Community-led AI Governance
The SAIER, performative AI governance, and AI lab transparency
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
Suraj Rai & Digit / Better Images of AI / https://creativecommons.org/licenses/by/4.0/
In this Edition (TL;DR)
SAIER is back: After a three-year pause, our flagship State of AI Ethics Report (SAIER) has returned to address the gap between AI ethics principles and actual practice, with an emphasis on community-led action. SAIER Volume 7 was published November 4, 2025, and includes 58 international contributors across 17 chapters, 48 essays, and 5 critical themes.
Foundational concepts and AI governance: We dig into Part I of SAIER Volume 7, which examines how institutions interpret and implement foundational concepts, including national policy responses in the US and China, organizational AI governance mechanisms, and African initiatives for ethical capacity building.
OpenAI and Anthropic’s approaches to transparency: As examined in our AI Policy Corner with GRAIL at Purdue University, both companies place a strong emphasis on transparency within their AI governance frameworks. However, they diverge in how they frame and pursue the goal of having their products deemed safe.
What connects these stories: There is an ongoing retreat from ethics to compliance. Some international, national, and regional AI policies are being watered down, succumbing to the interests of Big Tech, with ethical practice often treated as the price of faster innovation. The same is true of the organizational implementation of AI governance mechanisms, where voluntary codes with little oversight are preferred over meaningful public engagement and reflection.
Facing this trend, movements of resistance are emerging, some of which emphasize the importance of ethics as a practice, as opposed to a rampant focus on building and releasing the latest in tech. As is the ethos of our SAIER, a meaningful, community-led approach to the challenges different forms of AI present is not a “nice-to-have”; it is necessary. Without it, vulnerable communities are ignored, their interests squashed by the few who shout loudest in the name of innovation.
There is no pre-determined outcome or path for AI development, nor is there a single methodological approach. Approaches differ and desired outcomes diverge, but intentional public dialogue and discussion deserve a place at the top of the methodological agenda of those developing AI systems.
🎉 SAIER is back
On November 4th, 2025, we launched SAIER Volume 7, a snapshot of the global AI ethics arena.
This Brief #177 is the first in a special series, each instalment tackling a specific part of SAIER Volume 7. The series both celebrates the wonderful work of the report’s contributors and demonstrates how the report can be used to better navigate the changing AI ethics landscape.
Part I of SAIER Volume 7 has three chapters: (i) global AI governance, (ii) the various terms often intermingled when discussing AI ethics, and (iii) the organizational implementation of AI governance mechanisms. Together they cover the patterns we are currently seeing in the AI governance space, offering Canadian, Singaporean, and African perspectives on how AI governance practices are evolving and what this means for the space as a whole.
While the report draws on perspectives from Canada, the US, Europe, Asia, and Africa, we acknowledge that not all voices could be captured. The ethos of SAIER Volume 7 is nonetheless to present a grounded outlook on the current state of play in AI ethics, one we hope proves useful in your daily work as you navigate this dense and often obscure field.
🚨 Recent Developments on AI Ethics Foundations and Governance
The first chapter of the SAIER, titled Global AI Governance at the Crossroads, surveys the current AI governance landscape.
Jimmy Y. Huang (MAIEI and McGill University) and Renjie Butalid (Co-founder, MAIEI) provide a bird’s-eye view of the ever-changing global AI policy landscape. They identify three major blocs: those with the “superpower resources” to drive much of the conversation (China and the US), those seeking greater “AI sovereignty” through cooperation (the African Union, the Association of Southeast Asian Nations, the Council of Europe, and the Gulf Cooperation Council), and “middle powers” trying to scale their own AI champions while protecting their national interests (e.g., Canada, Australia, South Korea, and Singapore). The essay by Wan Sie Lee (Infocomm Media Development Authority of Singapore) in Chapter 2 further explains how this picture is changing, with the narrowing of “AI safety” at the policy level made clear by the Paris AI Action Summit earlier this year.
Chapter 2, Disentangling AI Safety, AI Alignment and AI Ethics, further contextualizes these AI governance developments. Renée Sieber (McGill University) distills the concepts, outlining various notions of “AI safety” (increasingly institutionalized, as the International Network of AI Safety Institutes demonstrates), “AI alignment,” and “AI ethics.” Sieber’s analysis highlights a “retreat” exemplified by the UK’s AI Safety Institute, which became the AI Security Institute earlier this year. To these concepts, Fabio Tollon (University of Edinburgh) adds “responsible AI,” a contested term that may refer to a research agenda, a governance model, an AI product, or an ecosystem of stakeholders, practices, and interests.
This brings us to Chapter 3, where Ismael Kherroubi Garcia (Kairoi & MAIEI), Joahna Kuiper (HiirAI), and Shi Kang’ethe (AIVERSE) write on Implementing AI Ethics in Organizations. While Tollon situated much of “responsible AI” in industry, Kherroubi Garcia draws on Tollon’s recent work to re-emphasize the “retreat” Sieber had hinted at, this time not at the national level but the organizational one, as “responsible AI” gives way to compliance. This is consistent with Kuiper’s analysis, which highlights the pressure organizations face to monetize AI products and respond to legal frameworks, most notably the EU’s AI Act.
While industry in the West keeps to its usual standards, Kang’ethe offers some hope for the responsible AI ecosystem from Africa, describing AI bootcamps, leadership programs, community-informed research, webinars, and hackathons that place dialogue and participants’ reflections at the centre. The goal of such initiatives is not to build the capacity to develop AI tools but the capacity to critically evaluate them. Even so, further policy fragmentation looms as African nations, including Kenya, Zambia, and Morocco, strive to lead the way.
📌 Our Position
October 31st marked the end of Canada’s “30-day national sprint to shape a renewed AI strategy.” The sprint was announced in September alongside an AI Strategy Task Force, which we criticized in Brief #175 for being too industry-focused. With several responses to the consultation now openly accessible, we can build on that critique.
An open letter coordinated by Citizen Lab researchers and director Ron Deibert, with over 120 signatories from civil society, human rights, and civil liberties organizations, condemns the sprint’s biased consultation process, which safeguarded industry interests while its short response window limited meaningful public engagement.
Further critique ensued: 70 organizations (including MAIEI) called for deeper, more meaningful engagement with minoritized communities, while Kairoi called on Canada to use its influence in the West to act more authentically in light of its Indigenous heritage.
Even as diverse organizations scrambled to provide meaningful input within the tight timeframe, the 2025 government budget, published November 4th, made very clear that AI would be further adopted across government departments to improve “operational efficiencies.” Without proper consultation with the departments in question, such implementation risks being performative and forced. Nor can we ignore that the three pillars of the Pan-Canadian AI Strategy established in 2017 (commercialization, standards, and talent and research) formed the core of the consultation, yet the recent emphasis has landed squarely on commercialization.
It is no coincidence that Brief #175 was titled “When Consultation Becomes Theatre.” The decisions already in place and the unnecessarily short turnaround reinforce the sense that public consultation on AI in Canada is performative: the point is not to gather the nation’s input on some of the most important tech policies of the decade, but to appear to do so.
The common thread running through these critiques is clear: the Canadian Government’s consultation attempt became a series of platitudes, performative and devoid of meaningful impact.
Nevertheless, the Toronto branch of ACTRA (Alliance of Canadian Cinema, Television and Radio Artists) led an effort to gather responses from an otherwise excluded group: artists. ACTRA Toronto put together suggested responses for its members to remind the government of the importance of consent, compensation, and control when it comes to AI in the arts. This is the form of consultation the sprint lacked: on-the-ground input from members of the public facing genuine threats to the core of their work.
It is this kind of engagement that will better serve the general public; its absence spells a vacuous and misaligned attempt at implementation in which those most affected are ignored.
Did we miss anything? Let us know in the comments below.
💭 Insights & Perspectives:
AI Policy Corner: Transparency in AI Lab Governance: Comparing OpenAI and Anthropic’s Approaches
This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. It compares OpenAI’s Preparedness Framework (Version 2, last updated April 15, 2025) and Anthropic’s Responsible Scaling Policy (Version 2.2, effective May 14, 2025), highlighting how each lab approaches transparency as part of its governance processes and how this shapes the level of accountability each company is willing to commit to.
The difference between the two companies’ approaches demonstrates how value-laden the AI space truly is, raising critical questions about the ambitions and strategy behind their transparency commitments. As with the Canadian Government’s performative sprint, the article notes that it remains to be seen how formalized the external dialogue both companies promise will become and, above all, how genuine an impact it will have on the decisions they make.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai.
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!



