Special Edition: State of AI Ethics Report (SAIER) Volume 7
AI at the Crossroads: A Practitioner's Guide To Community-Centered Solutions

📌 Editor’s Note
After months of collaborative work with 58 contributors from around the world, we’re sharing this assessment of where AI governance stands in 2025: what’s working, what’s failing, and where urgent action is needed.
Editors: Renjie Butalid, Connor Wright, Ismael Kherroubi García
This volume is dedicated to the memory of Abhishek Gupta, MAIEI co-founder, whose spirit and vision continue to guide our work.
Access the report: MAIEI website | Zenodo
Why This Report Matters Now
2025 will be remembered as the year AI governance became intensely geopolitical. Within days of each other in July, the United States and China published competing visions for AI’s future. Meanwhile, the gap between those building AI systems and those impacted by them has grown wider and more structural.
A recent MIT report found that 95% of generative AI pilots are failing, not because the technology doesn’t work, but because implementation infrastructure, literacy, and participation mechanisms are broken.
If 2025 taught us anything, it’s that AI governance is fundamentally about power: who has it, who controls it, and how communities respond when excluded from decisions that shape their lives.
SAIER Volume 7 documents this crossroads moment through practitioner perspectives, real-world case studies, and critical analysis across 17 chapters and five major themes.
A Note on What This Report Is
This is not another framework or productivity analysis. SAIER Volume 7 is a snapshot from our vantage point: perspectives from practitioners, researchers, advocates, and communities grappling with AI’s realities on the ground. It captures perspectives from Canada, the US, Europe, Asia, and Africa, though we recognize many voices and regions remain underrepresented.
This isn’t the definitive state of AI ethics. It’s a state of AI ethics, from where we sit, in 2025. We hope it raises more questions than it answers and encourages readers to keep asking hard questions and to stay critical.
At the end of the day, AI isn’t just about machines, models, or infrastructure. It’s about people. What role do we want humanity to play in shaping a future increasingly mediated by algorithms?
What’s Inside
Opening Reflections
We open with reflections from Renjie Butalid (MAIEI Co-Founder), Marianna Ganapini (MAIEI Faculty Director), and Daniel Schiff (Co-Director of GRAIL, Purdue University), situating this volume at a critical inflection point for AI governance and MAIEI’s mission to democratize AI ethics discourse.
Part I: Foundations & Governance
Mapping the foundations of AI governance in 2025: competing global policies, conceptual underpinnings, and practical implementation mechanisms.
Key questions: What does AI sovereignty mean for middle powers navigating between the US and China? How do we disentangle AI safety, alignment, and ethics? What does moving from principles to practice actually require?
Chapter 1: Global AI Governance at the Crossroads
Chapter 2: Disentangling AI Safety, AI Alignment and AI Ethics
Chapter 3: From Principles to Practice: Implementing AI Ethics in Organizations
Part II: Social Justice & Equity
How AI systems reproduce inequalities in housing, policing, education, elections, and surveillance. This section examines how communities are resisting these harms, the damage unchecked technological advances inflict on democratic institutions, the limits of algorithmic justice efforts, AI's role in surveillance systems, and the environmental costs of AI infrastructure.
Key insights: AI-powered disinformation reshaping democratic participation | Why algorithmic justice efforts often fail to challenge state power | The hidden environmental costs of AI infrastructure
Chapter 4: Democracy and AI Disinformation
Chapter 5: Algorithmic Justice in Practice
Chapter 6: AI Surveillance, Privacy, and Human Rights
Chapter 7: Environmental Impact of AI
Part III: Sectoral Applications
What’s happening on the ground across healthcare, education, labour, and the arts, including deployments that went wrong and the first collective agreements protecting workers from AI exploitation.
What you’ll find: Lessons from healthcare AI failures | Universities grappling with generative AI | Labour justice in sectors like oil and gas | The first collective agreements protecting Canadian performers
Chapter 8: Healthcare AI: When Algorithms Meet Patient Care
Chapter 9: AI in Education: Tools, Policies, and Institutional Change
Chapter 10: AI and Labour Justice
Chapter 11: AI in Arts, Culture, and Media
Part IV: Emerging Technologies
Technologies at the frontier: military AI, autonomous weapons, AI agents operating with minimal human oversight, and democratic alternatives to corporate-controlled AI. This section explores the relationship between AI and the military, what the deployment of agentic AI systems means going forward, and how to champion community-driven AI applications.
Critical analysis: The new military-industrial complex | What it means when AI agents act with minimal intervention | Community-led approaches and open models as democratic infrastructure
Chapter 12: Military AI and Autonomous Weapons
Chapter 13: AI Agents and Agentic Systems
Chapter 14: Democratic AI: Community Control and Open Models
Part V: Collective Action
Stories of collective power: how communities are building AI literacy as civic competence, how nonprofits are organizing, and what governments are learning from public sector implementations. This section paints a grounded and hopeful picture of what collective action looks like in a world increasingly influenced by algorithms.
Highlights: Indigenous approaches to AI governance | How civil society organizations are shaping AI despite resource constraints | Lessons from public sector successes and failures
Chapter 15: AI Literacy: Building Civic Competence for Democratic AI
Chapter 16: Civil Society and AI: Nonprofits, Philanthropy, and Movement Building
Chapter 17: AI in Government: Public Sector Leadership and Implementation
By the Numbers
This volume brings together 58 international contributors across 17 chapters, 48 essays, and 5 critical themes.
Contributors from: University of Oxford, University of Cambridge, Infocomm Media Development Authority of Singapore, Governance and Responsible AI Lab at Purdue University, We and AI, Alliance of Canadian Cinema Television and Radio Artists, and more.
Please share your thoughts with the MAIEI community.
Please help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai

This report underscores that AI governance isn’t just about technology; it’s about power, equity, and community involvement. It highlights how practitioners worldwide are grappling with these real-world challenges.