The AI Ethics Brief #180: Emerging Tech with Established Issues
We wish you Happy Holidays before diving into our State of AI Ethics Report Part IV: Emerging Technologies, and Sun Gyoo Kang explores AI agents and their effect on e-commerce
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
Credit: Ada Jušić & Eleonora Lima (KCL) / Better Images of AI / CC BY 4.0
In this Edition (TL;DR)
Emerging Technologies: In Part IV of the State of AI Ethics Report (SAIER), we explore emerging applications and scientific practices shaping the AI ethics landscape.
Agentic AI systems: In an op-ed, Sun Gyoo Kang explains the potential for AI agents to transform e-commerce.
What connects these stories:
With new technologies come new considerations and unknowns. AI agents (and agentic AI in general) boast efficiency and productivity gains, while also ushering in new responsibility, culpability, and privacy considerations. AI in the military promises to keep soldiers out of harm's way, but it also lowers the barrier to entry for mass conflict.
Nevertheless, examining these situations reveals common threads that run through all of them: the need for transparency and explainability. Being unable to explain AI agents’ courses of action risks frustration and powerlessness. In the military, these concerns are multiplied by the context, scale, and intended usage. Consequently, Chapter 15 of the SAIER focuses on reclaiming that lost agency, and on what openness means in the process of regaining it.
No matter what innovation comes next, explainability and transparency only become more important, both for our own peace of mind and our agency.
🎅 Happy Holidays
2025 has been a year of growth, learning, and gratitude for the team at MAIEI. A year has now passed since the death of our Co-Founder and Principal Researcher, Abhishek Gupta, whose wisdom and insight are sorely missed in these tumultuous times for AI ethics. Nevertheless, we at MAIEI are proud to be continuing his legacy in our daily work, interactions, and discussions.
This year saw the return of the SAIER with Volume 7, which we have been commenting on over the past few editions of the Brief. At its core, the report centred community-led insights, offering rich commentary on the impact of AI across different sectors. With 58 external contributors across 17 chapters, we saw the true value of the work we do: our community. For that, we are truly thankful to all who have contributed, and to all those continuing to read the report.
Nevertheless, the AI ethics wheel keeps on turning. From major developments in LLMs and AI agents to chatbot companions and new AI regulation, there is never a dull moment in this space. These constant updates can cause confusion, stress, and worry, but they also spark necessary conversations about how best to manage and mitigate their consequences. It is these conversations that will ultimately determine how the future of AI pans out, and we are raring to go again in the New Year to help make sense of it.
From all of us at MAIEI, we wish you a very happy festive period, and we look forward to returning with more Briefs, more commentary, and more energy in 2026.
🎉 SAIER is back - Part IV: Emerging Technologies
On November 4th, 2025, we launched SAIER Volume 7, a snapshot of the global AI ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. Of course, not all voices could be captured, but SAIER Volume 7 presents a holistic understanding of the current state of play in the world of AI ethics. Part IV of the report serves as a primer on emerging applications of AI, ranging from the military to agentic AI, and to democratic AI practices.
Metaphors about AI—its “black-box” nature and the “AI arms race”—are informing very real geopolitical strategies that diminish any form of accountability. In the opening of Chapter 12 on military AI and autonomous weapons, Ayaz Syed (The Dais) paints a clear picture of how AI impacts military contexts. Firstly, national defence strategies emphasize the need to invest more in AI. Secondly, the now literal AI arms race compresses decision windows, reducing complex risk evaluations to much quicker forms of automated calculus. Thirdly, opaque industrial and financial agreements are being brokered behind closed doors, circumventing the robust governance mechanisms of more traditional procurement practices. Finally, global coalitions are forming to protect human rights.
This final point is expanded on by Kirthi Jayakumar (civitatem resolutions), who introduces a variety of initiatives, including Stop Killer Robots, No Tech for Apartheid, No Tech for Tyrants, the Women’s International League for Peace and Freedom, Derechos Digitales and Global Partners Digital. In this context, AI ethics advocates sometimes appear in surprising places, such as the Pope, who stated last week, “there is ... a growing tendency among political and military leaders to shirk responsibility, as decisions about life and death are increasingly ‘delegated’ to machines.”
What happens when that “delegation” is taken to its fullest? What happens when AI tools not only perform specific tasks or inform specific decisions, but also run entire workflows and even interact with one another? Chapter 13 introduces “AI agents” and “agentic AI systems,” which Renjie Butalid (MAIEI) defines respectively as “narrowly scoped systems that automate specific tasks through tool integration and structured prompts,” and “systems that orchestrate multiple specialized agents, maintain persistent memory across sessions, decompose objectives into subtasks, and operate in self-coordinating ways.” The term itself can be confusing, ascribing what seems to be human agency to AI systems. Relatedly, Butalid cites evidence showing that “augmentation” is what workers want AI agents to do, not “replacement.” This provides a neat backdrop to the ongoing backlash against Microsoft, which has been touting the launch of AI agents in Windows 11. Kathy Baxter (Salesforce) concludes Chapter 13 with a critique, highlighting the need for traditional AI model governance to shift towards the governance of AI agents. What’s more, Baxter poses an important question: “when is full autonomy appropriate, and when must humans remain in the loop?”
During this boom of AI innovation, many have seen a need to reclaim the agency that seems to be handed over to machines. Chapter 15 addresses this process of reclaiming, beginning with examples of cooperative governance models described by Jonathan van Geuns. As they explain, communities must be involved in decisions about AI when AI technologies rely on the water, power, and land people need. David Atkinson (Georgetown University) follows with a critique of a common view on “democratizing” AI: that making models “open” is good enough. Atkinson points out a key flaw in this logic: making extremely technical content available to the masses does not mean that the masses can make much sense of it. Ismael Kherroubi Garcia (Kairoi, RAIN & MAIEI) provides a further critique of “openness,” a term co-opted by Big Tech, whereby “open source” has come to mean sharing models’ weights, rather than what actually makes a model open: the capacity to reuse, study, modify, and share it.
💭 Insights & Perspectives:
Op-ed: Agentic AI Systems
In this op-ed, Sun Gyoo Kang (Law and Ethics in Tech) comments on the new phase of e-commerce marked by the introduction of AI agents. He highlights the reorientation of common marketing strategies from influencing human psychology to machine logic as more tasks get delegated to AI agents. This shift will bring significant changes and risks, with businesses facing new forms of liability concerns and cybersecurity vulnerabilities.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!