The AI Ethics Brief #189: The Futures We Make Room For
On participatory tech futures, contested language, and what comes next for AI ethics.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In this Edition (TL;DR)
Crafting Participatory Tech Futures. MAIEI, RAIN, We and AI, and the San Diego Supercomputer Center will host a workshop at ACM FAccT 2026 in Montreal (June 25-28). The session asks participants to imagine, contest, and build AI futures together, rather than treating them as inevitable.
When the Work Outgrows the Name. Building on Brief #187 and Brief #188, we reflect on the language of “AI ethics,” “responsible AI,” “AI safety,” and “AI governance,” and ask whether these terms still make enough room for the future-facing, participatory work we believe is needed now.
Tech Futures. Our collaboration with RAIN continues with a closer look at the upcoming Crafting Participatory Tech Futures workshop. The piece argues that reimagining AI governance means starting not from what industry says is inevitable, but from the futures communities actually want to build.
AI Policy Corner. In partnership with GRAIL at Purdue University, we look at how U.S. cities are beginning to govern AI. A preliminary analysis of 24 local AI governance documents shows emerging patterns around internal government use, guiding principles, prohibited applications, public transparency, and intercity collaboration.
What Connects These Stories:
The last two Briefs asked two related questions: who gets to decide the future of AI, and what do the names we use make possible? This edition brings those questions together.
The ACM FAccT workshop begins from the premise that AI futures are not delivered to us. They are imagined, contested, and built through participation. The naming piece turns that same question inward, asking whether terms like “AI ethics” and “responsible AI” still hold the full scope of the work, or whether they sometimes narrow what institutions are willing to see. Tech Futures extends this into practice, framing reimagining as a way to turn AI governance away from top-down inevitability and toward community-defined futures. AI Policy Corner grounds the question locally, showing how cities are already developing policies, inventories, prohibitions, and collaborations to govern AI where its impacts are felt.
Taken together, these pieces are about moving from inevitability to agency.
The future of AI is not a single path set by companies, governments, or technical systems. It is shaped by the language we use, the institutions we build, the harms we refuse, the communities we listen to, and the governance choices we make now.
Sometimes, the work is naming what is happening. Sometimes, it is asking whether the name still gives us enough room to do the work.
Crafting Participatory Tech Futures
In March of this year, MAIEI joined three partners (the Responsible Artificial Intelligence Network, We and AI, and the San Diego Supercomputer Center) to submit a workshop proposal to the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2026. On April 15, we learned it had been accepted.
This year, ACM FAccT comes to Montreal for the first time in the conference’s history. It will run from June 25 to 28 at Le Centre Sheraton Montréal. For a community that has spent years asking who gets to shape AI, the location feels especially meaningful.
Montreal is where MAIEI’s earliest AI Ethics Meetups took shape. Those gatherings were built around a simple belief: when people are given time, shared context, and thoughtful prompts, public conversation about AI can become deeper, more accessible, and more useful. Participants came prepared, joined small groups, surfaced different perspectives, then reconvened to share what they had learned together.
Our accepted session, titled “Crafting Participatory Tech Futures,” carries that spirit forward.
Its premise is simple, but still contested: AI futures are not something delivered to us. They are something we can deliberate, contest, and build together.
The workshop invites participants to move through three phases: grounding themselves in futures-thinking frameworks, circulating through themed spaces that surface lived experience and competing values, and then working together to articulate pathways from the present toward more desirable technological futures.
For MAIEI, this is more than a conference acceptance. It is a return to a founding conviction: AI ethics is not only something to be studied by experts. It is something people can practice together.
The proposal is available to read in full, and there are a few ways to stay involved:
🗞️ Newsletter for those with Google Accounts
🗞️ Newsletter for those without Google Accounts
📖 Read the full proposal
🇨🇦 Attend ACM FAccT 2026 in Montreal
When the Work Outgrows the Name
In Brief #187: The Myth of Inevitability, we argued that the future of AI is not decided. Inevitability is not analysis. It is often a business strategy, one that asks the public to treat resistance as futile and adaptation as the only rational response.
In Brief #188: The Names We Give Things, we argued that names do real work. They shape what we think is happening, who we think is responsible, and what we think we consented to.
In this edition, those two threads meet: if the future is not inevitable, and if names shape what becomes imaginable, then the language of the field itself deserves scrutiny.
For MAIEI, this moment invites a broader reflection on the language we use to describe the work, and whether those terms still make enough room for the futures we are trying to help build.
What the Name Made Possible
“AI ethics” has carried a lot of work.
It helped name a field before many institutions were willing to admit there was one. It also helped establish MAIEI itself, giving shape to a community committed to making the societal impacts of AI more public, more accessible, and more open to collective scrutiny.
The term gave researchers, advocates, organizers, journalists, policymakers, and members of the public a way to talk about harms that were too often treated as technical side effects. It created room for questions about fairness, accountability, transparency, privacy, inclusion, labour, power, and democracy.
But names do not only create room. They also create boundaries.
Over time, “AI ethics” has sometimes become a place where institutions put certain concerns so they do not have to transform everything else. It can become a team, a checklist, a review process, a set of principles, a public statement, or a compliance layer. In the worst cases, it becomes a way to say the right things without changing who holds power, who benefits, or who bears the consequences.
What the Name Can No Longer Hold
This is not only a problem with “AI ethics.” “Responsible AI” is contested, too.
In The Responsible AI Ecosystem: A BRAID Landscape Study, Fabio Tollon and Shannon Vallor describe Responsible AI not as a single settled idea, but as a complex ecosystem shaped differently by industry, government, academia, and civil society. The same phrase can refer to research, governance, product design, or a broad community of stakeholders, each carrying different assumptions about what responsibility means and who is meant to bear it.
That ambiguity matters. A company, a government, and a civil society organization may all say “responsible AI” while asking very different questions about responsibility, power, and repair.
The first two contributions to Chapter 2 of The State of AI Ethics Report, Volume 7 (2025) make a similar point. In “The Institutions Behind the Concepts”, Renée Sieber notes that terms like “AI ethics,” “AI safety,” and “alignment” were abandoned, reshaped, or displaced by governments in favour of frames like “AI security” and “digital sovereignty.” Sieber also points to what remains missing from many of these debates: concentration of wealth and power, environmental costs, political economy, and local perspectives on AI governance.
Fabio Tollon, who also co-authored the BRAID report, makes a related argument in “The Contested Meanings of Responsible AI”: “Responsible AI” is not meaningless, but it is doing too many things at once. None of its meanings is enough on its own.
The same concern surfaced in a different register during Timnit Gebru and Karen Hao’s recent SXSW conversation. Gebru pointed out that very different technologies are often collapsed into the same category of “AI,” making grounded conversations harder. A medical transcription system, a recommender system, a chatbot, and a speculative “AGI” project get spoken about as if they belong to one coherent thing. That flattening makes it harder to distinguish between technologies worth pursuing, technologies worth constraining, and technologies that should not be built at all.
Gebru also pushed back on how her work is often bucketed into “AI ethics,” even when she understands much of it as AI safety. That distinction matters because the labels do not simply describe the work. They locate it. They decide which rooms it enters, which institutions claim it, which funders understand it, and which communities are invited to see it as theirs.
Karen Hao made a related point from a different angle. Rather than beginning with the assumption that more AI is the goal, she asked a more basic question: what are we trying to achieve, and which technologies, including nontechnical ones, would actually help us get there?
Taken together, these reflections point to a larger problem: the names we use do not simply describe the work. They organize it. They shape what counts as expertise, what counts as risk, what counts as repair, and who is asked to carry responsibility.
Questions We Are Sitting With
That is why this moment matters for MAIEI.
Not because ethics matters less, but because it should matter more.
Ethics was never meant to be a department, a checklist, a vertical, or a brand. It was meant to shape how technologies are imagined, funded, built, governed, resisted, refused, repaired, and sometimes not built at all.
That is why the language of futures feels important right now.
Futures are not only predictions. They are claims about what should be built, what should be protected, what should be refused, and who gets to decide. When a small number of institutions dominate the language of inevitability, the public is left reacting to futures already chosen for them. But when communities are invited to deliberate, contest, and build together, something else becomes possible.
In full transparency, these are not abstract questions for MAIEI. We are actively asking how to honour the work that “AI ethics” has made possible, while also considering whether the language of the field still gives enough room to the future-facing, participatory, and ecosystem-level work we believe is needed now.
What does it mean to move from AI ethics as a label to responsibility as a practice? What does it mean to stop treating ethics as a corrective after technology has already been imagined, and instead ask who gets to imagine the future in the first place?
We do not have a final answer yet.
If you have thoughts, critiques, cautions, or language that has helped you make sense of this shift, we would welcome hearing from you. Subscribers to The Brief can reply directly to this email, or reach us at support@montrealethics.ai.
And if you will be at ACM FAccT in Montreal this June, we hope you will join us for Crafting Participatory Tech Futures, where we will be asking some of these questions together.
Sometimes, the clarifying move is not to defend the name that brought the work this far, but to ask what futures it still makes possible.
Please share your thoughts with the MAIEI community:
💭 Insights & Perspectives:
Tech Futures: Crafting Participatory Tech Futures
This edition of our Tech Futures series, a collaboration with the Responsible Artificial Intelligence Network (RAIN), announces our upcoming Crafting Participatory Tech Futures workshop at ACM FAccT 2026 in Montreal. The piece examines how futures thinking can challenge narratives of AI inevitability, and why reimagining AI governance means starting from the futures communities actually want to build.
To dive deeper, read the full article here.
AI Policy Corner: How U.S. Cities Are Governing AI
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines how U.S. cities are beginning to govern AI through local policy documents, public inventories, prohibited use cases, and intercity collaboration. The piece shows why local governments are becoming important frontline actors in shaping responsible AI governance.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai.
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!