The AI Ethics Brief #186: Sovereign by Design. Accountable by Whose Standard?
Canada's sovereignty moment, Anthropic's values test, and the accountability frameworks we still need to build.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In this Edition (TL;DR)
Sovereign by Design. But Whose Design? Canada is building an AI sovereignty strategy at the infrastructure layer. Tumbler Ridge revealed we have no equivalent framework at the accountability layer. We examine what that gap costs, and what Canada should demand to close it.
When Principles Meet Power: The ideological clash between the US Government and Anthropic over a defense contract raises a question that will not go away: when AI companies sign contracts with governments, whose values govern the outcome?
Tech Futures: Our collaboration with RAIN examines how Big Tech is borrowing from the fossil fuels industry's playbook, shaping research to serve its own interests and funding fiction to control how AI is perceived.
AI Policy Corner: In partnership with GRAIL at Purdue University, we analyze Illinois Public Act 103-0804 and what it means when a government draws a clear line between algorithmic convenience and civil rights accountability.
What connects these stories:
Each piece in this edition asks the same underlying question: who decides? In Tumbler Ridge, a foreign company decided what constituted a credible threat to Canadian children. In the Anthropic dispute, a government tried to override the values a company had built into its own system. In Big Tech’s research funding and filmmaking investments, industry decided what stories get told about AI. In Illinois, the legislature decided that employers cannot outsource civil rights responsibility to an algorithm.
Across all four, the pattern is the same. Technological capability is advancing within governance frameworks that remain uneven, contested, and frequently outpaced. The question is not whether those frameworks will be built. It is who builds them, and in whose interest.
Sovereign by Design. But Whose Design?
Canada is having a sovereignty moment. The Munk School’s Sovereign by Design report, released this month, makes the case clearly: AI sovereignty means freedom from coercion, not digital isolationism. It maps our vulnerabilities layer by layer — cloud infrastructure, compute hardware, foundation models — and charts a path toward strategic autonomy. It is serious, rigorous work, and it arrives at exactly the right moment.
But while Ottawa debates juridical cloud sovereignty and the CUSMA review, a different kind of sovereignty failure played out in British Columbia. In June 2025, OpenAI banned Jesse Van Rootselaar’s ChatGPT account after it was flagged for troubling content, including scenarios of gun violence. The company determined internally that the activity did not meet the “higher threshold required” to refer it to law enforcement. Months later, on February 10, 2026, Van Rootselaar killed eight people at a Tumbler Ridge school. Maya Gebala, 12 years old, remains hospitalized.
When AI Minister Evan Solomon met with OpenAI officials in February, he left disappointed. “We expected [OpenAI] to have some concrete proposals,” he said. “We did not hear any substantial new safety protocols outside of some changes to their model.” Public Safety Minister Gary Anandasangaree was blunter: “Nothing substantial came out of it.”
This is what sovereignty failure actually looks like. Not a foreign government invoking the CLOUD Act. Not a hyperscaler denying service under geopolitical pressure. It is a California company making a unilateral judgment call about what constitutes a credible threat to Canadian children. And that judgment being wrong in ways that cannot be undone.
The Failure After the Flag
The uncomfortable truth is that OpenAI's detection systems worked. They flagged the account. About a dozen employees debated whether to escalate. They decided not to. The failure was not in the tooling; it was in what came after, because there was no clear, mandated protocol for what "after" should look like. A dozen people improvising a judgment call tells you the company had no defined framework for exactly this scenario. That is a governance gap, not a company failure. And it exists across the entire AI industry.
The Sovereign by Design report defines sovereignty across five dimensions: jurisdictional, operational, technological, societal, and economic. Tumbler Ridge sits uncomfortably across all of them. A foreign company's internal policy determined what happened to information about a planned mass casualty event on Canadian soil. No Canadian law, regulator, or institution had meaningful standing. No domestic alternative existed. No leverage to demand different terms. And the societal dimension the report warns about, in which AI systems shape outcomes that Canadian institutions cannot see or counter, was not an abstraction. It was a school hallway in northern British Columbia.
Three Things Canada Should Demand
First, a mandatory reporting framework. We have mandatory reporting in healthcare and education because we decided as a society that the duty to warn outweighs privacy concerns in high-stakes contexts. When a doctor or teacher detects credible risk to a child, there is a protocol. Everyone knows what to do. AI platforms that hold intimate conversational data, the kind of data users share with no one else, should operate under a similar standard. Not blanket surveillance. A defined threshold, a tiered escalation pathway, human review, and oversight. Mental health crisis referrals for lower-risk signals. Law enforcement for imminent ones. (We sketch an illustrative version of such a pathway after the third demand below.) That framework belongs to Canada to define, not to individual companies deciding ad hoc behind closed doors.
Second, transparent disclosure. Every AI company operating in Canada — not just OpenAI, but every platform collecting conversational data from Canadian users — must publish its detection and escalation policies. What triggers a flag, what triggers a ban, what triggers a referral. That should be the cost of doing business here. Right now, these are internal decisions made by private companies with no public accountability, applied to Canadian users under no Canadian standard. Tumbler Ridge revealed that gap. The next platform to face this decision should not make it in a vacuum.
Third, cross-platform coordination. A ban on one platform should trigger a coordinated review across others and a risk flag to relevant authorities. Van Rootselaar's account was banned from ChatGPT. YouTube and Roblox were also implicated in this case. Each platform responded independently, after the fact. That silo structure is a policy failure with consequences.
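What might a tiered escalation pathway look like in practice? Here is a minimal, purely illustrative sketch in Python. Every threshold, category name, and destination in it is hypothetical; the point of the first demand above is precisely that the real version would be defined by Canadian law and regulators, not by platform engineers.

```python
# A purely illustrative sketch of a tiered escalation pathway of the kind
# described above. All thresholds, categories, and destinations are
# hypothetical; a real framework would be set in law, not platform code.

from dataclasses import dataclass

@dataclass
class RiskSignal:
    severity: float   # model-estimated risk score, 0.0 to 1.0
    imminent: bool    # does the content suggest a concrete, near-term plan?

def escalate(signal: RiskSignal) -> str:
    """Route a flagged account to a defined destination instead of an ad hoc debate."""
    if signal.imminent and signal.severity >= 0.8:
        return "refer_to_law_enforcement"       # imminent threat: mandatory referral
    if signal.severity >= 0.5:
        return "human_review_within_24h"        # elevated risk: mandated human review
    if signal.severity >= 0.2:
        return "mental_health_crisis_referral"  # lower-risk signal: support pathway
    return "log_and_monitor"                    # below threshold: documented, auditable

print(escalate(RiskSignal(severity=0.9, imminent=True)))  # refer_to_law_enforcement
```

The design point is not the specific numbers. It is that every flagged account lands somewhere defined, with a record, rather than in a room where a dozen employees deliberate without a protocol.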
Minister Solomon said all options are on the table. That framing is welcome, but options without timelines are not accountability. The family of Maya Gebala filed suit in B.C. Supreme Court last week. A coroner’s inquest has been announced. These are the mechanisms of accountability after the fact. The harder work is building the mechanisms beforehand.
Canada’s AI sovereignty conversation is maturing. The Sovereign by Design report reflects serious thinking about what it means to govern AI in a fracturing international order. But sovereignty is not only about who controls the infrastructure. It is about who decides what happens when that infrastructure is used to plan harm against your people. Right now, that decision belongs to someone else.
Sovereign by design has to mean more than sovereign compute. It has to mean sovereign accountability, too.
When Principles Meet Power: The Question of Alignment
Sovereignty is not only about infrastructure or accountability frameworks. It is also about what happens when the values embedded in AI systems collide with the values of the governments trying to deploy them. Anthropic just found that out the hard way.
At a basic level, AI alignment refers to the techniques and frameworks that aim to match an AI system's actions with human intentions. Misalignment, typically, is when an AI's output diverges from the outcome a human wanted, as when the AI-operated boat in the video game Coast Runners optimized for points by crashing into walls instead of finishing the race as intended. In the case of the US Government and Anthropic, however, the misalignment was ideological.
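To see how a reward can be technically satisfied while the intent is missed, here is a toy sketch of reward misspecification. It is not the actual Coast Runners environment; every number and policy name here is hypothetical. The proxy reward pays for hitting respawning bonus targets, so over a fixed horizon a points-maximizing agent prefers looping to finishing.

```python
# A toy illustration of reward misspecification (hypothetical numbers, not
# the real Coast Runners environment): the proxy reward pays for hitting
# respawning bonus targets, while the intended goal is finishing the race.

HORIZON = 20          # time steps in one episode
FINISH_LINE = 10      # steps of progress needed to finish the race
FINISH_REWARD = 10    # points for crossing the finish line
TARGET_REWARD = 3     # points per hit on a respawning bonus target

def episode_score(policy: str) -> int:
    """Total proxy reward collected over one episode under a fixed policy."""
    score, progress = 0, 0
    for _ in range(HORIZON):
        if policy == "race_to_finish":
            progress += 1
            if progress == FINISH_LINE:
                score += FINISH_REWARD   # intended behaviour, small payoff
        elif policy == "loop_on_targets":
            score += TARGET_REWARD       # circle the same respawning target
    return score

print(episode_score("race_to_finish"))   # 10 -> what the designers wanted
print(episode_score("loop_on_targets"))  # 60 -> what the reward optimizes
```

Under the proxy reward, the looping policy scores six times higher. The boat was not broken; the objective was.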
At the centre of this struggle is the US Government's desire to renegotiate the terms of a $200 million defense contract signed with Anthropic in July 2025. Anthropic stipulated that it wanted nothing to do with autonomous weapons or mass domestic surveillance (it permits the use of Claude for foreign surveillance), while the US Government wanted to change the agreement to allow “all lawful uses” of autonomous weapons and surveillance. Anthropic CEO Dario Amodei released a statement fundamentally disagreeing with the US Government's demands, leading to Anthropic being designated a “supply chain risk.” Consequently, Anthropic is suing the US Government, with employees from OpenAI and Google writing letters of support.
Amid all of this, OpenAI entered into negotiations and secured the contract that Anthropic had been awarded. The speed and timing of the deal raised serious internal concerns, with OpenAI CEO Sam Altman labelling his own company’s decision “opportunistic and sloppy” amid a surge in uninstalls of the ChatGPT app. Altman claims that the terms of the agreement have been changed to prohibit mass domestic surveillance of Americans. However, the circumstances surrounding the deal, and OpenAI employees’ support of Anthropic, have left the situation mired in controversy.
This situation makes one thing clear: when AI companies sign contracts with governments, they are not just making business decisions. They are making political ones. Anthropic drew a line — anchored in Claude’s Constitution, its internal framework governing how Claude should think and behave — and the US Government pushed back. That is not a procurement dispute. That is an ideological conflict, and it will not be the last one.
Anthropic’s willingness to walk away from a $200 million contract on principle is notable. The real test is whether they hold that line when the pressure compounds. When the losses are larger, the contracts more entangled, and the political cost of refusal higher.
As for OpenAI, the speed with which they moved to secure the contract speaks for itself. The gap between what AI companies say about safety and what they do when there is money on the table is a gap worth watching closely.
The deeper issue is this: private AI companies are now making decisions with citizen-wide consequences that governments, democratic institutions, and the public have no meaningful role in shaping. That is not a side effect of the AI industry. It is becoming its defining feature.
Please share your thoughts with the MAIEI community.
💭 Insights & Perspectives:
Tech Futures: The Fossil Fuels Playbook for Big Tech: Part I
This article is part of our Tech Futures series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). In this special instalment, we consider two ways in which the Big Tech industry draws on the playbook of the fossil fuels industry: shaping research to serve its own interests and funding fiction to control how AI is perceived. The parallels are not coincidental. They are strategic.
To dive deeper, read the full article here.
AI Policy Corner: An Overview of Illinois Public Act 103-0804
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, analyzes Illinois Public Act 103-0804 and what it means for how AI can be used in employment decisions. The law is a concrete example of what it looks like when a government draws a clear line between algorithmic convenience and civil rights accountability.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!