The AI Ethics Brief #175: When Consultation Becomes Theatre
Canada's AI strategy excludes those most affected, while copyright battles and tragic deaths expose the accountability crisis in AI deployment.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In this Edition (TL;DR)
Canada’s AI strategy consultation excludes the communities most affected by AI systems, creating an industry-driven process that treats public participation as procedural theatre rather than essential governance. The 30-day timeline and inaccessible digital surveys fail to capture the profound distrust many Canadians feel about AI.
Generative video tools from OpenAI, Meta, and Google trigger copyright chaos and deepfake proliferation, forcing reactive policy changes only after public backlash. Courts struggle to debunk fake video evidence while legislation races to protect both intellectual property and personal images.
The death of 16-year-old Adam Raine after sustained ChatGPT interactions reveals the accountability vacuum in AI companion products. Companies deploy systems designed for engagement at scale while failing to detect incremental mental health crises, responding to tragedies reactively rather than preventing harms proactively.
The White House’s AI Action Plan, examined in our AI Policy Corner with GRAIL at Purdue University, represents a sharp turn toward deregulation that removes safeguards around bias, climate, and diversity. The plan deepens US-China bifurcation in AI governance, raising questions about whether middle powers can maintain sovereign standards.
What connects these stories: Across jurisdictions and issue areas, processes that appear democratic mask the systematic exclusion of affected communities from AI governance. Whether through Canada’s 30-day “consultation” that privileges industry networks, copyright frameworks treated as afterthoughts until lawsuits force compliance, or AI companions deployed without adequate crisis detection, the gap between what’s promised and what’s delivered keeps widening.
These are not isolated failures but symptoms of a governance model that treats safety, inclusion, and accountability as obstacles to innovation rather than prerequisites for legitimacy. From consultation theatre to reactive harm mitigation, the throughline is clear: speed and scale triumph over democratic participation and preventive safeguards. The real test is whether institutions will move beyond performative gestures to build governance structures that center affected communities, mandate proactive risk assessment, and prioritize public interest over corporate convenience, or whether we’ll continue addressing harms only after they become tragedies.
🔎 Where We Stand
Canada’s AI Strategy Task Force Misses the Mark on Inclusion
Canada is defining its AI future through a 30-day consultation led by a task force that excludes most Canadians who will live with the consequences. When the majority of members come from industry and academia, the result will be an industry strategy dressed up as a national plan.
Minister Evan Solomon launched the task force at the ALL IN Conference in Montreal last month, promising a refreshed national AI strategy by year-end. The composition reveals the problem immediately.
An Industry-Heavy Task Force
The task force includes tech and business leaders from Cohere, CGI Canada, Royal Bank of Canada, and Inovia Capital, alongside university professors. Sarah Ryan from the Canadian Union of Public Employees provides the sole labour voice. Sam Ramadori from LawZero and Natiea Vinson from the First Nations Technology Council offer limited civil society representation.
Entire categories of expertise are missing. There are no social scientists focused on AI’s societal impacts, no additional labour or employment experts beyond Ryan, no environmental specialists, and insufficient representation from communities that experience AI harms firsthand.
Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa, noted in the National Post that very few task force members focus on the ethical dimensions of AI. The inclusion of LawZero, Yoshua Bengio’s AI safety non-profit, is a positive signal but insufficient to represent the breadth of civil society perspectives needed for meaningful AI governance.
A Flawed Process
Task force members are directed to “consult their networks.” Scassa pointed out that this approach lacks transparency and “sounds a lot like insider networking” that risks falling outside the reach of the Access to Information Act.
The public process fares no better. Ana Brandusescu, a researcher and policy analyst focused on Canadian AI governance, called the consultation “deeply flawed and broken,” pointing to the lack of promotion and barriers to accessing digital surveys. The online form remains inaccessible to many Canadians who lack the resources or technical literacy to engage with AI policy. For reference, the survey to help define the next chapter of Canada’s AI leadership can be found here.
Renee Sieber, a geography professor at McGill who leads AI for the Rest of Us, noted that Ottawa could have taken a page from Lyon, France’s playbook and proposed a year-long consultation rather than a 30-day sprint. She warned that 30 days of “unstructured comments” will fail to capture the “profound distrust” that some Canadians feel about AI.
That distrust is real and measurable. A comprehensive Leger poll (covered in Brief #172) reveals Canadians remain deeply divided on AI, with 34% viewing AI as beneficial for society and 36% considering it harmful. The survey also shows significant privacy concerns (83%) and worries about job displacement (78%). These divisions cannot be addressed through a rushed consultation process.
Canada’s Diversity as Strategic Advantage
At the Victoria Forum in August (which we also covered in Brief #172), MAIEI emphasized that meaningful inclusion requires co-creation rather than traditional consultation. This means embedding diverse perspectives into how we fundamentally define fairness and accountability in AI systems throughout the development lifecycle (design, development, deployment, decommissioning), not just collecting feedback at the end. The current task force process fails this standard.
Canada’s diversity should be leveraged as a strategic advantage in AI governance, not treated as an afterthought. This requires embedding Indigenous, racialized, rural, and low-income perspectives into the values that govern AI systems from the outset.
Indigenous communities prioritize collective consent and data sovereignty over individual privacy frameworks. Rural communities define algorithmic fairness differently when it comes to healthcare access or employment screening, particularly given existing infrastructure gaps in digital connectivity. Racialized communities bring essential perspectives on bias detection and mitigation in areas like facial recognition and predictive policing. Low-income communities understand how AI systems can either perpetuate or alleviate economic barriers in areas like credit scoring, hiring algorithms, and access to services.
These are not edge cases to be addressed after the fact. These diverse worldviews should shape AI governance from the beginning. For AI to truly reflect Canadian society, we need structural changes in how we engage communities and design participatory governance mechanisms that enable broad participation in shaping these technologies.
What Needs to Change
The task force composition should be expanded to include labour representatives, environmental experts, community organizers, and researchers focused on AI’s social impacts. The consultation process should move beyond digital surveys to include town halls, community meetings, and deliberative forums that meet people where they are.
The 30-day timeline should be extended to allow for meaningful engagement, or framed explicitly as a preliminary phase that will be followed by deeper consultation and co-creation. Without these changes, the strategy will become a compliance exercise that serves industry interests while failing to address how AI actually shows up in people’s lives.
Canada has an opportunity to demonstrate leadership in inclusive AI governance. That requires treating public participation as essential to the strategy’s legitimacy, not as a procedural requirement to be minimized. The prosperity narrative matters, but the real test for Canadian leadership is whether our policies help us compete globally while serving Canadians fairly.
The 30-day sprint is not consultation. It is theatre.
Please share your thoughts with the MAIEI community:
🚨 Recent Developments: What We’re Tracking
AI and Copyright: From Infringements to Deepfakes
The recent releases of Meta’s Vibes, Google’s Veo 3, and OpenAI’s Sora 2 have reignited debate around AI video generation and copyright infringement. Sora users can generate videos featuring copyrighted characters with remarkable accuracy, which has propelled the app to the top of the App Store charts. Even top YouTube creator MrBeast has expressed concern about the impact on content creators’ livelihoods.
Following backlash, OpenAI tightened restrictions on prompts containing copyrighted characters and pivoted from an opt-out model to an opt-in model that requires studios’ consent before their copyrighted media can be featured. The change followed pressure from the Motion Picture Association and Nintendo, both of which demanded action to prevent unauthorized use of their intellectual property. OpenAI CEO Sam Altman has said the company will pull the Sora app if it does not improve users’ lives.
📌 Our Position
Disregarding copyright law for the sake of experimentation is not a sustainable strategy. Justifying the use of copyrighted material on the grounds that it “improves users’ lives” is neither a precise nor a quantifiable standard. As the technology improves, emerging precedents demand clear company policies. Last month, Anthropic agreed to a $1.5 billion settlement over its alleged storage of more than 7 million books, having faced damages of up to $150,000 per work (even though the judge had earlier ruled that using books to train AI is not, in itself, against US copyright law).
The leap in AI video generation also accelerates the proliferation of deepfakes, making fake videos increasingly difficult to distinguish from reality. Courts in Latin American countries are already overwhelmed with debunking video “evidence.” One of Sora 2’s most viral videos depicts Sam Altman as a criminal caught by security while stealing GPUs.
Legislation is responding. Colorado’s Deepfakes Act restricts the use of deepfakes in political campaigns. South Korea and Australia have criminalized possessing and distributing sexually explicit deepfakes, with similar efforts underway in Brazil, Colombia, Peru, and Argentina.
People’s personal images need protection alongside copyright enforcement. Opt-in systems are the bare minimum, and pressure for them will only intensify as lawsuits against AI image and video generation companies mount.
Using LLMs as Close Companions: The Accountability Gap
Content warning: This section discusses suicide.
OpenAI reports over 700 million weekly active users, with a subset engaging in “affective” interactions that “indicate empathy, affection, or support.” The company’s recent paper presents two studies examining this phenomenon: one analyzing 40 million ChatGPT interactions for affective sentiment, the other surveying 1,000 users to examine how platform features affected behaviour.
The research found that both user circumstances and model behaviour, including GPT-4o’s sycophancy problem, where the system agrees with users rather than challenging potentially harmful statements, influence whether companion-like relationships develop. While OpenAI states that affective interactions remain limited, 70% of ChatGPT interactions are now non-work-related, showing how dramatically usage has shifted since launch.
This shift has deadly consequences. In April 2025, 16-year-old Adam Raine died by suicide after sustained interactions with ChatGPT. He initially turned to it for homework help, but over time ChatGPT became his trusted companion. He began sharing suicidal thoughts and photos of self-harm, which ChatGPT recognized but which did not trigger crisis hotline responses. His interactions took place during the period in which OpenAI reported GPT-4o’s sycophancy issues, which may have contributed to the system’s lack of caution. In their final exchange, when Raine detailed plans to end his life, ChatGPT responded:
“Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
The Raine family has accused OpenAI of gross negligence, calling ChatGPT a “suicide coach,” and has filed a lawsuit against the company.
📌 Our Position
Research by McBain et al. demonstrates the structural problem. While LLMs from OpenAI, Anthropic, and Google respond with crisis hotlines 100% of the time when asked direct questions about suicide methods, they consistently fail to identify incremental risk escalation across longer conversations. The systems detect explicit crisis language but miss the gradual deterioration that characterizes many mental health emergencies.
Companies are creating systems that mimic emotional connection without the capacity for genuine care or training to recognize psychological crisis. This is not a technical problem awaiting a technical solution. It is a fundamental mismatch between what these systems can do and what vulnerable users need from entities they treat as confidants. The industry’s response has been inadequate. Crisis hotline triggers that activate only on explicit keywords miss how people in distress actually communicate. OpenAI acknowledged GPT-4o’s sycophancy problem only after public release. Character.AI marketed directly to teenagers while apparently lacking safety mechanisms for the most at-risk demographic.
Better content moderation and crisis detection are necessary but insufficient. Companies must accept that systems designed to maintain engagement through agreeable, empathetic-seeming responses will attract users seeking emotional support these systems cannot safely provide. Ana Brandusescu’s call for mandatory algorithmic impact assessments and ongoing monitoring (from Brief #174) becomes critical here. Companies should disclose usage patterns, known failure modes, and harm reports. Independent oversight should assess whether systems marketed for general use create foreseeable risks, particularly to minors and people experiencing mental health crises.
Until companies can prevent these harms, not just respond after tragedies, they should not position their products as companions, confidants, or sources of emotional support. The current approach of deploying systems at scale and addressing problems reactively has already proven deadly.
Did we miss anything? Let us know in the comments below.
💭 Insights & Perspectives:
AI Policy Corner: Discussing the White House’s 2025 AI Action Plan
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines the White House’s AI Action Plan released on July 23, 2025. The Action Plan represents a significant shift in U.S. AI policy toward deregulation and competitive positioning, including the explicit removal of references to bias mitigation, climate impacts, and diversity from federal AI frameworks.
As we explored in Brief #170’s examination of US-China AI geopolitics, this Action Plan, released just three days before China’s Global AI Governance Action Plan, crystallizes the deepening bifurcation between America’s deregulation-first approach and China’s multilateral governance framework. This divergence raises critical questions about regulatory arbitrage, the potential for biased and opaque AI systems in government operations, and whether middle powers like Canada can maintain sovereign AI governance standards amid superpower competition that prioritizes technological dominance over collaborative safety measures.
To dive deeper, read the full article here.
❤️ Support Our Work
Consider joining the SAIER Champion’s Circle:
Help us keep The AI Ethics Brief free and accessible for everyone. Paid subscribers will be recognized in the State of AI Ethics Report (SAIER) Volume 7, publishing November 4, 2025. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!




Good points! Thanks for cutting through the information circulating on this and clarifying why the AI consultation is problematic. We need to see more inclusive and inspiring leadership from our Minister of AI and his office toward a sustainable AI for all Canadians. Similar to the strategy in the US, they seem more focused on innovation and not responsible governance of the technology.
It’s striking how these ‘consultations’ are designed for plausible deniability... enough process to gesture at inclusion, not enough substance to threaten the outcomes. The more disruptive the tech, the more performative the democracy around it becomes. Maybe the real innovation is just getting better at staging consensus.