The AI Ethics Brief #183: Blurred Lines
From health data to root permissions, AI is redrawing the boundaries of access.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In this Edition (TL;DR)
LLMs in Healthcare: OpenAI and Anthropic launch healthcare-specific modes, raising questions about the blurred line between consumer and patient data, and who ultimately controls the resulting data flows.
OpenClaw and Agentic AI: A viral open-source assistant validates concerns about AI agents that require root-level permissions across your apps, browsers, and cloud servers.
Tech Futures (New Column): Our inaugural collaboration with RAIN examines how Big Tech positions itself against science and floods education with industry-produced AI content.
AI Policy Corner: Our collaboration with GRAIL at Purdue University examines the Executive Order aiming to standardize US AI policy through litigation task forces, funding restrictions, and federal preemption of state laws.
Recess: In this instalment from Encode Canada, Emma Edney explores AI’s role in Canadian law schools and proposes Bar Exam reforms to protect the “people skills” AI can’t replicate.
What connects these stories:
This edition traces a common thread: the quiet expansion of AI into spaces previously constrained by regulation, professional norms, or simple technical separation. Whether it's LLMs ingesting both fitness tracker data and medical records, agentic assistants demanding root-level access across your digital life, or Big Tech flooding education with self-serving training content, the pattern is consistent. Boundaries are blurring, and the frameworks meant to govern these spaces haven't caught up. The question running through each piece is the same: as AI systems gain access to more of our lives, who decides where the lines are drawn, and in whose interest?
🚨Recent Developments: What We’re Tracking
LLMs Are Being Formally Introduced Into Personal Healthcare
Days apart, OpenAI and Anthropic released ChatGPT Health (January 7) and Claude for Healthcare (January 11), respectively, to their premium subscribers: tools specifically designed for users to ask healthcare-related questions. In both cases, users’ health data is not used to train the companies’ foundation models, and both add security measures around healthcare-specific conversations (such as separate memory and context windows). Overall, these modes aim to help users better understand their health in plain language by letting them share personal medical records and connect sensitive health data from fitness trackers and other connected apps.
📌 Our Position
People were turning to LLMs for medical advice long before healthcare-specific modes existed. A 2025 study of 2,000 Americans found that 39% trusted AI medical advice, a concerning figure, especially as the disclaimers warning users not to rely on LLM-derived medical advice became less prominent over time. Establishing a healthcare-specific mode with enhanced data security is a step in the right direction, helping patients make sense of complex medical records and saving medical practitioners time (e.g., by expediting authorization requests).
Nevertheless, in Brief #181, our Recess piece by Kennedy O’Neil examined the ethical grey areas of smartwatches, showing that current Canadian privacy legislation does not fully address smartwatch-related privacy breaches. With millions more US users now able to connect their health data to LLMs, these concerns are supercharged.
This raises a critical distinction: consumer health data (from fitness trackers and wellness apps) is not subject to the same protections as patient data under laws like HIPAA. When LLM healthcare modes blur these categories by ingesting both, it remains unclear which regulatory framework applies. It is also unclear whether users are meaningfully consenting when they connect these apps, or whether consent is buried in the terms of service.
These regulatory ambiguities take on added weight when we consider who ultimately controls this data. In Chapter 17 of SAIER Vol. 7, Tariq Khan examines Palantir's private sector creep into the UK healthcare system. This move imports the concerns associated with Palantir as a company (see also Brief #182: The Surveillance Infrastructure Is Now Operational) while also blurring the line between "public infrastructure and private strategy," eroding trust between governments and the public. Ceding further control to private companies, rather than to individual medical institutions, reduces the decision-making authority of both users and medical professionals over patient data. For this reason, Zoya Yasmine, in Chapter 8 of SAIER Vol. 7, highlights the role medical institutions can play in adopting a more critical approach to AI solutions in healthcare.
Consequently, if LLMs in healthcare are not intended to replace human-led care (as both OpenAI and Anthropic emphasize), patient and medical practitioner involvement must be meaningful. Yasmine notes how practitioners are often left frustrated by “opaque reasoning and insufficient control.” Whether these tools genuinely empower patients and practitioners or simply expand the private sector’s foothold in healthcare remains an open question.
OpenClaw and the Expanding Reach of Agentic AI
Over the past week, a viral open-source AI assistant called OpenClaw (formerly Clawdbot, briefly Moltbot) captured the tech world’s attention. The tool promises something different from typical chatbots: an AI that lives inside your existing messaging apps and can actually do things on your behalf, from scheduling tasks to organizing files to triaging your inbox. For the full story of its chaotic debut, including trademark disputes, crypto scammers, and a lobster mascot that briefly sprouted a human face, see CNET's coverage.
📌 Our Position
OpenClaw illustrates the tensions we explored in Brief #169: Are we trading away human agency in the name of convenience?
Signal President Meredith Whittaker has previously described agentic AI as “putting your brain in a jar.” Consider a simple task: an AI agent looks up concerts, books tickets, adds the event to your calendar, and messages your friends. To do this, it needs access to your browser, credit card, calendar, and messaging apps, with “something that looks like root permission, accessing every single one of those databases, probably in the clear.” The result, Whittaker has warned, is a “magic genie bot” that breaks the “blood-brain barrier” between apps, operating systems, and remote servers, muddying data across services in ways that fundamentally undermine privacy and security.
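To make the breadth of that grant concrete, here is a minimal, purely illustrative sketch of the kind of permission manifest Whittaker’s concert example would require. The service and scope names are hypothetical and are not drawn from OpenClaw or any real product; the point is simply how many sensitive surfaces a single “helpful” agent ends up touching.

```python
# Illustrative only: hypothetical scopes a "book tickets and tell my
# friends" agent would request. No real product's API is referenced.
AGENT_SCOPES = {
    "browser": ["read_history", "autofill", "submit_forms"],
    "payments": ["charge_card"],
    "calendar": ["read_events", "create_events"],
    "messaging": ["read_threads", "send_messages"],
    "filesystem": ["read", "write"],
}

# Scopes that effectively hand the agent control over money,
# private conversations, or arbitrary files.
HIGH_RISK = {"charge_card", "send_messages", "read_threads", "write"}

def audit(scopes: dict[str, list[str]]) -> None:
    """Print a simple report of how broad the requested grant is."""
    flagged = [
        f"{service}:{perm}"
        for service, perms in scopes.items()
        for perm in perms
        if perm in HIGH_RISK
    ]
    total = sum(len(perms) for perms in scopes.values())
    print(f"{total} permissions across {len(scopes)} services; "
          f"{len(flagged)} are high-risk:")
    for item in flagged:
        print(f"  - {item}")

if __name__ == "__main__":
    audit(AGENT_SCOPES)
```

Even this toy example spans five services under one grant, which is the “blood-brain barrier” problem in miniature: a single compromised or misbehaving agent inherits all of it at once.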
Whittaker’s warning was prescient. OpenClaw is precisely this kind of magic genie bot, and its turbulent debut validated her concerns in real time. The security concerns are not hypothetical: within days of going viral, researchers spotted publicly accessible OpenClaw deployments with little or no authentication, exposing API keys, chat logs, and system access.
The pattern should look familiar. Just as LLM healthcare modes blur the line between consumer data and protected patient data, OpenClaw collapses distinctions between apps, platforms, and permission structures. A single agent might access your calendar, messages, email, and files under one set of permissions, raising the same question: which regulatory framework governs these data flows?
Each interaction with an AI assistant generates behavioural data. Each API permission granted expands its reach. OpenClaw’s turbulent first week offers a preview of what happens when these systems scale without adequate safeguards.
Preserving meaningful human agency will require us to ask not just “what can this tool do for me?” but “what am I granting access to, and who else might benefit from that access?” The little lobster that moulted and kept going is charming. The questions it raises about autonomy, consent, and control are not.
Did we miss anything? Please share your thoughts with the MAIEI community:
💭 Insights & Perspectives:
Tech Futures: Co-opting Research and Education
We’re excited to introduce Tech Futures, a new column developed in collaboration with the Responsible Artificial Intelligence Network (RAIN). The series challenges mainstream AI narratives by centering rigorous research over industry claims. In this inaugural instalment, RAIN’s Ismael Kherroubi Garcia examines how Big Tech has positioned itself against science. From Nvidia's CEO dismissing the "PhDs" raising concerns, to AI companies claiming their products operate at "PhD level" or surpass Nobel Prize winners, the framing is deliberate: undermine the credibility of independent research while flooding education with industry-produced AI training content. The UK government's AI Skills Hub, where 60% of free content comes from tech companies, is a case in point. As the OECD recently warned, these tools risk fostering “metacognitive laziness” rather than deep learning. What stands in the way of Big Tech’s financial gains is a well-informed consumer, and controlling the AI narrative is the path they’ve chosen.
To dive deeper, read the full article here.
AI Policy Corner: Executive Order: Ensuring a National Policy Framework for Artificial Intelligence
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, spotlights the December 11 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence, which aims to create a standardized national AI policy. The piece highlights five strategies the Executive Order adopts to foster AI policy cohesion across states: creating a litigation task force, evaluating potential conflicts between state and national AI policies, tying funding eligibility to compliance, preempting certain state laws, and establishing a federal policy framework.
To dive deeper, read the full article here.
Recess: Is AI in Law School a Helpful Tool or a Hidden Trap?
This piece is part of our Recess series, featuring university students from Encode’s Canadian chapter at McGill University. The series aims to share insights from university students on current issues in AI ethics. In this piece, Emma Edney grapples with the ongoing entanglement between law and AI in Canada, focusing on how AI is used in law school and its limitations, and proposes reforms to the Canadian Bar Exam to protect the "people skills" AI can't replicate.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!