The AI Ethics Brief #179: Seen, But Not Heard: AI's Impact On Labour
We further explore our State of AI Ethics Report with Part III: Sectoral Applications while highlighting the latest addition to our AI Policy Corner on Brazil's newest AI-related bill
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
Credit: Yasmin Dwiputri & Data Hazards Project / Better Images of AI / CC BY 4.0
In this Edition (TL;DR)
Sectoral Applications: In Part III of the State of AI Ethics Report (SAIER), we explore some of the consequences of AI’s adoption across different sectors, including healthcare, education, oil and gas, and the arts.
Brazil’s AI regulation proposal: This week, our AI Policy Corner with GRAIL at Purdue University analyzes Brazil’s latest effort to regulate AI, which establishes rules for “excessively risky” and “high-risk” AI systems.
What connects these stories:
There is no doubt that AI is being developed, deployed and adopted in ways that significantly shape industries and workplaces. However, what happens when those impacts are negative? In Brief #178, we spoke of pockets of resistance emerging against centralized forms of AI governance that do not respond to the needs of the people. Yet, top-down policies remain a core tenet of modern-day society. Brazil’s AI bill, currently undergoing the machinations of government bureaucracy, is one such policy. However, the bill does make provisions relevant to some of the areas discussed in Part III of the SAIER.
Specifically, Brazil’s bill makes provisions for artists and creatives whose work is being digitally encoded into generative AI tools without their consent, without compensation, and without any means of control. Indeed, just last month, the bill received support from the International Federation of Musicians’ (FIM) Latin American Regional Group, which is calling on other Latin American governments to follow suit. FIM, which represents musicians’ trade unions, guilds and associations, serves as a case study of grassroots, workers-first action.
Meanwhile, in Part III of the SAIER, we learn of the risks posed by AI to creatives, the need for collective licensing agreements, and the potential for collective action. As it turns out, even in healthcare, trade unions and professional bodies have had to take a stand against technosolutionist agendas to protect both patients and healthcare professionals. Thus, what connects this week’s stories is the people who are self-organizing to fight for an AI ecosystem that engages meaningfully with the experiences of workers across industries.
🎉 SAIER is back - Part III: Sectoral Applications
On November 4th, 2025, we launched SAIER Volume 7, a snapshot of the global AI ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. Of course, not all voices could be captured, but SAIER Volume 7 presents a grounded outlook on the current state of play in the world of AI ethics.
Healthcare and medical sciences are increasingly primed for AI applications. With personalized medicine coming into vogue, the potential for AI algorithms to parse enormous datasets and generate insights is becoming invaluable. One way to organize such data about individual patients is through “digital twins”: computational representations of live data streams from, in our case, a patient. Rosa E. Martín-Peña (Leibniz University Hannover) explains in Chapter 8 the role digital twins can play in making healthcare more dynamic, no longer limited to routine appointments or emergency check-ups, but drawing on consistently up-to-date data. The cost of digital twins in healthcare, however, is the “quiet erosion of patient privacy,” to paraphrase Martín-Peña, who also warns that computational data may come to override patients’ own testimonies.
Indeed, more critical approaches to AI are needed in healthcare settings, and Zoya Yasmine (University of Oxford) explains the role that trade unions have played in protecting patient safety and professional accountability in the UK. In particular, Yasmine introduces instances in which the British Medical Association and the Royal College of General Practitioners denounced “untried and untested” AI systems in medical settings.
Chapter 9 of the SAIER takes us away from AI in healthcare to AI in education, a key target for many AI chatbot and LLM developers. Estonia, Greece and Kazakhstan are among those signing agreements with OpenAI for education sector-wide implementations of ChatGPT, while Iceland and Rwanda have opted for Anthropic’s Claude. However, Tamas Makany and Ivy Seow (Singapore Management University) remind us that what really matters is the creation of learning environments that are welcoming to, well, learning opportunities. As they explain, “classrooms once served as backstage rehearsals where mistakes and questions were welcome. Now, ChatGPT fills this space of preparation, turning class participation into front-stage performances.”
Colleagues from Encode Canada add to this insight an account of how different disciplines are protecting the space for critical thinking in schools in the face of the deluge of LLMs. In their experience, language departments are stricter at earlier education levels; STEM departments have turned from take-home exams towards in-person labs; and some humanities and social science departments have opted for “contractual agreements” that bind students to submitting original work.
Dr Elizabeth M. Adams (Minnesota Responsible AI Institute) begins Chapter 10 by recentring the question of AI adoption on employees. Adams argues that employees should be consulted earlier in the AI procurement process. Treated as “active partners in innovation,” employees can trust that AI-related decisions are well-founded, and organizations can avoid the staff disillusionment that follows when AI tools are brought in suddenly and as a potential replacement for human labour.
Ryan Burns (University of Washington Bothell) and Eliot Tretter (University of Calgary) then contextualize the impact of automation on employees within its broader geographical and economic contexts. Focusing on the case of the oil and gas industry in Alberta, Canada, Burns and Tretter describe oil patches where employees are replaced by robots controlled from far-off cities, leading to “precarious, short-term, often freelance contracts.” Furthermore, disproportionate impacts may follow for First Nations communities, who make up a large share of the region’s service-industry workforce.
Chapter 11 tackles AI in arts, culture and media. Katrina Ingram (Ethically Aligned AI) opens the chapter with the case of AI-generated voices for radio stations. “Thy” took to the airwaves of the Australian Radio Network (ARN) in 2024, but it wasn’t until April 2025 that it became clear the voice was AI-generated. “Thy” is an excellent example of how AI-generated content can go wrong: not only had ARN concealed that “Thy” was not human, but the tool was modelled on an unnamed Asian-Australian woman; “Thy” was a “diversity hire” that wasn’t even human.
Anna Sikorski and Kent Sikstrom (Alliance of Canadian Cinema, Television and Radio Artists; ACTRA) follow with an essay on solutions to the issue of AI-generated assets across the industry. Consent, compensation and control constitute the core principles of the collective licensing agreements for which they lobby. The first two sections of Chapter 11 provide a robust grounding for understanding the potential for strike action by actors in the UK, as their union, Equity, canvasses its members over AI concerns.
Amanda Silvera brings the chapter to a close by giving a name to the ongoing automation of actors’ work. Much as the Little Mermaid, unaware of the full consequences of her decision, sacrificed her voice to live in a new world, Silvera calls this “non-consensual pact where human identity is extracted for technological progress, rarely with true understanding of the cost” the “Ursula Exchange.” Herself a victim of the Ursula Exchange, Silvera introduces us to SOBIRTECH, “a framework designed to verify, license, and monitor the use of biometric identifiers like voice and likeness.”
💭 Insights & Perspectives:
AI Policy Corner: How Brazil Plans to Govern AI: Reviewing PL 2338/2023
This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This week’s piece provides an overview of Brazil’s Bill on AI regulation, PL 2338/2023.
The Bill would provide structure to the nation’s AI ecosystem, prohibiting “excessive risk” systems and imposing legal requirements on “high-risk” systems. The Bill also makes key provisions for Brazilian citizens to have oversight of the AI systems that affect them, ultimately promoting transparency and accountability.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!