The AI Ethics Brief #174: The Unfinished Work of AI Ethics
Revisiting Abhishek Gupta's legacy on sustainability, creative labor, and global security one year after his passing.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
In this Edition (TL;DR)
We mark the first anniversary of Abhishek Gupta’s passing by revisiting three pillars of his scholarship: sustainable AI, creative labor, and global security. His warnings about unsustainable infrastructure, the devaluation of artists’ work, and the risks of geopolitical escalation remain more relevant than ever as industry practices accelerate rather than resolve these harms.
As we prepare to release the State of AI Ethics Report (SAIER) Volume 7 on November 4, his commitment to centering affected communities and demanding accountability from powerful institutions continues to guide our work.
This issue also features Ana Brandusescu’s analysis of governance “by AI” rather than “of AI,” exposing how government partnerships with Big Tech firms entrench exploitation and dependency.
From the African Union’s Continental AI Strategy, examined in our AI Policy Corner with GRAIL at Purdue University, to Canada’s Cohere investment and government contracts with Palantir, governments worldwide risk ceding public authority to private actors.
We close with insights from the ALL IN Conference, where debates shifted from hype to implementation, raising urgent questions about sovereignty, safety, and pragmatic governance.
What connects these stories: The unfinished work of AI ethics lies in confronting power, not just technical shortcomings. From sustainability and creativity to governance and global security, this edition demonstrates how communities struggle to reclaim agency as AI adoption accelerates. The gap between rapid deployment and fragile safeguards keeps widening. Whether the harm comes through environmental costs, cultural erasure, weak governance, or geopolitical risks, the question is not what AI can do, but whether we will build democratic, accountable, and human-centered frameworks to guide its use.
Brief #174 Banner Image Credit: Distorted Forest Path by Lone Thomasky & Bits&Bäume, featured in Better Images of AI, licensed under CC-BY 4.0.
🔎 Where We Stand
As we mark the first anniversary of Abhishek Gupta’s passing, what lessons from his pioneering work on AI ethics remain urgently relevant today?
Abhishek Gupta, MAIEI’s Co-founder and Principal Researcher, sadly passed away one year ago today, on September 30, 2024. As we prepare to release the State of AI Ethics Report (SAIER) Volume 7 on November 4, 2025, we revisit three of his pieces, covering AI sustainability, impacts on creative communities, and global security, to examine where we stand today.
1. The Environmental Reckoning: Sustainable AI Systems
In 2021, Abhishek won The Gradient Prize for “The Imperative for Sustainable AI Systems.” He documented how training a single large model could emit as much carbon as five cars over their entire lifetimes and warned that only well-resourced organizations could afford the computational infrastructure for these systems, centralizing power.
Crucially, he also proposed a path forward, advocating for sustainable alternatives such as elevating smaller, task-specific models, promoting alternate deployment strategies, and designing carbon-aware systems that consider environmental impact as a first-class design constraint.
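To make “carbon-aware” concrete, here is a minimal sketch of the kind of scheduling logic such a design constraint implies: a training job is deferred until grid carbon intensity falls below a threshold. The grid_carbon_intensity() lookup and the threshold value are hypothetical placeholders for illustration, not a reference to any specific provider, standard, or to Abhishek’s own proposals.

```python
import time

# Illustrative cutoff only; a real deployment would tune this to
# the local grid mix and the urgency of the workload.
CARBON_THRESHOLD_G_PER_KWH = 200.0

def grid_carbon_intensity() -> float:
    """Hypothetical lookup of current grid carbon intensity in
    gCO2eq/kWh; in practice this would query a regional
    grid-data provider."""
    return 180.0  # placeholder value for the sketch

def run_when_grid_is_clean(job, check_every_s: int = 900) -> None:
    # Treat emissions as a scheduling constraint: poll until the
    # grid is cleaner than the threshold, then start the job.
    while grid_carbon_intensity() > CARBON_THRESHOLD_G_PER_KWH:
        time.sleep(check_every_s)
    job()

run_when_grid_is_clean(lambda: print("starting training run"))
```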
Four years later, the situation has worsened. Data centres now consume 1 to 1.5 percent of global electricity, with AI workloads accelerating demand. The International Energy Agency projects this consumption could more than double to 945 TWh by 2030. While the machine learning community has made progress on model efficiency through techniques like knowledge distillation, pruning, and quantization, this misses the larger point: aggregate environmental impact continues to grow as foundation models embed themselves in every consumer application.
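For readers less familiar with the efficiency techniques named above, the sketch below shows one of them, post-training dynamic quantization, using PyTorch’s torch.ao.quantization API. The toy model is a stand-in for illustration only; the point is simply that storing weights as int8 shrinks an individual model, even though, as noted above, aggregate footprint keeps growing with deployment scale.

```python
import os

import torch
import torch.nn as nn

# Toy stand-in for a much larger network; the same call applies to
# any module containing nn.Linear layers.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Dynamic quantization: weights are stored as int8 and activations
# are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized state_dict size, a rough proxy for memory footprint."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```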
The industry’s response has been inadequate. Companies tout efficiency improvements while simultaneously scaling deployment across more products and services. This is not sustainability. It is greenwashing that obscures the fundamental problem Abhishek identified: AI deployment far outpaces the infrastructure needed to support it responsibly.
2. The Human Cost: AI’s Impact on Artists
Abhishek co-authored “AI Art and Its Impact on Artists,” published at AIES 2023 (the AAAI/ACM Conference on AI, Ethics, and Society). The paper examined how generative image models trained on billions of scraped images were displacing working artists. Drawing on philosophers such as John Dewey and Clive Bell, the authors argued that these systems are statistical tools that appropriate human creativity without consent or compensation, not artists themselves.
The entertainment industry’s response proves them correct. Talent agencies are reportedly moving to sign Tilly Norwood, billed as Hollywood’s first AI actress: a digital construct that can be deployed without compensation, creative input, or the emotional labor human actors provide. This is not innovation. It is the systematic devaluation of human expression in favour of cheaper synthetic alternatives.
The paper cited Shudu Gram, a digital supermodel created by a white British photographer, as an example of how AI-generated personas monetize synthetic identities while bypassing the people whose cultural expressions they appropriate. The pattern is clear: industries are using AI to extract value from human creativity while cutting out the humans who created it.
The central question stands: what do we lose when we replace human creativity with computational approximation? Art emerges from lived experience, from memory, identity, and cultural interpretation. Image generators replicate surface aesthetics but cannot engage in meaning-making or emotional synthesis. They produce content, not art.
The Chilling Effect on Cultural Production
The consequences extend beyond individual artists. Many now hesitate to share work online, knowing it will be scraped into training datasets without consent. This erodes intergenerational mentorship, reduces visibility for emerging voices, and harms marginalized communities already facing barriers in creative industries. Artists who depend on online visibility for commissions face direct economic harm.
When artists stop teaching, sharing, and mentoring, cultural transmission breaks down. We risk a future where corporate-controlled AI systems, trained on a fixed corpus of past work, define the boundaries of creative expression.
3. The Geopolitical Dimension: AI and Global Security
In “AI Missteps Could Unravel Global Peace and Security” (2024), published in IEEE Spectrum, Abhishek and his co-authors warned that the deployment of AI capabilities was outpacing the development of governance, creating risks of miscalculation, unintended escalation, and the erosion of human oversight in critical decisions.
Albania’s appointment of Diella, an AI system, as its first Minister of State for Artificial Intelligence illustrates this tension. The system analyzes procurement data to identify fraud patterns, supposedly removing human bias. But this framing is misleading. It assumes the system’s training data reflects patterns worth perpetuating rather than historical corruption that should be corrected. It also obscures accountability: when the system errs, who is responsible? How do citizens challenge incorrect determinations?
This is governance theatre. Deploying an AI system creates the appearance of modernization and objectivity while avoiding the harder work of building transparent, accountable institutions with genuine human oversight.
The broader geopolitical landscape has grown more complex. AI capabilities have become markers of national power, with countries racing to develop systems for civilian and military use. The dual-use nature of these technologies creates uncertainty about adversaries’ capabilities and intentions, exactly the conditions Abhishek warned would increase the risk of miscalculation.
Why This Matters Now
As we develop SAIER Volume 7, Abhishek’s work demonstrates that responsible AI requires confronting power, not just technical problems. His scholarship centered the communities most affected by AI systems: artists losing livelihoods, populations in the Global South bearing environmental costs, citizens subject to opaque automated decisions.
The questions he raised remain urgent: How do we build governance frameworks that match the pace of technological change? How do we ensure meaningful recourse for those harmed by AI systems? How do we prevent AI capabilities from entrenching existing power imbalances?
These questions shape whether AI serves democratic values or undermines them, whether it expands human creativity or constrains it, whether it contributes to stability or fragility.
Abhishek’s legacy lives in the work of building civic competence around AI. We continue his commitment to grounding AI ethics in lived experiences of affected communities, demanding accountability from powerful institutions, and imagining futures where technology serves human flourishing rather than replacing it.
In honour of Abhishek’s memory and contributions to the field, we invite you to revisit his published works in our Special Edition from April 2025.
Consider joining the SAIER Champion’s Circle:
Paid subscribers to The AI Ethics Brief will be highlighted on the Acknowledgments page of SAIER Volume 7, unless you indicate otherwise. If you’re already a subscriber and enjoy reading this newsletter, consider upgrading to directly support this work, be recognized, and help us build the civic infrastructure for long-term impact.
How You Can Contribute to SAIER Volume 7
If your work reflects this commitment to grounding AI in lived experiences and demanding accountability, we want to hear from you, whether your initiatives succeeded, struggled, or are still evolving.
We’re especially seeking:
Implementation stories that moved beyond paper to practice
Community-led initiatives that addressed AI harms without formal authority
Institutional experiments that navigated AI adoption under constraints
Quiet failures and what they revealed about systemic barriers
Cross-sector collaborations that found unexpected solutions
Community organizing strategies that built power around AI issues
As we continue shaping SAIER Volume 7, your stories can help build a resource that is practical, rigorous, and genuinely useful for those navigating AI implementation in 2025. Together, we can document what’s working, what barriers still need addressing, and how we might move forward collectively, deliberately, and with care.
Please share your thoughts with the MAIEI community:
🚨 Recent Developments: What We’re Tracking
Privacy and Governance: From Governing AI to Being Governed by It
Ana Brandusescu’s analysis examines the shift from governance of AI to governance by AI, noting AI’s growing presence in public services, often delivered through partnerships with Big Tech companies.
This privatization trend, intensified by generative AI adoption, is producing measurable harms to human rights, workers, and the environment in three main ways:
Labour and environmental exploitation:
AI development depends on underpaid and traumatized workers in the Global South (e.g., Kenyan content moderation lawsuits against Meta/Sama).
Environmental costs are mounting: data centres consume vast amounts of water and energy, with projects such as Alberta’s proposed $70B facility threatening First Nations treaty rights.
These harms reflect forms of “digital colonialism” and “data colonialism,” where Global South labour and local environments are exploited to sustain Big Tech’s extractive model.
Problematic partnerships with human rights risks:
Governments are investing heavily in companies like Cohere (Canada’s $240M investment), which in turn partner with Palantir, a firm linked to surveillance, immigration enforcement, and military operations in conflict zones (which we covered in Brief #164).
Such partnerships entrench technical dependencies while aligning public institutions with corporations documented to cause human rights harms.
Concentration of power through acquisitions:
Canadian AI firms have repeatedly been acquired by foreign companies, creating structural dependency: Google’s acquisition of North Inc. and ServiceNow’s acquisition of Element AI illustrate recurring patterns.
These dynamics reduce Canadian sovereignty over AI infrastructure and intellectual property while concentrating global power in a handful of tech giants.
Brandusescu calls for stronger safeguards to reassert democratic oversight:
Conditional investment criteria before governments fund or partner with AI firms, to avoid cases like Cohere/Palantir.
A public registry of companies linked to human rights abuses (e.g., Palantir) to guide procurement decisions.
Mandatory algorithmic impact assessments (AIAs) with ongoing monitoring and legal enforcement.
Independent oversight to replace industry self-regulation, with international coordination to address labour and environmental harms across borders.
📌 Our Position
The evidence is clear: self-regulation has failed to prevent AI harms. Companies continue to deploy systems that violate privacy, perpetuate bias, and exploit workers, while governments fund these practices through procurement contracts. Canada’s investment in Cohere and various government contracts with Palantir, despite Palantir’s documented role in surveillance and immigration enforcement, demonstrate this failure.
The problem extends beyond individual partnerships. Governments are building critical infrastructure dependencies on companies with poor human rights records. This is not just a Canadian problem. In the UK, healthcare workers and civil society organizations are demanding cancellation of the NHS contract with Palantir. The pattern repeats globally: governments rush to adopt AI systems without adequate safeguards, accountability mechanisms, or consideration of who profits from these arrangements.
Brandusescu’s recommendations offer a starting point, but they require political will to implement. Investment criteria and company registries only work if governments enforce them. Algorithmic impact assessments only matter if they have teeth, with real consequences for noncompliance. Independent oversight only functions if it has authority and resources.
The fundamental question is whether governments will assert democratic control over AI deployment in public services, or whether they will continue ceding authority to private companies whose interests do not align with public welfare. Current trends suggest the latter, making the policy interventions Brandusescu proposes increasingly urgent.
Did we miss anything? Let us know in the comments below.
💭 Insights & Perspectives
AI Policy Corner: AI and Security in Africa: Assessing the African Union’s Continental AI Strategy
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines the African Union’s Continental AI Strategy, a unified framework designed to advance AI development across member states while addressing the continent’s unique security vulnerabilities.
The strategy outlines five key focus areas: harnessing AI for socioeconomic development, minimizing ethical and security risks, building technical capacity, fostering international cooperation, and stimulating investment, with particular emphasis on security threats including AI-enabled disinformation, extremist propaganda, and cyberattacks by non-state actors exploiting low-barrier AI tools.
The analysis reveals a critical policy tension: while the strategy strongly endorses transparency principles and AI safety assessments for emerging technologies like generative AI, it operates as a non-binding, voluntary framework that lacks detailed counterterrorism measures and practical defensive guidelines for integrating AI into national security operations. This gap is particularly concerning given that many African states face limited technical expertise, inadequate funding, and accelerating AI adoption by violent extremist groups. These challenges risk widening the implementation gap between the strategy’s ambitious vision and member states’ varying political will and capacity to build AI-ready security infrastructures.
To dive deeper, read the full article here.
ALL IN Conference 2025: Four Key Takeaways from Montreal
More than 6,500 leaders and innovators from over 40 countries gathered in Montreal for the ALL IN conference on September 24–25, 2025, where MAIEI’s Kei Baritugo identified four major themes signaling a transition from AI hype to pragmatic implementation:
Pragmatic governance: Canada announced an AI Strategy Task Force operating on a 30-day timeline to deliver recommendations by November 2025, including modernizing 25-year-old privacy laws.
Safety as a capability issue: Professor Yoshua Bengio emphasized that users need confidence AI will behave well and not do “something weird and dangerous,” framing safety and capability as inseparable.
Sovereignty in practice: AI sovereignty moved from concept to reality through TELUS’s first fully sovereign AI factory, which ensures Canadian data remains within national borders.
A maturing ecosystem: while mature agentic AI deployment remains years away due to non-deterministic outputs and organizational barriers, the shift from “What can AI do?” to “How do we ensure AI serves societal interests?” signals an ecosystem where early governance investment becomes a competitive advantage rather than mere compliance.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai.
I appreciate the promotion of Abhishek’s work featured in this brief. His work must continue. I also wanted to share a recent piece I published in AI & Society related to his tireless advocacy for sustainable AI: “Sustainable AI needs to accept economic reality.” In this short article, I move beyond gripes about uncertain estimates of environmental impact and explain how we can take lessons from the literature on voluntary environmental governance to encourage companies to disclose the environmental footprint of their AI systems. You can find it here: https://rdcu.be/eDMBw