The AI Ethics Brief #176: AI's Material Reality
Data centres, digital art, and the material costs of AI. Plus SAIER Volume 7 returns.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In this Edition (TL;DR)
SAIER Volume 7 returns: After a three-year pause, our flagship State of AI Ethics Report (SAIER) returns to address the gap between AI ethics principles and actual practice. Publishes November 4, 2025 as a Special Edition of The AI Ethics Brief.
Data centres at a cost: AI’s infrastructure boom is straining power grids and harming local communities. AWS’s recent outage exposed our growing dependency, while the word “artificial” in artificial intelligence becomes increasingly misleading. We need efficiency innovations, not endless expansion.
AI and artistic intent: AI can generate images, but intentionality, not the tool itself, determines whether output becomes art. We need more thoughtful engagement, not more prompts.
What connects these stories:
AI’s infrastructure—physical, artistic, and ethical—has real costs, and those costs fall on communities least equipped to bear them. Data centres drain local power and water while Big Tech chases compute. AI image generators flood markets with superficial output. AI ethics produces endless principles while communities wait for action. When AI is treated as ethereal or abstract, we ignore who pays the price. Moving forward requires transparency about environmental impact, genuine community consultation, and thoughtful engagement with AI as a creative tool and not as a replacement for intention. SAIER Volume 7 aims to bridge this gap, offering concrete steps to move from principles to practice.
🔎 Where We Stand
Why the State of AI Ethics Report (SAIER) returns now
Since ChatGPT’s release in late 2022, AI ethics has exploded with voices, PR campaigns, and performative commitments. Billions flow into AI infrastructure while the gap between stated principles and actual practice widens daily. More money, more momentum, less accountability. Some say we’re dealing with an AI bubble.
MAIEI published six State of AI Ethics Reports between 2020 and 2022, offering grounded analysis when the field needed it. We paused to focus on growing The AI Ethics Brief, which now reaches almost 20,000 subscribers bi-weekly. The exponential growth in both AI deployment and AI ethics theatre created an urgent need: cut through the noise, document what’s actually working (or not), and centre community voices systematically excluded from mainstream governance.
Following hundreds of conversations and a close review of over 800 pieces published on the MAIEI website since 2018, one insight stood out: the field needs connection and interpretation. There’s a growing recognition that isolated efforts across sectors contain valuable knowledge that rarely gets shared or built upon, including lessons from quiet failures that never made headlines.

These themes came into sharp focus last week during a panel discussion hosted by the Inter-Council Network, a coalition representing 400+ Canadian civil society organizations. The topic: “Justice, Resistance, and Co-Creating the Future,” as part of a webinar series on “Global Solidarity in the Age of AI: Risks, Responsibilities, and Opportunities for Civil Society.”
The questions we wrestled with in our discussion are the same ones driving SAIER Volume 7:
How do we move from consultation theatre to genuine co-creation? Governments invite civil society to 30-day consultation sprints while industry lobbyists shape policy year-round (see Brief #175 for our critique on Canada’s AI Strategy Task Force missing the mark on inclusion).
Who has expertise, and who decides? Communities living with AI’s harms possess knowledge that industry and academia systematically devalue. Indigenous data sovereignty frameworks, worker-led auditing models, municipal AI governance experiments exist and work, yet remain excluded from mainstream policy and governance conversations.
MAIEI is supporting the Canada’s AI—Our Voices, Our Future campaign, which calls for a permanent Civil Society & Communities Council, meaningful community engagement, and a Public Benefit & Equity Test for all AI recommendations.
What does global solidarity actually require? Building on conversations like these webinars to create sustained relationship-building, resource sharing across borders, coordinated resistance to vendor lock-in, and centering Global South leadership in the broader movement for just AI governance.
These aren’t just policy questions; they reflect fundamental capacity gaps. Canada currently ranks 44th out of 47 countries in AI literacy, according to a recent KPMG-University of Melbourne study. This literacy gap limits who can meaningfully participate in AI governance processes at all.
SAIER Volume 7 is built on a simple premise: responsible AI has always been as much about capacity as it is about commitment. The gap between theoretical principles and practical implementation rarely reflects a lack of intent; more often, it reflects missing infrastructure, institutional inertia, unclear mandates, or poorly designed incentives. The hard work often falls to those without formal authority, including local organizers, frontline workers, junior engineers, and researchers who work across silos.
SAIER Volume 7: AI at the Crossroads features:
17 distinct topics, 40+ external contributors from different disciplines, geographies, and lived experiences
Practitioners-first focus: What policies actually work? What interventions show measurable impact? Where are the gaps between framework and implementation?
Community-centered solutions: Moving beyond principles and declarations to concrete models of co-creation, participatory governance, and democratic AI oversight
AI is at an ethical crossroads where every dollar invested makes changing course harder. The infrastructure gets built, the dependencies deepen, the power concentrates. Our report offers a different path, grounded in what communities are already building when given resources and genuine partnership.
The governance frameworks being written now will shape decades of technological development. If we want justice, equity, and solidarity embedded in these systems, we need to be in the rooms where they’re being written as co-creators.
SAIER Volume 7 publishes November 4, 2025 as a Special Edition of The AI Ethics Brief.
Please share your thoughts with the MAIEI community:
🚨 Recent Developments: What We’re Tracking
Data centre capacity at the cost of local communities
The AI boom is creating an infrastructure crisis. A record $40-billion acquisition deal for Aligned Data Centres by a consortium led by Nvidia, Microsoft, BlackRock, xAI, MGX of Abu Dhabi, Kuwait Investment Authority, and Temasek signals unprecedented data centre expansion. Investment bank UBS projects companies will spend $375 billion on data centres globally this year and $500 billion in 2026. President Trump’s July Executive Order fast-tracked this growth by opening federally-owned land for data centre construction.
The infrastructure cost is falling on local communities. Ireland’s data centres now consume 20% of the country’s electricity, a share expected to rise to 33% within the next five years. This rapid expansion has forced Ireland to limit new construction in Dublin, citing a “significant risk” to power supplies, despite the area already hosting around 120 data centres. In Mexico, residents of Las Cenizas experienced increased water and electricity outages after a Microsoft data centre was built nearby. When a hepatitis outbreak hit over the summer, residents lacked water to wash their hands. Mexico’s national power company blamed stray animals and lightning strikes, but the timing was clear to those living there.
📌 Our Position
AI’s physical infrastructure is being built at the expense of the communities forced to host it. As Kate Crawford argued, AI is “neither artificial nor intelligent.” Her work with Vladan Joler, Anatomy of an AI System, demonstrates that AI requires massive raw materials, human labour, electricity, and water. Our late co-founder Abhishek Gupta’s “The Imperative for Sustainable AI Systems” laid out this carbon and resource cost in 2021, yet Big Tech’s compute demands continue to escalate (see: OpenAI’s Stargate project). When AWS went down recently, large swathes of the web were unusable for hours, taking down apps like Perplexity, Zoom, and Fortnite alongside websites, banks, and government services. The continued expansion of data centres will strain national power grids and disrupt the daily lives of local communities.
The path forward isn’t more data centres; it’s smarter AI. DeepSeek’s R1 model marked a step forward earlier this year (see Brief #157). The Chinese lab has since claimed another efficiency gain: converting text and documents using up to 20x fewer tokens, dramatically reducing resource consumption. These are early claims that require rigorous, third-party benchmarks to verify, given the variability in tokenization schemes and real-world deployment contexts, but the direction is promising. Efficiency gains like these deserve far more attention than plans to build more data centres across North America and beyond (such as in Malaysia). Until the industry prioritizes efficiency over expansion, local communities will continue paying the price for AI’s growth.
AI and Art: A new medium, or an unhelpful distraction?
In Brief #175, we covered generative AI’s impact on copyright. Now, a related debate is intensifying: Is AI image generation a creative tool or a threat to artistic practice?
Some artists like Kira Xonorika appreciate the unpredictability of AI image generation, which helps expand the boundaries of artistic expression. Others, like Beth Frey, initially enjoyed AI’s struggles with elements like hands, as these limitations provided nuance to image creation. She now feels that nuance is lost as the technology improves. The ease of access has also flooded art spaces with similar-looking AI-generated pieces, making it harder to distinguish thoughtful work from algorithmic output.
This homogeneity extends beyond artistic circles. Projects like Better Images of AI have documented how AI imagery, whether generated by AI or used to represent AI, has become repetitive and misleading. Search “AI images” and you’ll find the same sci-fi robots, glowing blue brains, and anthropomorphized figures. Better Images of AI argues these representations misrepresent the technology, reinforce harmful stereotypes, and limit public understanding of what AI systems actually are and do. The proliferation of AI image generators has, ironically, made this problem worse: the same generic aesthetics now flood photo libraries and content platforms, creating a self-referential cycle that obscures rather than illuminates.
📌 Our Position
The tool doesn’t determine whether something is art; the artist’s intent and process do. Pencils, paintbrushes, and canvas have always been accessible, yet not every drawing is art. What elevates an image to art is the story behind its creation, the emotions it evokes, the message it conveys, and the artist’s intentional choices.
AI can create aesthetically interesting images. What elevates that output to art is whether the artist has engaged meaningfully with the medium, whatever that medium may be. Writing five words into a prompt and calling the output art is not the same as an artist who deeply considers how AI serves their vision, reflects on its implications for their practice, and makes deliberate choices about when and how to use it.
AI is a tool. Like photography before it, it democratizes image-making. This accessibility means more people can create images, but it also means the market fills with superficial work (Note: on a related topic, we address AI’s role in misinformation and disinformation in Brief #168). Intentionality is what matters. Artists who thoughtfully consider how AI serves their vision create different work than those who simply let the algorithm do the thinking.
As Better Images of AI demonstrates, we need more intentional, diverse, and accurate visual representations of AI, whether those images are created by humans, AI, or a thoughtful combination of both.
Please share your thoughts with the MAIEI community:
❤️ Support Our Work
Consider joining the SAIER Champion’s Circle:
Help us keep The AI Ethics Brief free and accessible for everyone. Paid subscribers will be recognized in the State of AI Ethics Report (SAIER) Volume 7, publishing November 4, 2025. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!