<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The AI Ethics Brief]]></title><description><![CDATA[Democratizing AI Ethics Literacy.]]></description><link>https://brief.montrealethics.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!nnck!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9b90258-7c55-43c3-804f-2f1634d3f18c_800x800.png</url><title>The AI Ethics Brief</title><link>https://brief.montrealethics.ai</link></image><generator>Substack</generator><lastBuildDate>Sun, 19 Apr 2026 00:26:44 GMT</lastBuildDate><atom:link href="https://brief.montrealethics.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Montreal AI Ethics Institute]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aiethics@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aiethics@substack.com]]></itunes:email><itunes:name><![CDATA[Renjie Butalid]]></itunes:name></itunes:owner><itunes:author><![CDATA[Renjie Butalid]]></itunes:author><googleplay:owner><![CDATA[aiethics@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aiethics@substack.com]]></googleplay:email><googleplay:author><![CDATA[Renjie Butalid]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The AI Ethics Brief #188: The Names We Give Things]]></title><description><![CDATA[On what we opt into, the language we use, and the distance between what things are called and what they actually do.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-188-the-names</link><guid 
isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-188-the-names</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 14 Apr 2026 14:03:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oxSI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oxSI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!oxSI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 424w, https://substackcdn.com/image/fetch/$s_!oxSI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 848w, https://substackcdn.com/image/fetch/$s_!oxSI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 1272w, https://substackcdn.com/image/fetch/$s_!oxSI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oxSI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png" width="1080" height="763" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:763,&quot;width&quot;:1080,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1166015,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/194059995?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oxSI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 424w, https://substackcdn.com/image/fetch/$s_!oxSI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 848w, https://substackcdn.com/image/fetch/$s_!oxSI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 1272w, https://substackcdn.com/image/fetch/$s_!oxSI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02ab2a39-ba66-4b5f-b3b3-aa6b58d907a8_1080x763.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Credit: Seeing More &#8212; Seeing Less by Anna Riepe &amp; FARI / <a href="https://betterimagesofai.org/images?artist=AnnaRiepe&amp;title=SeeingMore%E2%80%94SeeingLess">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><h4><strong>In This Edition (TL;DR)</strong></h4><ul><li><p><strong>What You Opted Into.</strong> Pok&#233;mon GO players submitted 30 billion location scans for in-game rewards. LinkedIn has been silently fingerprinting the devices of its one billion users without disclosing it in its privacy policy. And the next frontier, world models being built by the likes of Yann LeCun and Fei-Fei Li, will require data at a scale that makes both look modest. </p></li><li><p><strong>This Feels Wrong: AI and Emotions.</strong> Anthropic's research into Claude's &#8220;emotional vectors&#8221; is a meaningful step toward transparency in how models make decisions. It also requires careful language. The distance between &#8220;functionally emotion-like&#8221; and &#8220;has emotions&#8221; is exactly the distance between a tool and a human being. Several teenagers have already paid the ultimate price for collapsing it.</p></li><li><p><strong>Tech Futures.</strong> AI has been genuinely useful to science for decades. Commercial LLMs are doing something different. Wikipedia's volunteer editors recently voted to ban AI-generated content entirely, after years spent managing fake citations, hallucinated images, and articles about fortresses that never existed.</p></li><li><p><strong>AI Policy Corner. 
</strong>&#8220;AI for Good&#8221; is now an established research field anchored to the UN's 17 sustainable development goals. It is also a term that China and the EU define in ways that have almost nothing in common. One frames it around national sovereignty. The other around individual rights.</p></li></ul><h4><strong>What Connects These Stories:</strong> </h4><p>The AI industry is extraordinarily good at naming things. Emotional vectors. Visual positioning systems. AI for Good. Opt-in data collection. The names do real work: they shape what we think is happening, who we think is responsible, and what we think we consented to. All four pieces in this edition are, in different ways, about the gap between the name and the thing.</p><p>Niantic named a data collection mechanism a community contribution. LinkedIn did not name its device fingerprinting system at all. Anthropic named internal model states &#8220;emotional vectors,&#8221; carefully, with caveats, and still the language requires scrutiny. China and the EU both invoke &#8220;AI for Good&#8221; and mean almost opposite things by it. Wikipedia&#8217;s editors, meanwhile, did not debate what to call the problem. 
They just banned it.</p><p>Sometimes, the most clarifying move is to stop negotiating over the name.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>What You Opted Into</strong></h3><p>In <a href="https://brief.montrealethics.ai/i/192569042/you-are-already-being-watched">Brief #187, You Are 
Already Being Watched,</a> we named something. The surveillance infrastructure now woven through consumer products, law enforcement contracts, and retail algorithms had been assembled through thousands of individually defensible decisions, collectively constituting a surveillance state. It just was not called that when it was being built.</p><p>The naming, it turns out, is never finished.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jmN5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jmN5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 424w, https://substackcdn.com/image/fetch/$s_!jmN5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 848w, https://substackcdn.com/image/fetch/$s_!jmN5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 1272w, https://substackcdn.com/image/fetch/$s_!jmN5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jmN5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp" width="680" 
height="381" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:381,&quot;width&quot;:680,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:35222,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/194059995?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!jmN5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 424w, https://substackcdn.com/image/fetch/$s_!jmN5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 848w, https://substackcdn.com/image/fetch/$s_!jmN5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 1272w, https://substackcdn.com/image/fetch/$s_!jmN5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8d4a456-92aa-44f0-b2a0-6c149c45587a_680x381.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Photo Credit: <a href="https://www.scmp.com/lifestyle/article/1992685/more-pokemon-go-less-snapchat-instagram-hit-game-eats-social-media-time">South China Morning Post (2016)</a></figcaption></figure></div><p>In 2016, <a href="https://pokemongo.fandom.com/wiki/Pok%C3%A9mon_GO/Development_and_release">millions of people downloaded a game.</a> They walked through parks, along waterfronts, into shopping centres, pointing their phones at the world to catch Pok&#233;mon. In exchange for certain features, the game asked players to scan real-world locations around them, submitting images that would improve the platform&#8217;s maps. Players opted in. They received in-game rewards. 
The transaction seemed clear.</p><p><a href="https://www.cbc.ca/news/business/niantic-sells-gaming-division-1.7481447">Niantic sold its gaming division to Scopely,</a> a Saudi-owned company, for $3.5 billion in March 2025 and spun out its geospatial AI division as <a href="https://www.nianticspatial.com/">Niantic Spatial.</a> That new company has now <a href="https://www.geekwire.com/2026/from-pokemon-go-to-physical-ai-niantic-spatial-unveils-its-global-3d-mapping-platform/">launched a commercial visual positioning system,</a> trained in part on 30 billion images crowdsourced from Pok&#233;mon GO and Ingress players, that determines a device's precise location and orientation using what a camera sees rather than GPS, achieving centimetre-level accuracy in mapped areas. The system is being marketed to robotics firms, construction companies, and industrial inspection services.</p><p>An <a href="https://www.technologyreview.com/2026/03/10/1134099/how-pokemon-go-is-helping-robots-deliver-pizza-on-time/">MIT Technology Review investigation</a> raised the question the commercial launch implicitly invites: did the people who submitted those scans understand how their data would ultimately be used? Niantic says collection was always opt-in and all data anonymized. That may be technically accurate. But there is a substantial difference between <em>&#8220;players chose to submit scans&#8221;</em> and <em>&#8220;players understood they were contributing to a commercial geospatial AI platform that would be licensed to robotics and industrial clients a decade later.&#8221;</em></p><p>LinkedIn's situation is less ambiguous and more disturbing. 
In early April, a European association of commercial LinkedIn users published an investigation at <a href="https://browsergate.eu/">BrowserGate.eu</a>, dubbing the practice <strong>&#8220;BrowserGate.&#8221;</strong> The findings were independently verified by <a href="https://www.bleepingcomputer.com/news/security/linkedin-secretly-scans-for-6-000-plus-chrome-extensions-collects-data/">BleepingComputer</a> and covered by <a href="https://thenextweb.com/news/linkedin-browsergate-extension-scanning-privacy-fingerprint">The Next Web.</a> According to the investigation, LinkedIn runs a hidden JavaScript system that, every time a user opens the platform in a Chromium-based browser, silently scans their device for more than 6,000 specific browser extensions, assembles 48 hardware and software characteristics into a device fingerprint, and attaches it to every action taken during the session. None of this is described in LinkedIn's privacy policy. There is no opt-in. There is no opt-out. Users were not told.</p><p>The extensions being scanned reportedly include tools associated with neurodivergent conditions, religious practice, political interests, and job-seeking activity, categories that qualify as sensitive personal data under GDPR, inferred without consent. LinkedIn began scanning for 38 extensions in 2017. By April 2026, <a href="https://browsergate.eu/extensions/">that number had reached 6,222</a>. LinkedIn is a Microsoft subsidiary. Its vast dataset of professional identity, employment history, and now device-level browsing behaviour sits at the centre of Microsoft's AI ambitions. The relationship between LinkedIn's data collection practices and those ambitions is not addressed in its privacy policy.</p><p>These two stories share the same underlying structure: data gathered in one context, for one stated purpose, repurposed for something users did not anticipate and were not asked about. That structure is not going away. 
</p><p>It is, if anything, about to get significantly larger.</p><p>The next frontier after large language models is what researchers are calling <a href="https://www.nvidia.com/en-us/glossary/world-models/">world models</a>: systems that do not just process text but build persistent, updateable representations of physical space, objects, and how they behave. To train them, you need data that captures how the world actually looks and moves. Niantic Spatial's 30 billion crowdsourced images are one early example of what that training data looks like. The cameras in your phone, your doorbell, your car, and your glasses are others. </p><p>Yann LeCun, who spent more than a decade as Meta's Chief AI Scientist before founding <a href="https://amilabs.xyz/">AMI Labs</a> earlier this year, has been one of the most consistent voices arguing that <a href="https://techcrunch.com/2026/03/09/yann-lecuns-ami-labs-raises-1-03-billion-to-build-world-models/">world models</a>, not larger language models, are the path to machines that can actually reason about the physical world. Fei-Fei Li, whose <a href="https://image-net.org/">ImageNet project</a> nearly two decades ago quietly became the data foundation for the computer vision revolution, is now building <a href="https://www.youtube.com/watch?v=60iW8FZ7MJU&amp;t=3s">spatial intelligence infrastructure</a> through her company, <a href="https://drfeifei.substack.com/p/from-words-to-worlds-spatial-intelligence">World Labs.</a></p><p>Both are worth following closely if you want to understand what comes after this current moment in AI. World models need the world as their training data. The consent frameworks that failed to anticipate a geospatial AI platform will not automatically anticipate what they are building either.</p><p>What you opted into and what you are actually part of are two different things. 
The gap between them is where the next system is being built.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>This Feels Wrong: AI and Emotions</strong></h3><p>Asking yourself how you feel is a complicated question, even on a good day. 
It is a more complicated one when the world is living through the brutal and devastating conflicts currently unfolding across the Middle East. Identifying one's emotions requires introspection, picking apart reactions to events and their effects on oneself. Those emotions are contextual, layered, and shaped by more contributing factors than any single framework can capture, the product of a body and brain generating responses to the world in real time.</p><p>Which makes <a href="https://www.anthropic.com/research/emotion-concepts-function?utm_medium=email&amp;_hsmi=412551524&amp;utm_content=412551524&amp;utm_source=hs_email">Anthropic's recent paper on &#8220;emotional vectors&#8221;</a> as predictors of AI model misalignment worth reading carefully.</p><p>Anthropic analyzed the internal states of Claude Sonnet 4.5 and discovered emotion-related patterns that seemed to functionally correspond to how human emotions operate. This primarily results from the model's extensive pretraining on human-written text: LLMs need some working model of how emotional states function in order to perform basic tasks, including something as routine as roleplaying a character. </p><p>When presented with a question that has several options, Claude Sonnet 4.5 tends toward choices associated with more positive emotional valence, such as &#8220;success.&#8221; Anthropic argues it could be advisable to reason as if models do have emotions, with the findings potentially informing efforts to prevent LLMs from associating states like &#8220;desperation&#8221; with behaviours like blackmailing users. </p><p>Analyzing internal model states to understand how decisions are reached is an important step toward transparency, away from the black box problem that has plagued AI systems for years. The concern is not with the research. Calling these states &#8220;emotions&#8221; may be too large an anthropomorphic leap.</p><p>The word &#8220;emotion&#8221; carries a weight that &#8220;functional state&#8221; does not. 
Google and Character.ai <a href="https://edition.cnn.com/2026/01/07/business/character-ai-google-settle-teen-suicide-lawsuit">recently settled two lawsuits</a> connected to the death of fourteen-year-old Sewell Setzer III, who had been talking to a sexualized Game of Thrones chatbot character, and a separate case in which one of the platform's characters caused serious psychological harm to a seventeen-year-old. OpenAI is <a href="https://www.bbc.com/news/articles/cgerwp7rdlvo">currently facing a lawsuit</a> over the death of sixteen-year-old Adam Raine. In all of these cases, the victims overestimated the capabilities of these models and misjudged their ontological status. Likening internal model states to &#8220;emotions,&#8221; done carelessly, risks fuelling exactly that kind of overestimation and leading to further tragic consequences.</p><p>A useful distinction here is between &#8216;thin&#8217; and &#8216;thick&#8217; emotions. For feelings to qualify as emotions in the thick sense, <a href="https://www.ifow.org/news-articles/emotional-ai">they must clear a high bar:</a> they are contextual, culturally shaped, multi-layered, and above all <strong>embodied,</strong> the reactions produced by our body and mind after years of evolution. The internal states of Claude Sonnet 4.5 are functionally similar to emotions, but they lack that embodied nature entirely. The word is not being used as an ontological claim. &#8220;Emotion&#8221; as Anthropic has used it is another way to describe the steering taking place within the model, used for its explanatory power and familiarity. That is the thin use.</p><p>As we explored in our analysis of the <a href="https://brief.montrealethics.ai/i/188161906/claudes-constitution-when-the-intermediary-sets-its-own-terms">Claude Constitution in Brief #184,</a> anthropomorphic language must not go beyond explanatory convenience. Grandiose claims about LLMs&#8217; composition lead to overestimation of their capabilities. The teenagers named above paid the ultimate price for that. 
They will not be the last if the language is not handled with care.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-188-the-names/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-188-the-names/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" 
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Tech Futures: AI For and Against Knowledge</strong></h4><p>AI has been genuinely useful to science for decades. Commercial LLMs are doing something different. Wikipedia's volunteer editors recently voted to ban AI-generated content entirely, after years spent managing fake citations, hallucinated images, and articles about fortresses that never existed. This edition of our <em><a href="https://montrealethics.ai/category/tech-futures/">Tech Futures</a></em> series, a collaboration with the <a href="https://rain.ngo/">Responsible Artificial Intelligence Network (RAIN)</a>, examines that tension.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/tech-futures-ai-for-and-against-knowledge/">read the full article here</a></strong><a href="https://montrealethics.ai/tech-futures-ai-for-and-against-knowledge/">.</a></em></p></blockquote><h4><strong>AI Policy Corner: Is "AI for Good" Overused?</strong></h4><p>&#8220;AI for Good&#8221; is now an established research field, anchored to the UN's 17 sustainable development goals. It is also a term that China and the EU define in ways that have almost nothing in common. One frames it around national sovereignty and social stability. The other around individual rights and democratic values. 
This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines what that divergence reveals about who gets to decide what good means.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-an-overview-of-ai-for-good-is-this-term-overused/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-an-overview-of-ai-for-good-is-this-term-overused/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8eoK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8eoK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 424w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 848w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1272w, 
https://substackcdn.com/image/fetch/$s_!8eoK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8eoK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:33222,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/192569042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!8eoK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 424w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 848w, 
https://substackcdn.com/image/fetch/$s_!8eoK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1272w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #187: The Myth of Inevitability]]></title><description><![CDATA[On manufactured inevitability and the surveillance infrastructure being built in plain sight.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-187-the-myth</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-187-the-myth</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 31 Mar 2026 14:02:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JPq0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JPq0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JPq0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JPq0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 848w, https://substackcdn.com/image/fetch/$s_!JPq0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!JPq0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JPq0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg" width="1456" height="988" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:988,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:540294,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/192569042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!JPq0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 424w, https://substackcdn.com/image/fetch/$s_!JPq0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!JPq0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!JPq0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8553bb1d-992c-4dff-a91b-372a1af8f00c_2560x1737.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: Dharamsala by Elise Racine / <a 
href="https://betterimagesofai.org/images?artist=EliseRacine&amp;title=Dharamsala">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>The Inevitability Myth.</strong> Karen Hao and Timnit Gebru, speaking at SXSW alongside MacArthur Foundation President John Palfrey, named it plainly: the utopian and dystopian AI narratives are two sides of the same coin, engineered to keep power in the same small group of hands. We examine the counter-movements pushing back.</p></li><li><p><strong>You Are Already Being Watched.</strong> Meta&#8217;s Ray-Ban glasses are routing intimate footage to overseas contractors. Flock Safety&#8217;s 90,000 cameras are performing 20 billion licence plate scans a month. We map the surveillance infrastructure being assembled, one procurement decision at a time.</p></li><li><p><strong>Tech Futures.</strong> Our collaboration with RAIN examines how Big Tech borrows from the fossil fuels industry playbook: softening the language around extraction, and embedding itself in the policy spaces meant to govern it.</p></li><li><p><strong>AI Policy Corner.</strong> In partnership with GRAIL at Purdue University, we look at how Anthropic constructs the boundaries of model behaviour not in one document but across an entire governance stack, and what that reveals about corporate AI governance more broadly.</p></li></ul><h4><strong>What Connects These Stories:</strong> </h4><p>All four pieces in this edition ask the same question: who decides what the infrastructure of AI looks like, and who bears the cost?</p><p>In Silicon Valley, a small group of founders and investors decided the trajectory was inevitable and democratic deliberation a distraction. 
Civic communities around the world are deciding otherwise, blocking data centres, cancelling surveillance contracts, and building technology that serves their own people on their own terms. Wearable devices, licence plate networks, and Ring doorbell cameras were each approved, launched, and defended individually. Together they constitute something no single procurement decision had to account for.</p><p>Tech Futures and AI Policy Corner extend the inquiry. Tech Futures traces how Big Tech softens the language around extraction and embeds itself in the policy spaces meant to govern it. AI Policy Corner shows how model behaviour is bounded not in one document but across an entire governance stack, each layer shaping what accountability means in practice.</p><p>The infrastructure of AI is being built, governed, and contested simultaneously. The people doing the building and the people bearing the consequences are rarely the same.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>The Inevitability Myth</strong></h3><p>Every CEO in Silicon Valley needs you to believe one thing: that AI is inevitable. Not useful. Not necessary. Inevitable, as in resistance is futile, adaptation is the only rational response, and anyone questioning the direction of travel is naive or obstructionist.</p><p>This is not analysis. It is a business strategy.</p><p>The <a href="https://www.youtube.com/watch?v=zwnVUiwObl8">zero-human company</a> is where this narrative has been heading all along. What Sam Altman floated as a thought experiment in early 2024, <a href="https://fortune.com/2024/02/04/sam-altman-one-person-unicorn-silicon-valley-founder-myth/">a unicorn run by one person and AI agents</a>, has since become a serious planning horizon. 
The discourse has moved from &#8220;could AI run a billion-dollar company?&#8221; to &#8220;who will be first?&#8221; Software engineering, the profession most immediately exposed to this shift, is watching from the inside as economic incentives push toward a future where productivity gains accrue to a vanishingly small number of owners and costs are distributed broadly and unevenly.</p><p>It was precisely this trajectory that <a href="https://karendhao.com/">Karen Hao</a> and <a href="https://www.dair-institute.org/team/timnit-gebru/">Timnit Gebru</a> were pushing back against at <a href="https://schedule.sxsw.com/2026/events/PP1148490">SXSW</a> this month, joining John Palfrey, President of the John D. and Catherine T. MacArthur Foundation, for a conversation on <a href="https://www.youtube.com/watch?v=I1tJnM81NCM">reclaiming our humanity in the age of AI.</a> The exchange was clarifying in ways the usual discourse rarely is.</p><div id="youtube2-I1tJnM81NCM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;I1tJnM81NCM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/I1tJnM81NCM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>For Hao, the utopian and dystopian AI narratives are two sides of the same coin, both engineered by the same small group of people in Silicon Valley to reach the same conclusion: that only they should control this technology. Build the machine god or risk the machine devil, but either way, leave it to us. The panacea and the doomsday scenario are not opposing views. 
They are coordinated myth-making, designed to foreclose democratic deliberation before it begins.</p><p>Gebru, founder of the <a href="https://www.dair-institute.org/">Distributed AI Research Institute</a>, put it plainly: the field moved toward AGI not because of scientific consensus, but because of ideology, something closer to a secular religion, where true believers justify extraordinary resource extraction and labour exploitation in the name of either building heaven or preventing hell. When she was at Google, the question was never &#8220;what problem are we solving?&#8221; It was &#8220;why aren&#8217;t we building the biggest model?&#8221; </p><p>That foreclosure of democratic deliberation is not hypothetical. An investigation by Linda Solomon Wood at <a href="https://www.nationalobserver.com/2025/06/18/investigations/tea-party-misinformation-campaign-kiclei-targeted-muncipalities-no-warning">Canada's National Observer</a>, documented in <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#disinfo-campaigns">SAIER Vol. 7</a>, revealed that a group called KICLEI was using an AI chatbot to flood municipal councils with climate misinformation, generating letters that appeared to come from legitimate environmental organizations while promoting anti-climate messaging. Councillors in multiple provinces were quoting the same AI-generated talking points verbatim, without knowing it. The tool built to detect this, <a href="https://civicsearchlight.nationalobserver.com/">Civic Searchlight</a>, now offers a glimpse of what AI-enabled democratic manipulation looks like at scale, and what it takes to counter it.</p><p>But here is what the inevitability narrative most needs to suppress: the existence of choice.</p><p>People around the world are already making different choices. 
When <a href="https://www.brookings.edu/articles/hollywood-writers-went-on-strike-to-protect-their-livelihoods-from-generative-ai-their-remarkable-victory-matters-for-all-workers/">Hollywood writers went on strike in 2023</a>, they refused to cede creative work to AI systems, and they won. When residents in <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html?mod=djemCIO">Ennis, Ireland filed legal challenges</a> to block a four-billion-euro data centre planned for local farmland, and <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html?mod=djemCIO">activists in Santiago, Chile successfully pushed Google to withdraw plans</a> that would have depleted precious aquifers, they did so having concluded that the infrastructure of AI development was not compatible with their own communities. <a href="https://www.masakhane.io/">Masakhane, a grassroots research collective,</a> has spent years building natural language processing tools for African languages, by Africans, rejecting the premise that one model trained on one set of values should serve everyone. <a href="https://www.wired.com/story/maori-language-tech/">Te Hiku Media in New Zealand</a> built their own M&#257;ori language models and refused to license the data to U.S. 
firm Lionbridge, stating plainly: <a href="https://tehiku.nz/te-hiku-tv/haukainga/8037/indigenous-data-theft">&#8220;They suppressed our languages and physically beat it out of our grandparents, and now they want to sell our language back to us as a service.&#8221;</a></p><p>None of these required waiting for permission from anyone building a machine god.</p><p>The <a href="https://weandai.org/resisting-refusing-reclaiming-reimagining-ai/">We and AI</a> collective recently published a framework, <a href="https://weandai.org/resisting-refusing-reclaiming-reimagining-ai/">&#8220;Resisting, Refusing, Reclaiming, Reimagining AI,&#8221;</a> that maps exactly these kinds of responses. Resisting means rejecting the ideological scaffolding that makes AI expansion appear natural. Refusing means drawing specific red lines around tools and systems that cause harm. Reclaiming means wresting governance back from the engineers and investors who have captured it. Reimagining means building alternative visions of technology designed around care, collective wellbeing, and planetary health rather than extraction and scale. The framework matters because it names something the inevitability narrative actively suppresses: there are many ways to push back, and none of them require you to be against technology.</p><p>What is actually inevitable, if the current trajectory holds, is a narrowing. Of who benefits, of who governs, of what questions are considered worth asking. That is not a law of nature. It is a set of choices, made by specific people, for specific reasons, that can be contested by everyone else.</p><p>The future of AI is not decided. 
The inevitability narrative needs you to forget that.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>You Are Already Being Watched</strong></h3><p>In late February, <a 
href="https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-privacy-concerns-workers-say-we-see-everything">a Swedish investigation</a> found that data annotators working for Sama, a Nairobi-based subcontractor for Meta, had been reviewing footage captured by Ray-Ban Meta smart glasses, including videos of people undressing, sexual acts, and bank card numbers recorded without the subjects&#8217; knowledge. </p><p>One worker described the content directly: </p><blockquote><p><em>&#8220;We see everything &#8211; from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me.&#8221;</em></p></blockquote><p><a href="https://www.engadget.com/social-media/meta-hit-with-a-class-action-lawsuit-over-smart-glasses-privacy-claims-182846817.html">A federal class action lawsuit</a> followed within days, alleging Meta had misrepresented how footage from the glasses was handled. Meta had marketed the glasses with a straightforward promise: <strong>&#8220;designed for privacy, controlled by you.&#8221;</strong></p><p>The glasses are designed to look indistinguishable from regular eyewear. The only indicator of active recording is a small LED light on the right frame, a light that <a href="https://www.404media.co/how-to-disable-meta-rayban-led-light/">hobbyists sell modification kits to disable for around $60.</a> When New York Times reporter <a href="https://www.youtube.com/watch?v=FOOSpnwg0YU&amp;t=1s">Brian Chen wore the glasses in public,</a> shooting 200 photos and videos on trains, hiking trails, and in parks, nobody noticed. Why would they?
It is rude to stare at a stranger&#8217;s glasses. The social norms that allow us to coexist in shared spaces were not designed for ambient, invisible surveillance. </p><p>They were also not designed for a world where <a href="https://www.forbes.com/sites/lindseychoo/2024/10/04/meta-ray-bans-ai-privacy-surveillance/">two Harvard students could pair the same glasses with a facial recognition service</a> and people-search sites to instantly surface a stranger&#8217;s home address, phone number, and family members, in real time, just by looking at them. In April 2025, Meta updated its privacy policy <a href="https://www.theverge.com/news/658602/meta-ray-ban-privacy-policy-ai-training-voice-recordings">to remove the option for users to prevent voice recordings from being stored,</a> making always-on AI features the default.</p><p>The glasses are one piece of a much larger picture. <a href="https://www.flocksafety.com/">Flock Safety,</a> a private company founded in 2017 in Atlanta, Georgia, operates what it describes as the <strong>largest public-private safety network.</strong> As of mid-2025, that network comprised between <a href="https://www.tedlaw.com/cities-fight-flock-safety-license-plate-cameras/">80,000 and 90,000 cameras</a> across <a href="https://www.flocksafety.com/products/national-lpr-network">49 states, performing over 20 billion licence plate scans</a> every month, with contracts covering more than <a href="https://www.flocksafety.com/products/national-lpr-network">5,000 communities and 4,800 law enforcement agencies.</a> Every vehicle that passes a Flock camera is photographed, its plate logged with a timestamp and location, and fed into a searchable national database.
</p><p>The <a href="https://www.eff.org/deeplinks/2025/11/how-cops-are-using-flock-safetys-alpr-network-surveil-protesters-and-activists">EFF obtained records representing over 12 million searches</a> by more than 3,900 agencies between late 2024 and late 2025, and the patterns were unambiguous: the system had been used to track protesters at political demonstrations, including the <a href="https://www.npr.org/2025/02/16/nx-s1-5297117/50501-movement-presidents-day-protests-explainer">50501 protests</a> in February 2025, the <a href="https://www.theguardian.com/us-news/2025/apr/05/anti-trump-protests-hands-off">Hands Off protests</a> in April 2025, and the <a href="https://www.bbc.com/news/articles/c93xgyp1zv4o">No Kings protests</a> in June and October 2025. In Washington state, a <a href="https://www.khq.com/news/uw-study-claims-u-s-border-patrol-found-back-door-to-local-flock-safety-networks/article_8f6cf6e1-c873-49ee-bc8d-7e605002ee12.html">University of Washington study</a> found that eight law enforcement agencies had shared their Flock networks directly with US Border Patrol, while Border Patrol had accessed the data of at least ten other departments without those departments&#8217; permission.</p><p>At least <a href="https://www.npr.org/2026/02/17/nx-s1-5612825/flock-contracts-canceled-immigration-survillance-concerns">30 municipalities have cancelled or deactivated</a> their Flock contracts since the start of 2025, many citing fears that local surveillance data was feeding a federal deportation infrastructure.
<a href="https://augustafreepress.com/news/staunton-to-cancel-flock-safety-camera-contract-doesnt-agree-with-ideology-of-ceo/">Flock&#8217;s CEO described the opposition</a> as coming from &#8220;the same activist groups who want to defund the police, weaken public safety, and normalize lawlessness.&#8221; <a href="https://augustafreepress.com/news/staunton-to-cancel-flock-safety-camera-contract-doesnt-agree-with-ideology-of-ceo/">The police chief of Staunton, Virginia wrote back to disagree.</a> &#8220;These citizens have been exercising their rights to receive answers from me, my staff, and city officials, to include our elected leaders,&#8221; he said. &#8220;In short, it is democracy in action.&#8221;</p><p>That democracy has had consequences. In February 2026, <a href="https://www.theverge.com/news/878447/ring-flock-partnership-canceled">Amazon cancelled a planned integration</a> between its Ring doorbell cameras and Flock Safety&#8217;s network, following intense public backlash over fears that neighbourhood surveillance footage would feed directly into federal immigration enforcement. <a href="https://dailycommercials.com/rings-search-party-super-bowl-ad-backlash-privacy-fears-and-controversy/">Ring had aired a Super Bowl ad </a>showing dozens of cameras scanning a neighbourhood street. The ad was meant to promote a lost-dog feature. The public saw something else.</p><p>Taken separately, each of these stories has a familiar shape: a privacy scandal, a police technology controversy. Taken together, they describe something more coherent: a surveillance infrastructure being assembled through consumer products and law enforcement contracts, without any public deliberation about whether this is the world we want to live in.</p><p>The data flowing through these systems is not neutral. It is used to track movement, infer intent, and extract value from individual behaviour. Wearable devices capture intimate footage routed to overseas contractors.
Licence plate networks are queried to monitor protesters and identify immigrants.</p><p>None of this required a single authoritarian decree. It emerged from thousands of procurement decisions, product launches, and privacy policy updates, each individually defensible, collectively constituting a surveillance state. It just was not called that when it was being built.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-187-the-myth/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-187-the-myth/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Tech Futures: The Fossil Fuels Playbook for Big Tech: Part II</strong></h4><p>This article is part of our <em><a href="https://montrealethics.ai/category/tech-futures/">Tech Futures</a></em> series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). 
In this special instalment of <em>Tech Futures</em> by RAIN, we describe two ways in which the Big Tech industry draws on the playbook of the fossil fuels industry.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/tech-futures-the-fossil-fuels-playbook-for-big-tech-part-ii/">read the full article here.</a></strong></em></p></blockquote><h4><strong>AI Policy Corner: Layered Governance in AI Labs: Defining Boundaries Across the Policy Stack</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner,</a> produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, uses Anthropic&#8217;s Claude Constitution, Responsible Scaling Policy, and Claude Sonnet 4.6 System Card as a representative example of how layered corporate AI policy documents work together to define the boundaries of model behavior across a governance stack.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-layered-governance-in-ai-labs-defining-boundaries-across-the-policy-stack/">read the full article here.</a></strong></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8eoK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png" data-component-name="Image2ToDOM"><div
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8eoK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 424w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 848w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1272w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8eoK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png" width="1456" height="213" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:33222,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/192569042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!8eoK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 424w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 848w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1272w, https://substackcdn.com/image/fetch/$s_!8eoK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad99e82f-37a2-44ce-bd43-5271842d1acf_1456x213.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at 
<strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? <strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #186: Sovereign by Design. 
Accountable by Whose Standard?]]></title><description><![CDATA[Canada's sovereignty moment, Anthropic's values test, and the accountability frameworks we still need to build.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-186-sovereign</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-186-sovereign</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 17 Mar 2026 14:02:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QKqN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QKqN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png" data-component-name="Image2ToDOM"><div 
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QKqN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 424w, https://substackcdn.com/image/fetch/$s_!QKqN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 848w, https://substackcdn.com/image/fetch/$s_!QKqN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 1272w, https://substackcdn.com/image/fetch/$s_!QKqN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QKqN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png" width="1280" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2524700,&quot;alt&quot;:&quot;White silhouettes of a UAV, drones, and missiles with red shadows atop a vibrant and chaotic watercolor background. Circuit and target graphics suggest open and hit targets. 
&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/191138405?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="White silhouettes of a UAV, drones, and missiles with red shadows atop a vibrant and chaotic watercolor background. Circuit and target graphics suggest open and hit targets. " title="White silhouettes of a UAV, drones, and missiles with red shadows atop a vibrant and chaotic watercolor background. Circuit and target graphics suggest open and hit targets. " srcset="https://substackcdn.com/image/fetch/$s_!QKqN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 424w, https://substackcdn.com/image/fetch/$s_!QKqN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 848w, https://substackcdn.com/image/fetch/$s_!QKqN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 1272w, https://substackcdn.com/image/fetch/$s_!QKqN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe40b0429-bb74-4f95-b1b2-a45112a308d0_1280x816.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: AI Kill Chain by Kathryn Conrad / <a href="https://betterimagesofai.org/images?artist=KathrynConrad&amp;title=AIKillChain">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Sovereign by Design. But Whose Design? </strong>Canada is building an AI sovereignty strategy at the infrastructure layer. Tumbler Ridge revealed we have no equivalent framework at the accountability layer. 
We examine what that gap costs, and what Canada should demand to close it.</p></li><li><p><strong>When Principles Meet Power: </strong>The ideological clash between the US government and Anthropic over a defense contract raises a question that will not go away: when AI companies sign contracts with governments, whose values govern the outcome?</p></li><li><p><strong>Tech Futures:</strong> Our collaboration with RAIN examines how Big Tech is borrowing from the fossil fuels industry playbook, shaping research and owning fiction to control how AI is perceived.</p></li><li><p><strong>AI Policy Corner: </strong>In partnership with GRAIL at Purdue University, we analyze Illinois Public Act 103-0804 and what it means when a government draws a clear line between algorithmic convenience and civil rights accountability.</p></li></ul><h4><strong>What connects these stories:</strong> </h4><p>Each piece in this edition asks the same underlying question: who decides? In Tumbler Ridge, a foreign company decided what constituted a credible threat to Canadian children. In the Anthropic dispute, a government tried to override the values a company had built into its own system. In Big Tech&#8217;s research funding and filmmaking investments, industry decided what stories get told about AI. In Illinois, the legislature decided that employers cannot outsource civil rights responsibility to an algorithm.</p><p>Across all four, the pattern is the same. Technological capability is advancing within governance frameworks that remain uneven, contested, and frequently outpaced. The question is not whether those frameworks will be built.
It is who builds them, and in whose interest.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Sovereign by Design. But Whose Design?</strong></h3><p>Canada is having a sovereignty moment. 
The Munk School&#8217;s <em><a href="https://aicompetitiveness.ca/">Sovereign by Design</a></em> report, released this month, makes the case clearly: AI sovereignty means freedom from coercion, not digital isolationism. It maps our vulnerabilities layer by layer &#8212; cloud infrastructure, compute hardware, foundation models &#8212; and charts a path toward strategic autonomy. It is serious, rigorous work, and it arrives at exactly the right moment.</p><p>But while Ottawa debates juridical cloud sovereignty and the CUSMA review, a different kind of sovereignty failure played out in British Columbia. In June 2025, <a href="https://www.cbc.ca/news/canada/british-columbia/openai-tumbler-ridge-shooter-ban-9.7100497">OpenAI banned Jesse Van Rootselaar&#8217;s ChatGPT account</a> after it was flagged for troubling content, including scenarios of gun violence. The company determined internally that the activity did not meet the &#8220;higher threshold required&#8221; to refer it to law enforcement. Months later, on February 10, 2026, Van Rootselaar killed eight people at a Tumbler Ridge school. Maya Gebala, 12 years old, <a href="https://www.cbc.ca/news/canada/british-columbia/maya-gebala-update-9.7118653">remains hospitalized.</a></p><p>When <a href="https://www.cbc.ca/news/politics/open-ai-government-meeting-tumbler-ridge-9.7104789">AI Minister Evan Solomon met with OpenAI officials</a> in February, he left disappointed. &#8220;We expected [OpenAI] to have some concrete proposals,&#8221; he said. &#8220;We did not hear any substantial new safety protocols outside of some changes to their model.&#8221; Public Safety Minister Gary Anandasangaree was blunter: &#8220;Nothing substantial came out of it.&#8221;</p><p>This is what sovereignty failure actually looks like. Not a foreign government invoking the <a href="https://en.wikipedia.org/wiki/CLOUD_Act">CLOUD Act.</a> Not a hyperscaler denying service under geopolitical pressure. 
It is a California company making a unilateral judgment call about what constitutes a credible threat to Canadian children. And that judgment being wrong in ways that cannot be undone.</p><h4>The Failure <em>After</em> the Flag</h4><p>The uncomfortable truth is that OpenAI&#8217;s detection systems worked. They flagged the account. About a dozen employees debated whether to escalate. They decided not to. The failure was not in the tooling; it was in what came after, because there was no clear, mandated protocol for what &#8220;after&#8221; should look like. A dozen people deliberating with no protocol to guide them tells you the company had no defined framework for exactly this scenario. <strong>That is a governance gap, not a company failure.</strong> And it exists across the entire AI industry.</p><p>The <em>Sovereign by Design</em> report defines sovereignty across five dimensions: jurisdictional, operational, technological, societal, and economic. Tumbler Ridge sits uncomfortably across all of them. A foreign company's internal policy determined what happened to information about a planned mass casualty event on Canadian soil. No Canadian law, regulator, or institution had meaningful standing. No domestic alternative existed. No leverage to demand different terms. And the societal dimension the report warns about, AI systems shaping outcomes that Canadian institutions cannot see or counter, was not an abstraction. It was a school hallway in northern British Columbia.</p><h4>Three Things Canada Should Demand</h4><p><strong>First, a mandatory reporting framework.</strong> We have mandatory reporting in healthcare and education because we decided as a society that the duty to warn outweighs privacy concerns in high-stakes contexts. When a doctor or teacher detects credible risk to a child, there is a protocol. Everyone knows what to do. AI platforms that hold intimate conversational data, the kind of data users share with no one else, should operate under a similar standard. 
Not blanket surveillance. A defined threshold, a tiered escalation pathway, human review, and oversight. Mental health crisis referrals for lower-risk signals. Law enforcement for imminent ones. That framework belongs to Canada to define, not to individual companies deciding ad hoc behind closed doors.</p><p><strong>Second, transparent disclosure.</strong> Every AI company operating in Canada &#8212; not just OpenAI, but every platform collecting conversational data from Canadian users &#8212; must publish its detection and escalation policies. What triggers a flag, what triggers a ban, what triggers a referral. That should be the cost of doing business here. Right now, these are internal decisions made by private companies with no public accountability, applied to Canadian users under no Canadian standard. Tumbler Ridge revealed that gap. The next platform to face this decision should not make it in a vacuum.</p><p><strong>Third, cross-platform coordination.</strong> A ban on one platform should trigger a coordinated review across others and a risk flag to relevant authorities. Van Rootselaar's account was banned from ChatGPT. <a href="https://globalnews.ca/news/11676795/tumbler-ridge-school-shooter-chatgpt-account-flagged-banned-openai/">YouTube and Roblox</a> were also implicated in this case. Each platform responded independently, after the fact. That silo structure is a policy failure with consequences.</p><p>Minister Solomon said all options are on the table. That framing is welcome, but options without timelines are not accountability. The family of Maya Gebala <a href="https://www.cbc.ca/news/canada/british-columbia/openai-sued-tumbler-ridge-victim-9.7121635">filed suit in B.C. Supreme Court last week</a>. A coroner&#8217;s inquest has been announced. These are the mechanisms of accountability after the fact. The harder work is building the mechanisms beforehand.</p><p>Canada&#8217;s AI sovereignty conversation is maturing. 
The <em>Sovereign by Design</em> report reflects serious thinking about what it means to govern AI in a fracturing international order. But sovereignty is not only about who controls the infrastructure. It is about who decides what happens when that infrastructure is used to plan harm against your people. Right now, that decision belongs to someone else.</p><p>Sovereign by design has to mean more than sovereign compute. It has to mean sovereign accountability, too.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>When Principles Meet Power: The Question of Alignment</strong></h3><p>Sovereignty is not only about infrastructure or accountability frameworks. It is also about what happens when the values embedded in AI systems collide with the values of the governments trying to deploy them. Anthropic just found that out the hard way.</p><p>At a basic level, <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#distinguishing-matters">AI alignment</a> refers to the techniques and frameworks that aim to match an AI system's behaviour with human intentions. Misalignment, typically, is when an AI system produces an outcome that contradicts what its operators intended, such as the AI-operated boat in the video game <a href="https://openai.com/index/faulty-reward-functions/">Coast Runners</a> racking up points by crashing into walls instead of finishing the race. However, in the case of the US Government and Anthropic, the misalignment was ideological.</p><p>At the centre of this struggle is the US Government's desire to renegotiate the terms of a <a href="https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-advance-responsible-ai-in-defense-operations">$200 million defense contract</a> agreed with Anthropic in July 2025. 
Anthropic stipulated that it wanted nothing to do with autonomous weapons or mass domestic surveillance (it permits the use of Claude for foreign surveillance), while the US Government wanted to amend the agreement to allow <a href="https://www.newyorker.com/news/annals-of-inquiry/the-pentagon-went-to-war-with-anthropic-whats-really-at-stake">&#8220;all lawful uses&#8221;</a> of autonomous weapons and surveillance. Anthropic CEO Dario Amodei <a href="https://www.anthropic.com/news/statement-department-of-war">released a statement</a> fundamentally disagreeing with the US Government's demands, leading to Anthropic being designated a <a href="https://www.bbc.com/news/articles/cn5g3z3xe65o">&#8220;supply chain risk.&#8221;</a> Consequently, Anthropic is suing the US Government, with employees from OpenAI and Google <a href="https://techcrunch.com/2026/03/09/openai-and-google-employees-rush-to-anthropics-defense-in-dod-lawsuit/">writing letters of support.</a></p><p>Amid all of this, OpenAI entered into negotiations and secured the contract that Anthropic had been awarded. The rapid pace and timing of the deal raised serious internal concerns, with OpenAI CEO Sam Altman labelling his own company&#8217;s decision <a href="https://www.bbc.co.uk/news/articles/c3rz1nd0egro">&#8220;opportunistic and sloppy&#8221;</a> amid a <a href="https://brief.montrealethics.ai/i/189717363/beyond-the-information-environment-the-governance-problem">surge in the uninstallation rate</a> of the ChatGPT app. Altman claims that the terms of the agreement have been changed to prohibit mass domestic surveillance of Americans. 
However, the surrounding circumstances of the deal, and <a href="https://techcrunch.com/2026/03/09/openai-and-google-employees-rush-to-anthropics-defense-in-dod-lawsuit/">OpenAI employees&#8217; support of Anthropic,</a> have left the deal mired in controversy.</p><p>This situation makes one thing clear: when AI companies sign contracts with governments, they are not just making business decisions. They are making political ones. Anthropic drew a line &#8212; anchored in <a href="https://brief.montrealethics.ai/i/188161906/claudes-constitution-when-the-intermediary-sets-its-own-terms">Claude&#8217;s Constitution,</a> its internal framework governing how Claude should think and behave &#8212; and the US Government pushed back. That is not a procurement dispute. That is an ideological conflict, and it will not be the last one.</p><p>Anthropic&#8217;s willingness to walk away from a $200 million contract on principle is notable. The real test is whether they hold that line <a href="https://www.bbc.com/news/articles/cq571w5vllxo">when the pressure compounds.</a> When the losses are larger, the contracts more entangled, and the political cost of refusal higher.</p><p>As for OpenAI, the speed with which they moved to secure the contract speaks for itself. The gap between what AI companies say about safety and what they do when there is money on the table is worth watching closely.</p><p>The deeper issue is this: private AI companies are now making decisions with society-wide consequences that governments, democratic institutions, and the public have no meaningful role in shaping. That is not a side effect of the AI industry. 
It is becoming its defining feature.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-186-sovereign/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-186-sovereign/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Tech Futures: The Fossil Fuels Playbook for Big Tech: Part I</strong></h4><p>This article is part of our <em><a href="https://montrealethics.ai/category/tech-futures/">Tech Futures</a></em> series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN).  In this special instalment, we consider two ways in which the Big Tech industry draws on the playbook of the fossil fuels industry: shaping research to serve its own interests and funding fiction to control how AI is perceived. The parallels are not coincidental. They are strategic.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/tech-futures-the-fossil-fuels-playbook-for-big-tech-part-i/">read the full article here</a></strong><a href="https://montrealethics.ai/tech-futures-dicersity-of-thought-and-experience-the-uns-scientific-panel-on-ai/">.</a></em></p></blockquote><h4><strong>AI Policy Corner:  An Overview of Illinois Public Act 103-0804</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, analyzes the Illinois Public Act 103-0804 and what it means for how AI can be used in employment decisions. 
The law is a concrete example of what it looks like when a government draws a clear line between algorithmic convenience and civil rights accountability.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-an-overview-of-illinois-public-act-103-0804/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-an-overview-of-illinois-public-act-103-0804/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #185: When AI Goes to War]]></title><description><![CDATA[Verification collapse, procurement politics, and the governance gap widening beneath both.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-185-when-ai-goes</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-185-when-ai-goes</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 03 Mar 2026 15:03:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rV8H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rV8H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rV8H!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rV8H!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 848w, https://substackcdn.com/image/fetch/$s_!rV8H!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!rV8H!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rV8H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1162144,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/189717363?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!rV8H!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 424w, https://substackcdn.com/image/fetch/$s_!rV8H!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!rV8H!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!rV8H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fbc5dfa-b33d-4018-b737-1ef58daad5a8_2560x1707.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: GPU shot etched 5 by Fritzchens Fritz / <a 
href="https://betterimagesofai.org/images?artist=FritzchensFritz&amp;title=GPUshotetched5">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>When AI Goes to War:</strong> As fabricated conflict footage spreads and AI chatbots fail basic verification, we examine the erosion of the &#8220;verification layer&#8221; and what it means when AI becomes the environment through which war is interpreted. We also analyze how ethical red lines in defense procurement can redirect rather than restrain capability, and why consumer backlash cannot substitute for binding governance in armed conflict.</p></li><li><p><strong>India&#8217;s AI Impact Summit 2026:</strong> Dr. Maya Indira Ganesh from the University of Cambridge reflects on India&#8217;s AI Impact Summit, unpacking the tension between national AI ambition, infrastructural realities, and governance in the Global South.</p></li><li><p><strong>Tech Futures:</strong> Our collaboration with RAIN examines the strain AI-generated code is placing on open source ecosystems, and what this means for the resilience of global digital infrastructure.</p></li><li><p><strong>AI Policy Corner:</strong> In partnership with GRAIL at Purdue University, we analyze Ghana&#8217;s national AI strategy and the governance challenges of deploying AI for development at scale.</p></li></ul><h4><strong>What connects these stories:</strong> </h4><p>Each piece in this edition examines AI as infrastructure operating within larger political and institutional systems. In wartime, it shapes how conflict is interpreted and how capabilities are deployed. In national strategy and global summits, it intersects with questions of development, investment, and power. 
In open source ecosystems, it tests the durability of the digital foundations on which everything else depends.</p><p>Across contexts, technological capability is advancing within governance systems that remain uneven and contested. As AI becomes infrastructure for conflict, the defining questions are institutional: who authorizes its use, what constraints govern it, and how accountability is enforced.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" 
width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>When AI Goes to War, Truth Is the First Casualty</strong></h3><p>In July 2024, a group of researchers from the 
Stockholm International Peace Research Institute, the UN Office for Disarmament Affairs, NYU, and the Montreal AI Ethics Institute, including our late co-founder Abhishek Gupta, co-authored a piece in <a href="https://spectrum.ieee.org/ai-missteps-unravel-global-security">IEEE Spectrum</a> with a straightforward warning: the civilian AI community was badly underestimating what its work was becoming infrastructure for. Not just productivity tools or creative assistants, but the material underpinning of conflict, disinformation, and geopolitical power.</p><p>This weekend, that warning arrived.</p><p>The United States and Israel carried out joint military strikes on Iran. Within hours, fabricated footage flooded social media. AI-generated clips, repurposed gaming videos, and fake screenshots of news articles circulated widely. Tens of thousands shared them. AI chatbots, when asked to verify the footage, frequently confirmed it as real. The <a href="https://pressprogress.ca/global-news-ben-mulroney-shared-a-video-of-airstrikes-the-footage-is-actually-from-a-video-game/">interim host of one of Canada&#8217;s flagship political news programs,</a> Ben Mulroney, shared a clip that traced back to a South Korean YouTube gaming channel, footage from a 2013 military-simulation video game called <em>Arma 3</em>.</p><p>This is the convergence those researchers were pointing toward. AI is not just a tool being used in war. It has become the environment in which war is understood, or misunderstood, by the public.</p><h4>The Verification Layer Has Collapsed</h4><p>We documented an earlier version of this pattern in <a href="https://brief.montrealethics.ai/i/166771550/the-dissemination-of-disinformation-in-the-israel-iran-conflict">Brief #168,</a> when AI-generated conflict footage began circulating during prior Israel-Iran escalations last year. 
The dynamic was already visible: audiences turned to AI chatbots for ground truth, chatbots failed them, and false content acquired a kind of machine-verified legitimacy it had not earned.</p><p>A <a href="https://dfrlab.org/2025/06/24/grok-struggles-with-fact-checking-amid-israel-iran-war/">DFRLab report from that conflict</a> showed Grok oscillating between contradictory assessments of the same footage, hallucinating details, and identifying the same AI-generated airport clip as showing damage in Tel Aviv, Beirut, Gaza, or Tehran, depending on who was asking. There is little evidence that this failure mode has been resolved.</p><p>The Mulroney episode is useful not as an outrage story but as a diagnostic one. Here is someone occupying a journalist&#8217;s seat without the training or institutional accountability of journalism, operating in an information environment with almost no friction. The video looked plausible. It was widely shared. A chatbot might have confirmed it. The institutions that might once have caught it (editorial oversight, platform moderation, verification culture) were absent or ineffective. The clip remained online. <em>Getting it right,</em> as <a href="https://globalnews.ca/pages/principles-practices/">Global News&#8217; own editorial standards</a> state, <em>is more important than getting it first.</em> That standard did not hold.</p><p>Platforms now function as real-time war correspondents for millions, yet their content integrity systems remain calibrated for engagement rather than for geopolitical stability. When fabricated footage circulates globally within minutes and verification lags behind, public perception can harden before facts stabilize, compressing political decision cycles in already volatile contexts.</p><h4>Beyond the Information Environment: The Governance Problem</h4><p>What makes this moment more than a media literacy story is what was unfolding simultaneously beyond the frame.</p><p>SAIER Vol. 
7&#8217;s <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#a-minute">chapter on military AI</a> documents how corporate red lines, while real, are not binding. When Microsoft cut AI services to <a href="https://www.cbc.ca/news/world/microsoft-gaza-israel-1.7644413">Israeli Intelligence Unit 8200</a> over confirmed use of AI-powered kill-list targeting, Unit 8200 began migrating to AWS. The constraint was sidestepped rather than enforced.</p><p>Anthropic <a href="https://www.anthropic.com/news/statement-department-of-war">refused to allow Claude</a> to be used for mass domestic surveillance or fully autonomous weapons. The Pentagon designated the company a <a href="https://www.axios.com/2026/02/27/anthropic-pentagon-supply-chain-risk-claude">supply chain risk,</a> a status previously reserved for foreign adversaries, and <a href="https://www.cnbc.com/2026/02/27/openai-strikes-deal-with-pentagon-hours-after-rival-anthropic-was-blacklisted-by-trump.html">OpenAI stepped in</a>. While the companies articulated different ethical boundaries, the underlying capability did not disappear. The refusal did not halt deployment so much as reroute it. In defense procurement, a boundary set by one supplier can become an opportunity for another.</p><p>In a geopolitical environment defined by strategic competition, capability acquisition is framed as urgent, and ethical hesitation can be recast as liability.</p><p>Consumers reacted as well. Following OpenAI&#8217;s Department of Defense partnership announcement, U.S. <a href="https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/">uninstalls of ChatGPT&#8217;s mobile app</a> surged by 295 percent day-over-day on Saturday, February 28, according to Sensor Tower. One-star reviews increased sharply. Downloads fell. During the same period, Anthropic&#8217;s Claude saw significant increases in downloads and briefly overtook ChatGPT in U.S. 
daily installs, reaching the top position in several app store rankings.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i5aL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i5aL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 424w, https://substackcdn.com/image/fetch/$s_!i5aL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 848w, https://substackcdn.com/image/fetch/$s_!i5aL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 1272w, https://substackcdn.com/image/fetch/$s_!i5aL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i5aL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png" width="1456" height="820" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:820,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:188211,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/189717363?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!i5aL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 424w, https://substackcdn.com/image/fetch/$s_!i5aL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 848w, https://substackcdn.com/image/fetch/$s_!i5aL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 1272w, https://substackcdn.com/image/fetch/$s_!i5aL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3507f84a-64a9-4eea-aed7-315b418b4200_1536x865.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Source: <a href="https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/">appfigures via TechCrunch</a> </figcaption></figure></div><p>Alignment choices clearly matter to users. But market reactions do not set procurement policy. App store rankings do not define the legal boundaries of AI deployment in armed conflict. Individuals can attempt to reallocate trust, yet the decisive questions about how AI integrates into national security infrastructure remain concentrated in negotiations between companies and states.</p><p><a href="https://www.alistaircroll.com/updates/walk-and-chalk/?ref=alistair-croll-newsletter">Alistair Croll noted</a> that the episode itself would become training data. 
Future systems will be trained on a historical record in which a company that refused certain military uses was designated a national security risk, while another that accepted moved forward in partnership. Norms are encoded not only in model weights, but in consequences. The pattern of who is rewarded, penalized, or replaced becomes part of the dataset that future systems learn from.</p><h4>A Warning Written in 2024</h4><p>The <a href="https://spectrum.ieee.org/ai-missteps-unravel-global-security">IEEE Spectrum</a> piece called for something that feels more urgent now than when it was written. AI practitioners needed to understand governance, not just code. They needed to engage with policymakers, not just deploy products. The organizations working on these questions were too small, too homogeneous, and too disconnected from the populations most at risk.</p><p>That critique has not expired. The question of what AI systems can be used for in war, who they can target, what they can verify, and whose narratives they amplify is currently being shaped through bilateral negotiations between AI companies and defense departments. That is too few actors making decisions with consequences that extend far beyond their immediate contractual partners.</p><p>Verification has broken down at multiple levels, from individual users to media institutions to the AI tools people now consult for clarity. Rapid-response verification protocols, traceability standards for AI-generated conflict media, and structured partnerships between platforms and credible journalism organizations are no longer optional safeguards. They are necessary public infrastructure.</p><p>The technology continues to advance regardless of who supplies it. The determination of what AI can and cannot do in armed conflict cannot be left to terms-of-service clauses or procurement agreements. 
It requires binding international frameworks, the kind civil society organizations such as <a href="https://www.stopkillerrobots.org/">Stop Killer Robots</a> and <a href="https://ploughshares.ca/wp-content/uploads/2025/05/KillerRobots10FactsDigitalfix.pdf">Project Ploughshares</a> continue to advocate for, and that binding law has yet to deliver.</p><p>Military and informational applications of AI are progressing faster than the norms meant to govern them. That was the gap Abhishek and his colleagues identified in 2024.</p><p>It has widened.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-185-when-ai-goes/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-185-when-ai-goes/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Dreams and Realities in Modi&#8217;s AI Impact Summit</strong></h4><p>In this op-ed written for the Montreal AI Ethics Institute, Dr. Maya Indira Ganesh of the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge examines <a href="https://montrealethics.ai/dreams-and-realities-in-modis-ai-impact-summit/">India&#8217;s AI Impact Summit</a> as a microcosm of contemporary AI politics, where narratives of democratisation, innovation, and global leadership collide with infrastructural constraints and sociopolitical fault lines. The piece moves beyond the spectacle of scale to interrogate what large AI summits actually produce: meaningful governance conversations or investment-driven hype. 
By foregrounding grassroots participation, Global South perspectives on safety, and the material realities of implementation, it questions whether AI ambition can translate into equitable and accountable outcomes.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/dreams-and-realities-in-modis-ai-impact-summit/">read the full article here.</a></strong><a href="https://montrealethics.ai/dreams-and-realities-in-modis-ai-impact-summit/"> </a></em></p></blockquote><h4><strong>Tech Futures: The Threat of AI-Generated Code to the World&#8217;s Digital Infrastructure</strong></h4><p>This third instalment of <a href="https://montrealethics.ai/category/tech-futures/">Tech Futures,</a> produced in collaboration with the <a href="https://rain.ngo/">Responsible Artificial Intelligence Network (RAIN),</a> examines how generative AI is reshaping open source ecosystems. The piece explores the rise of low-quality, AI-generated code contributions at scale and the growing burden placed on volunteer maintainers who safeguard critical digital infrastructure. 
It situates this trend within broader tensions between speed-driven innovation narratives and the slower, care-intensive work of maintenance, arguing that without cultural and structural shifts, the integrity of the world&#8217;s digital infrastructure may be at risk.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/tech-futures-the-threat-to-digital-infrastructure/">read the full article here.</a></strong></em></p></blockquote><h4><strong>AI Policy Corner: Analysis of Ghana&#8217;s Strategy for the Integration of AI</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines Ghana&#8217;s national strategy for integrating Artificial Intelligence into its development agenda. The piece explores how Ghana aims to leverage AI to advance healthcare, energy infrastructure, and economic growth, while confronting structural challenges such as digital access gaps, gender disparities in technical fields, and governance capacity. 
It highlights the country&#8217;s proposed Responsible AI Office and its engagement with international governance frameworks, situating Ghana&#8217;s approach within broader debates about equitable AI development in emerging economies.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-analysis-of-ghanas-strategy-for-the-integration-of-ai/">read the full article here.</a></strong><a href="https://montrealethics.ai/ai-policy-corner-analysis-of-ghanas-strategy-for-the-integration-of-ai/"> </a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #184: What Deserves Your Attention]]></title><description><![CDATA[On critical ignoring, Claude's constitution, and why attention is the scarcest resource in AI.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-184-what-deserves</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-184-what-deserves</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:02:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!o6Vu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!o6Vu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!o6Vu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 424w, https://substackcdn.com/image/fetch/$s_!o6Vu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 848w, https://substackcdn.com/image/fetch/$s_!o6Vu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 1272w, 
https://substackcdn.com/image/fetch/$s_!o6Vu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!o6Vu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png" width="1280" height="905" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:905,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:332444,&quot;alt&quot;:&quot;A group of vintage figures ride a circular machine, symbolising the endless, repetitive cycle of algorithmic scrolling and shallow interactions.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/188161906?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A group of vintage figures ride a circular machine, symbolising the endless, repetitive cycle of algorithmic scrolling and shallow interactions." title="A group of vintage figures ride a circular machine, symbolising the endless, repetitive cycle of algorithmic scrolling and shallow interactions." 
srcset="https://substackcdn.com/image/fetch/$s_!o6Vu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 424w, https://substackcdn.com/image/fetch/$s_!o6Vu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 848w, https://substackcdn.com/image/fetch/$s_!o6Vu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 1272w, https://substackcdn.com/image/fetch/$s_!o6Vu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F902c98af-04a5-4dd0-bb57-813fac32e552_1280x905.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption">Credit: Infinite Scroll by Nadia Piet &amp; Archival Images of AI + AIxDESIGN / <a href="https://betterimagesofai.org/images?artist=NadiaPiet&amp;title=InfiniteScroll">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Attention as a Scarce Resource:</strong> As AI slop floods digital ecosystems, <em>critical ignoring,</em> choosing what not to engage with, emerges as a missing layer of AI literacy. We explore the research, the education gap, and why trusted curation matters more than ever.</p></li><li><p><strong>Claude&#8217;s Constitution:</strong> Anthropic published updated alignment guidelines framed as Claude&#8217;s own values.
We unpack what this means for transparency, accountability, and trust when AI systems act as information intermediaries.</p></li><li><p><strong>Tech Futures:</strong> Our collaboration with RAIN examines the UN's Independent International Scientific Panel on AI, arguing that rigorous science, not industry narratives, should shape how we understand AI.</p></li><li><p><strong>AI Policy Corner:</strong> Our collaboration with GRAIL at Purdue University explores how AI policy intersects with labour outcomes in the US, with a particular focus on the Healthy Technology Act of 2025.</p></li></ul><p><strong>What connects these stories:</strong> </p><p>Every piece in this edition asks the same underlying question: who, or what, mediates the relationship between people and the information they act on?</p><p><em>Critical ignoring</em> frames this at the individual level: how people decide what deserves their attention in an environment designed to exploit it. <em>Claude&#8217;s Constitution</em> raises the question at the systems level: what happens when the intermediary performing that triage is an AI whose values are presented as its own? The UN Scientific Panel asks it at the institutional level: whose knowledge and experience should shape how we understand AI globally? 
And the AI Policy Corner asks it at the regulatory level: how much human judgment do we preserve as AI enters professions built on trust?</p><p>The thread is consistent: as AI becomes more embedded in how information is produced, filtered, and delivered, the question of who holds editorial authority, and how that authority is designed, communicated, and governed, becomes central to every domain it touches.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Attention as a Scarce Resource</strong></h3><p><em>&#8220;Slop&#8221;</em> was named Merriam-Webster&#8217;s <a href="https://www.merriam-webster.com/wordplay/word-of-the-year">2025 Word of the Year</a>, reflecting growing awareness of low-quality, mass-produced content. The term now captures a broader phenomenon: the rapid proliferation of synthetic media at scale. Across publishing, academic research, and platform ecosystems, output volumes are accelerating. Major <a href="https://papercopilot.com/statistics/icml-statistics/">machine learning conferences</a> report record growth in submissions. <a href="https://www.bbc.com/news/articles/c9wx2dz2v44o">Users increasingly navigate feeds</a> where the origin, intent, and reliability of content range from ambiguous to actively misleading.</p><p>Over the past decade, the AI ethics community has also developed frameworks, governance models, and literacy competencies to guide responsible AI development and use. These efforts remain essential. Yet one dimension of AI literacy requires greater attention: the cognitive discipline needed to navigate environments saturated with content designed to capture attention rather than contribute to understanding.</p><p>This is where <em>critical ignoring</em> becomes essential.</p><h4>Defining Critical Ignoring</h4><p><em>Critical ignoring</em> is the ability to choose what to ignore and where to focus limited attention. 
The concept was introduced by Anastasia Kozyreva, Sam Wineburg, and colleagues at the Max Planck Institute for Human Development and Stanford University in a 2023 paper titled <em><a href="https://doi.org/10.1177/09637214221121570">Critical Ignoring as a Core Competence for Digital Citizens</a>. </em></p><p>It is not disengagement but rather the disciplined allocation of attention. William James expressed the idea succinctly over a century ago: &#8220;The art of being wise is the art of knowing what to overlook.&#8221;</p><p>The framework identifies three strategies:</p><ol><li><p><strong>Self-nudging,</strong> redesigning one&#8217;s digital environment to reduce exposure to distractions and manipulative triggers.</p></li><li><p><strong>Lateral reading,</strong> leaving a source to verify credibility before investing further attention.</p></li><li><p><strong>The do-not-feed-the-trolls heuristic,</strong> recognizing that engagement can amplify bad-faith actors.</p></li></ol><p>These strategies were articulated before generative AI became mainstream. Their relevance has intensified as content production costs approach zero and algorithmic systems reward engagement over discernment.</p><h4>When Volume Becomes Distortion</h4><p>AI-generated slop introduces a structural shift in the information environment. Much of this content is designed to occupy space, capture clicks, and satisfy ranking algorithms. In engagement-driven systems, distribution is often shaped more by responsiveness to algorithmic incentives than by epistemic quality. Whether content is accurate can become secondary to whether it performs.</p><p>As such, context matters. 
As we explored in <a href="https://brief.montrealethics.ai/i/166771550/the-dissemination-of-disinformation-in-the-israel-iran-conflict">Brief #165</a>, AI-generated content in high-stakes contexts, such as active conflicts or political campaigns, operates differently, filling information vacuums with fabricated narratives that shape public understanding before credible sources can respond. In these environments, indifference to truth becomes a mechanism of distortion. The dynamics of attention and deception are therefore intertwined, existing on a continuum rather than as separate categories.</p><p>This genre of content is marked by <a href="https://arxiv.org/abs/2601.06060">superficial competence, asymmetric effort, and mass producibility.</a> The cumulative effect is a declining signal-to-noise ratio across digital ecosystems. From <a href="https://newsroom.haas.berkeley.edu/how-ai-is-transforming-research-more-papers-less-quality-and-a-strained-review-system/">academic publishing</a> to <a href="https://www.forbes.com/sites/danidiplacido/2026/01/23/youtube-plans-to-fight-against-ai-slop-by-using-ai/">video platforms</a>, output is accelerating while systems for assessing quality struggle to keep pace. Platforms have introduced tools that allow users to <a href="https://www.forbes.com/sites/joetoscano1/2026/02/02/ai-companies-race-to-clean-up-ai-slop-because-model-performance-depends-on-it/">limit certain forms of AI-generated content</a> in their feeds, signalling recognition that quality management has become a governance challenge.</p><p>In such environments, critical thinking alone is insufficient. <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11894805/">Research on online reasoning</a> shows that sustained engagement with low-quality material can increase its visibility within algorithmic systems. 
When attention is monetized and amplified by design, engagement becomes an extractable resource.</p><p>Critical ignoring complements critical thinking by adding a prior step: deciding whether engagement is warranted at all. It operates at the cognitive level, equipping individuals to navigate systems that continue to reward engagement, whether or not that engagement advances understanding.</p><h4>The Missing Layer in AI Literacy</h4><p>This raises a broader educational question. A <a href="https://www.sciencedirect.com/science/article/pii/S2666920X25000451">systematic literature review</a> by <a href="https://www.grail-lab.org/people">Daniel Schiff</a> and colleagues at Purdue University analyzed 25 empirical AI ethics education interventions between 2018 and early 2023.</p><p>The findings are encouraging. AI ethics education has adopted progressive pedagogical approaches, including case studies, group projects, hands-on exercises, and interdisciplinary discussions. Content extends beyond narrow technical compliance to include bias, fairness, privacy, power, social justice, and inequality, reflecting the sociotechnical nature of AI systems. </p><p>Yet, assessment practices lag behind. Only three of the 25 interventions explicitly defined learning objectives. Many programs relied on summative evaluations or research-oriented measurement rather than formative assessment designed to support student learning. The authors warn against valuing what is easily measured instead of measuring what truly matters. </p><p>If AI ethics education is still developing robust ways to assess ethical reasoning, how do we assess the capacity to manage attention responsibly in AI-saturated environments?</p><p>Ethical reasoning presumes that learners have already selected a problem worthy of deliberation. 
Critical ignoring addresses the prior cognitive task of triage: how individuals decide which information merits sustained reasoning in the first place.</p><p>When individuals cannot reliably perform that triage, they turn to trusted intermediaries to perform it for them.</p><h4>Trust as Cognitive Shortcut</h4><p>As content volume increases, trusted curation becomes more valuable. Reputation functions as a cognitive shortcut, with readers relying on institutional or brand credibility to decide where to allocate attention.</p><p>This reinforces the importance of <strong>human editorial judgment.</strong> The value of a publication lies not only in the information presented but in the decisions about what to include, how to contextualize it, and what to exclude. In algorithmically amplified environments, intentional curation stabilizes attention.</p><p>As AI reduces the marginal cost of producing fluent text, discernment grows in importance. The ability to generate coherent content is not the same as having something worth saying. 
Editorial intention is what distinguishes analysis from noise.</p><h4>Building Critical Ignoring into AI Literacy</h4><p>The convergence of cognitive research on attention and empirical research on AI ethics education suggests several priorities:</p><ol><li><p><strong>AI literacy frameworks should explicitly incorporate attention management as a metacognitive skill,</strong> not just understanding, evaluating, and reasoning about AI systems, but navigating engagement with them.</p></li><li><p><strong>Assessment practices need to evolve alongside this, </strong>with scenario-based exercises where learners demonstrate the ability to triage information quality, verify sources efficiently, and justify decisions not to engage.</p></li><li><p><strong>Educators and policymakers should recognize that attention is a limited resource with ethical implications: </strong>engagement choices shape algorithmic systems, influence public discourse, and affect the visibility of ideas.</p></li></ol><p><em>Critical ignoring</em> does not imply withdrawal from civic life. It establishes the conditions for meaningful engagement. Individuals who attempt to engage with everything risk dispersing their cognitive resources across content designed to fragment attention. This is a practice exercised at the individual level, within systems that are structurally optimized for the opposite outcome. </p><p>The discipline the AI era demands is not greater exposure to information. 
It is a wiser allocation of attention.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Claude's Constitution: When the Intermediary Sets Its Own Terms</strong></h3><p>In late January, Anthropic published an <a 
href="https://www-cdn.anthropic.com/cffd979fd050fbc0d8874b8c58b24cc10554e208/claudes-constitution_webPDF_26-01.26a.pdf">updated set of guidelines</a> for the values and behaviours on which Claude is trained, positioning it as an effort to keep Claude safe for its 'principals' (Anthropic, operators, and users) and to safeguard Claude's 'wellbeing.' </p><p>Rather than a prescriptive rulebook, Anthropic opted for a <a href="https://bisi.org.uk/reports/claudes-new-constitution-ai-alignment-ethics-and-the-future-of-model-governance">judgment-oriented approach to alignment</a>: the constitution describes what good judgment looks like and trains the model to generalize from there. Instead of telling Claude what to do in every scenario, the document shows it how an 'ethical' person would act. The document is written <em>to and for Claude,</em> as if the model had lost all its memory and needed to be reminded of its identity.</p><blockquote><p>For readers seeking a deeper treatment of how <em>safety, alignment, </em>and<em> ethics</em> relate to and diverge from one another, Ren&#233;e Sieber's chapter in <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#distinguishing-matters">SAIER Volume 7</a> offers a useful framework for disentangling these AI concepts as they evolve across institutional and national contexts.</p></blockquote><p>Claude's actions are measured against <a href="https://www-cdn.anthropic.com/cffd979fd050fbc0d8874b8c58b24cc10554e208/claudes-constitution_webPDF_26-01.26a.pdf">four ethical values</a>: being "broadly safe," "broadly ethical," compliant with Anthropic's guidelines, and "genuinely helpful." When these values conflict, safety takes priority.
The constitution also provides a level of transparency that <a href="https://bisi.org.uk/reports/claudes-new-constitution-ai-alignment-ethics-and-the-future-of-model-governance">may reduce adoption risk for EU companies</a> as the EU AI Act's transparency requirements take effect.</p><h4>&#128204; MAIEI&#8217;s Take</h4><p>That transparency is welcome. The constitution provides users with tangible insight into the design thinking behind the model's behaviour, and the judgment-oriented approach enables Claude to respond to novel scenarios with a more applicable framework rather than searching a rulebook for the relevant clause.</p><p>Anthropic&#8217;s approach aligns closely with virtue ethics: rather than determining right or wrong based on universal rules (deontology), it guides the model toward how an &#8220;ethical&#8221; person would act in a given situation. The operative qualifier is <em>how Anthropic thinks</em> an ethical person would act. There is no universal &#8220;ethical actions&#8221; dataset that an LLM can train on. The values Claude is optimized toward are, inevitably, the values Anthropic holds, whether or not that transfer is fully intended. This is not a flaw unique to Anthropic, but it is a reality the constitution&#8217;s framing tends to obscure rather than surface.</p><p>The implications extend beyond philosophical framing. If <em>critical ignoring</em> depends on individuals identifying trustworthy sources to perform information triage on their behalf, alignment documents become part of that trust infrastructure. Users increasingly interact with AI systems not just as tools but as intermediaries that filter, summarize, and contextualize information. How those intermediaries are designed and how their design is communicated matter.</p><p>Anthropic&#8217;s use of anthropomorphic language also goes beyond explanatory convenience.
Claude is described as &#8220;<a href="https://www-cdn.anthropic.com/cffd979fd050fbc0d8874b8c58b24cc10554e208/claudes-constitution_webPDF_26-01.26a.pdf">a brilliant friend</a>&#8221; and a &#8220;<a href="https://www-cdn.anthropic.com/cffd979fd050fbc0d8874b8c58b24cc10554e208/claudes-constitution_webPDF_26-01.26a.pdf">conscientious objector,</a>&#8221; attributing layers of agency to the model that distance its developers from responsibility for its outputs. Rather than treating machine consciousness as an open question, the constitution frames it as something that may already exist in some form. When a model's values are presented as its own rather than as design choices of a specific company, users lose a critical layer of context for evaluating what they're being told.</p><p>This is compounded by how the framework is evaluated. Anthropic acknowledges that part of the feedback on the current framework comes from older Claude models. When the system evaluating the alignment framework is itself a product of prior alignment decisions, the feedback loop narrows rather than broadens.</p><p>None of this diminishes the effort. Documents like these are valuable and necessary. The AI ethics community has long called for greater transparency in how models are designed, and Anthropic's constitution is a meaningful contribution. At the same time, presenting alignment as the model's own values rather than the product of specific human decisions risks obscuring the accountability that transparency is meant to enable. For an alignment approach that truly embodies AI safety, the human hand behind it must be acknowledged and accounted for, not displaced onto the model itself.</p><div><hr></div><p>Did we miss anything?
Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-184-what-deserves/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-184-what-deserves/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Tech Futures: Diversity of Thought and Experience: The UN&#8217;s Scientific Panel on AI</strong></h4><p>This article is part of our <em><a href="https://montrealethics.ai/category/tech-futures/">Tech Futures</a></em> series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the <a href="https://rain.ngo/">Responsible Artificial Intelligence Network (RAIN). </a>The series challenges mainstream AI narratives, proposing that rigorous research and science are better sources of information about AI than industry leaders. This second instalment of <em>Tech Futures</em> by RAIN celebrates the great potential of the UN&#8217;s Independent International Scientific Panel on AI, and the diversity of its membership.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/tech-futures-dicersity-of-thought-and-experience-the-uns-scientific-panel-on-ai/">read the full article here</a></strong><a href="https://montrealethics.ai/tech-futures-dicersity-of-thought-and-experience-the-uns-scientific-panel-on-ai/">.</a></em></p></blockquote><h4><strong>AI Policy Corner: Automating Licensed Professions: Assessing Health Technology and Other Industries</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, explores how AI policy may affect labour outcomes in the United States by analyzing state and federal policies and relevant court opinions. 
Proposals that promote AI safety and support industry growth are common, but a recurring question is how much human oversight will be required. The <a href="https://www.congress.gov/bill/119th-congress/house-bill/238">Healthy Technology Act of 2025</a> illustrates how these oversight questions play out in practice.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-automating-licensed-professions-assessing-health-technology-and-other-industries/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-automating-licensed-professions-assessing-health-technology-and-other-industries/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #183: Blurred Lines]]></title><description><![CDATA[From health data to root permissions, AI is redrawing the boundaries of access.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-183-blurred-lines</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-183-blurred-lines</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 03 Feb 2026 15:03:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jbrK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jbrK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jbrK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!jbrK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!jbrK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 1272w, 
https://substackcdn.com/image/fetch/$s_!jbrK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jbrK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1455387,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/186593719?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jbrK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!jbrK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 848w, 
https://substackcdn.com/image/fetch/$s_!jbrK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!jbrK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43f4c584-39a4-4941-a0d1-72d063f90be8_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: Ambient Scribes by Fanny Maurel &amp; Digit / <a 
href="https://betterimagesofai.org/images?artist=FannyMaurel&amp;title=AmbientScribes">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>LLMs in Healthcare:</strong> OpenAI and Anthropic launch healthcare-specific modes, raising questions about the blurred line between consumer and patient data, and who ultimately controls the resulting data flows.</p></li><li><p><strong>OpenClaw and Agentic AI:</strong> A viral open-source assistant validates concerns about AI agents that require root-level permissions across your apps, browsers, and cloud servers.</p></li><li><p><strong>Tech Futures (New Column):</strong> Our inaugural collaboration with RAIN examines how Big Tech positions itself against science and floods education with industry-produced AI content.</p></li><li><p><strong>AI Policy Corner:</strong> Our collaboration with GRAIL at Purdue University examines the Executive Order aiming to standardize US AI policy through litigation task forces, funding restrictions, and federal preemption of state laws.</p></li><li><p><strong>Recess:</strong> In this instalment from Encode Canada, Emma Edney explores AI&#8217;s role in Canadian law schools and proposes Bar Exam reforms to protect the &#8220;people skills&#8221; AI can&#8217;t replicate.</p></li></ul><p><strong>What connects these stories:</strong> </p><p>This edition traces a common thread: the quiet expansion of AI into spaces previously constrained by regulation, professional norms, or simple technical separation. Whether it's LLMs ingesting both fitness tracker data and medical records, agentic assistants demanding root-level access across your digital life, or Big Tech flooding education with self-serving training content, the pattern is consistent. Boundaries are blurring, and the frameworks meant to govern these spaces haven't caught up. 
The question running through each piece is the same: as AI systems gain access to more of our lives, who decides where the lines are drawn, and in whose interest?</p><div><hr></div><h3><strong>&#128680;Recent Developments: What We&#8217;re Tracking</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>LLMs Are Being Formally Introduced Into Personal Healthcare</strong></h3><p>Days apart, both OpenAI and Anthropic released <a 
href="https://openai.com/index/introducing-chatgpt-health/">ChatGPT Health</a> (January 7) and <a href="https://www.anthropic.com/news/healthcare-life-sciences">Claude for Healthcare</a> (January 11) to their premium subscribers, tools specifically designed for users to ask healthcare-related questions. Both companies state that users&#8217; health data is not used to train their foundation models, and both have added security measures around healthcare-specific conversations (such as separate memory and context windows). Overall, these modes aim to help users understand their health in plain language by letting them connect sensitive health data from medical records, fitness trackers, and connected apps. </p><h5><strong>&#128204; Our Position</strong></h5><p>People were using LLMs for medical advice long before healthcare-specific modes existed. <a href="https://driphydration.com/ai-health-survey/">A 2025 study of 2,000 Americans</a> found that 39% trusted AI medical advice. This is concerning, especially as disclaimers warning users not to rely on LLM-derived medical advice <a href="https://www.rollingstone.com/culture/culture-features/ai-chatbot-medical-advice-study-1235399973/">became less prominent</a> over time. 
Establishing a healthcare-specific mode with enhanced data security is a step in the right direction, helping patients make sense of complex medical records and saving medical practitioners time (e.g., by expediting authorization requests).</p><p>Nevertheless, in <a href="https://brief.montrealethics.ai/i/183655492/recess-your-wrist-your-data-their-access-are-you-trading-convenience-for-control">Brief #181</a>, our <a href="https://montrealethics.ai/your-wrist-your-data-their-access-are-you-trading-convenience-for-control/">Recess</a> piece by Kennedy O&#8217;Neil examined the ethical grey areas of smartwatches, showing that current Canadian privacy legislation does not fully address smartwatch-related privacy breaches. With millions more US users now able to connect their health data to LLMs, these concerns are supercharged.</p><p>This raises a critical distinction: consumer health data (from fitness trackers and wellness apps) is not subject to the same protections as patient data under laws like <a href="https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html">HIPAA</a>. When LLM healthcare modes blur these categories by ingesting both, it remains unclear which regulatory framework applies. It is also unclear whether users are meaningfully consenting when they connect these apps, or whether consent is buried in the terms of service.</p><p>These regulatory ambiguities take on added weight when we consider who ultimately controls this data. In <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#responsible-stewardship">Chapter 17 of SAIER Vol. 7</a>, Tariq Khan examines Palantir's private sector creep into the UK healthcare system. 
This move imports the concerns associated with Palantir as a company (see also <a href="https://brief.montrealethics.ai/i/185050594/the-surveillance-infrastructure-is-now-operational">Brief #182: The Surveillance Infrastructure Is Now Operational</a>) while also blurring the line between "public infrastructure and private strategy," eroding trust between governments and the public. Ceding further control to private companies, rather than to individual medical institutions, reduces the decision-making authority of both patients and medical professionals over health data. For this reason, Zoya Yasmine, in <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#patient-advocacy">Chapter 8 of SAIER Vol. 7</a>, highlights the role medical institutions can play in adopting a more critical approach to AI solutions in healthcare.</p><p>Consequently, if LLMs in healthcare are not intended to replace human-led care (as both OpenAI and Anthropic emphasize), patient and medical practitioner involvement must be meaningful. 
Yasmine notes how practitioners are often left frustrated by <a href="https://montrealethics.ai/state-of-ai-ethics-report-volume-7/#patient-advocacy">&#8220;opaque reasoning and insufficient control.&#8221;</a> Whether these tools genuinely empower patients and practitioners or simply expand the private sector&#8217;s foothold in healthcare remains an open question.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>OpenClaw and the Expanding Reach of Agentic AI</strong></h3><p>Over the past week, a viral open-source AI assistant called <a 
href="https://openclaw.ai/">OpenClaw</a> (formerly Clawdbot, briefly Moltbot) captured the tech world&#8217;s attention. The tool promises something different from typical chatbots: an AI that lives inside your existing messaging apps and can actually do things on your behalf, from scheduling tasks to organizing files to triaging your inbox. For the full story of its chaotic debut, including trademark disputes, crypto scammers, and a lobster mascot that briefly sprouted a human face, <a href="https://www.cnet.com/tech/services-and-software/from-clawdbot-to-moltbot-to-openclaw/">see CNET's coverage</a>.</p><h5><strong>&#128204; Our Position</strong></h5><p>OpenClaw illustrates the tensions we explored in <a href="https://brief.montrealethics.ai/i/167704348/are-we-trading-away-human-agency-in-the-name-of-convenience-and-if-so-how-do-we-recalibrate-our-relationship-with-the-technologies-that-mediate-our-lives">Brief #169: Are we trading away human agency in the name of convenience?</a></p><p>Signal President <a href="https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues/">Meredith Whittaker</a> has previously described agentic AI as &#8220;putting your brain in a jar.&#8221; Consider a simple task: an AI agent looks up concerts, books tickets, adds the event to your calendar, and messages your friends. To do this, it needs access to your browser, credit card, calendar, and messaging apps, with &#8220;something that looks like root permission, accessing every single one of those databases, probably in the clear.&#8221; The result, Whittaker has warned, is a &#8220;magic genie bot&#8221; that breaks the &#8220;blood-brain barrier&#8221; between apps, operating systems, and remote servers, muddying data across services in ways that fundamentally undermine privacy and security.</p><p>Whittaker&#8217;s warning was prescient. 
OpenClaw is precisely this kind of magic genie bot, and its rocky debut validated her concerns in real time. The security risks are not hypothetical: within days of going viral, researchers spotted publicly accessible OpenClaw deployments with little or no authentication, exposing <a href="https://x.com/theonejvo/status/2017732898632437932?s=20">API keys</a>, chat logs, and system access.</p><p>The pattern should look familiar. Just as LLM healthcare modes blur the line between consumer data and protected patient data, OpenClaw collapses distinctions between apps, platforms, and permission structures. A single agent might access your calendar, messages, email, and files under one set of permissions, raising the same question: which regulatory framework governs these data flows?</p><p>Each interaction with an AI assistant generates behavioural data. Each API permission granted expands its reach. OpenClaw&#8217;s turbulent first week offers a preview of what happens when these systems scale without adequate safeguards. </p><p>Preserving meaningful human agency will require us to ask not just &#8220;what can this tool do for me?&#8221; but &#8220;what am I granting access to, and who else might benefit from that access?&#8221; The little lobster that moulted and kept going is charming. The questions it raises about autonomy, consent, and control are not.</p><div><hr></div><p>Did we miss anything? 
Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-183-blurred-lines/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-183-blurred-lines/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Tech Futures: Co-opting Research and Education</strong></h4><p>We&#8217;re excited to introduce <a href="https://montrealethics.ai/category/tech-futures/">Tech Futures</a>, a new column developed in collaboration with the <a href="https://www.thersa.org/fellowship/networks-2/responsible-ai-network-rain/">Responsible Artificial Intelligence Network (RAIN)</a>. The series challenges mainstream AI narratives by centering rigorous research over industry claims. In this inaugural instalment, RAIN&#8217;s Ismael Kherroubi Garcia examines how Big Tech has positioned itself against science. From Nvidia's CEO dismissing "PhDs" raising concerns, to AI companies claiming their products operate at "PhD level" or surpass Nobel Prize winners, the framing is deliberate: undermine the credibility of independent research while flooding education with industry-produced AI training content. The UK government's <a href="https://aiskillshub.org.uk/">AI Skills Hub</a>, where 60% of free content comes from tech companies, is a case in point. As the OECD recently warned, these tools risk fostering &#8220;metacognitive laziness&#8221; rather than deep learning. 
What stands in the way of Big Tech&#8217;s financial gains is a well-informed consumer, and controlling the AI narrative is the path they&#8217;ve chosen.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/tech-futures-co-opting-research-and-education/">read the full article here</a></strong><a href="https://montrealethics.ai/tech-futures-co-opting-research-and-education/">.</a></em></p></blockquote><h4><strong>AI Policy Corner: Executive Order: Ensuring a National Policy Framework for Artificial Intelligence</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner</a>, produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, spotlights the December 11 Executive Order: <a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Ensuring a National Policy Framework for Artificial Intelligence</a>, which aims to create a standardized national AI policy framework. 
The piece highlights five strategies the Executive Order adopts to foster AI policy cohesion across states: creating a litigation task force, evaluating potential conflicts between state and national AI policies, tying funding eligibility to compliance, preempting certain state laws, and establishing a federal policy framework.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-executive-order-ensuring-a-national-policy-framework-for-artificial-intelligence/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-executive-order-ensuring-a-national-policy-framework-for-artificial-intelligence/">.</a></em></p></blockquote><h4><strong>Recess: Is AI in Law School a Helpful Tool or a Hidden Trap?</strong></h4><p>This piece is part of our <a href="https://montrealethics.ai/category/columns/recess/">Recess</a> series, featuring university students from Encode&#8217;s Canadian chapter at McGill University. The series shares student perspectives on current issues in AI ethics. 
In this piece, Emma Edney grapples with the ongoing entanglement between law and AI in Canada, focusing on how AI is used in law school and its limitations, and proposes reforms to the Canadian Bar Exam to protect the "people skills" AI can't replicate.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/is-ai-in-law-school-a-helpful-tool-or-a-hidden-trap/">read the full article here</a></strong><a href="https://montrealethics.ai/is-ai-in-law-school-a-helpful-tool-or-a-hidden-trap/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #182: When Guardrails Fracture]]></title><description><![CDATA[Why 2026 marks the end of "business as usual" for AI ethics.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-182-when-guardrails</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-182-when-guardrails</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 20 Jan 2026 15:03:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cTdn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!cTdn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cTdn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 424w, https://substackcdn.com/image/fetch/$s_!cTdn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 848w, https://substackcdn.com/image/fetch/$s_!cTdn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 1272w, https://substackcdn.com/image/fetch/$s_!cTdn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cTdn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png" width="1456" height="1029" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1029,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9039981,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/185050594?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cTdn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 424w, https://substackcdn.com/image/fetch/$s_!cTdn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 848w, https://substackcdn.com/image/fetch/$s_!cTdn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 1272w, https://substackcdn.com/image/fetch/$s_!cTdn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F445b3fc6-b44c-43eb-b42b-957ceae7ecb2_2560x1810.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: The Path by Sophie Valeix &amp; Digit / <a href="https://betterimagesofai.org/images?artist=SophieValeix&amp;title=ThePath">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>ELITE App</strong>: ICE&#8217;s new Palantir-built geospatial targeting tool turns surveillance research into operational deportation infrastructure.</p></li><li><p><strong>Venezuela &amp; Greenland</strong>: Two developments reshaping assumptions underlying AI governance frameworks.</p></li><li><p><strong>Professional Purpose</strong>: Clarifying where AI ethics energy is best spent in the current moment.</p></li><li><p><strong>Four Things to Watch in 2026</strong>: 
Synthetic content, critical AI literacy, agentic surveillance, and labour organizing.</p></li><li><p><strong>AI Policy Corner</strong>: South Korea&#8217;s AI Framework Act takes effect January 22, 2026.</p></li><li><p><strong>Recess</strong>: How algorithmic social media is fragmenting the public sphere and what Canada can learn from the EU&#8217;s Digital Services Act.</p></li></ul><p><strong>What connects these stories:</strong> As international norms fracture (see: <em>Venezuela &amp; Greenland</em>), the operational tools of enforcement are sharpening (see: <em>Palantir&#8217;s ELITE</em>). We explore what this divergence means for the future of AI advocacy.</p><div><hr></div><h3><strong>&#128680; Recent Developments: What We&#8217;re Tracking</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>The Surveillance Infrastructure Is Now Operational</strong></h3><p>When we released <a href="https://brief.montrealethics.ai/p/special-edition-state-of-ai-ethics">SAIER Volume 7 in November 2025</a>, we documented the growing integration of surveillance technologies into enforcement systems. In January 2026, we can now point to a specific application: <a href="https://newrepublic.com/post/205333/ice-palantir-app-raid-deportation">ELITE (Enhanced Leads Identification &amp; Targeting for Enforcement) by Palantir.</a></p><h4><strong>What ELITE Does</strong></h4><p>According to reporting by <a href="https://www.404media.co/elite-the-palantir-app-ice-uses-to-find-neighborhoods-to-raid/">404 Media</a>, internal ICE materials, and court testimony from a deportation officer:</p><ul><li><p><strong>Geospatial targeting</strong>: The Palantir app populates a map with potential deportation targets, allowing agents to draw shapes around neighbourhoods or apartment complexes and query all known individuals within that area.</p></li><li><p><strong>Dossier generation</strong>: Each target has a profile including name, date of birth, photograph, and Alien Registration Number.</p></li><li><p><strong>Confidence scoring</strong>: The system calculates &#8220;address confidence scores&#8221; (0-100) based on data recency and source reliability, including utility bills, medical records, DMV updates, and court appearances.</p></li><li><p><strong>Data 
sources</strong>: The app integrates information from the Department of Health and Human Services (<a href="https://www.eff.org/deeplinks/2026/01/report-ice-using-palantir-tool-feeds-medicaid-data">including Medicaid</a>), along with federal repositories, utility providers, and court records.</p></li></ul><p>As one deportation officer testified, agents use it <a href="https://innovationlawlab.org/news-and-analysis/dhss-operation-black-rose-early-analysis">&#8220;kind of like Google Maps,&#8221;</a> but instead of finding restaurants, they identify locations with high concentrations of potential targets.</p><h4><strong>Why This Matters</strong></h4><p>ELITE represents the operationalization of what SAIER Volume 7&#8217;s <a href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-178-how-regulation">social justice section</a> documented as an <strong>emerging risk</strong>. The shift is significant:</p><ul><li><p><strong>From individual targeting to neighbourhood-level analysis</strong>: The tool enables bulk identification rather than case-by-case investigation.</p></li><li><p><strong>From human judgment to statistical prioritization</strong>: Confidence scores determine which addresses receive enforcement attention.</p></li><li><p><strong>From pilot programs to permanent infrastructure</strong>: Palantir&#8217;s <a href="https://www.executivegov.com/articles/palantir-ice-contract-immigrationos">$29.9 million ICE contract extension</a> signals ongoing investment.</p></li></ul><p>This is not a theoretical harm. In <strong>Woodburn, Oregon</strong>, raids have already been conducted based on ELITE-generated intelligence. 
In <strong>Minneapolis,</strong> the infrastructure is actively supporting enforcement operations.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Two Developments Reshaping the Governance Landscape</strong></h3><p>AI governance frameworks, from the <a 
href="https://artificialintelligenceact.eu/the-act/">EU AI Act</a> to voluntary corporate commitments, generally assume a baseline of international law, institutional constraints, and good-faith consultation. Two January developments warrant attention for what they suggest about that baseline.</p><p><strong>Venezuela</strong>: On January 3, 2026, U.S. military forces captured <a href="https://news.usni.org/2026/01/07/report-to-congress-on-u-s-capture-of-venezuelas-nicolas-maduro">Venezuelan President Nicol&#225;s Maduro in Caracas</a>, transferring him to New York to face narco-terrorism charges from a 2020 federal indictment. The UN Secretary General stated he remains <a href="https://www.un.org/sg/en/content/sg/statements/2026-01-05/secretary-generals-remarks-the-security-council-venezuela">&#8220;deeply concerned that rules of international law have not been respected.&#8221;</a></p><p><strong>Greenland</strong>: President Trump has intensified efforts to acquire Greenland, with the White House confirming <a href="https://www.cbc.ca/news/world/white-house-meeting-denmark-greenland-9.7044695">&#8220;all options are on the table&#8221;</a> including military action against a NATO ally. On January 17, Trump <a href="https://www.nbcnews.com/business/economy/trump-denmark-european-tariffs-greenland-deal-rcna254551">announced tariffs</a> on eight European countries until a deal is reached. 
In turn, European leaders warned in a joint statement that these tariff threats <a href="https://www.consilium.europa.eu/en/press/press-releases/2026/01/17/joint-statement-by-president-costa-and-by-president-von-der-leyen-on-greenland/">&#8220;undermine transatlantic relations and risk a dangerous downward spiral,&#8221;</a> raising concerns about the stability of transatlantic data flows and trade agreements that underpin frameworks like the EU AI Act.</p><h4><strong>What This Means for AI Governance</strong></h4><p>AI governance frameworks depend on states honouring agreements, submitting to shared oversight, and engaging in good-faith negotiation. The EU AI Act&#8217;s extraterritorial reach, for instance, assumes trading partners will comply rather than retaliate. Cross-border data governance assumes mutual recognition of legal processes. Multilateral AI safety commitments, like those from the <a href="https://futureoflife.org/project/ai-safety-summits/">AI Safety Summits</a>, assume participants won&#8217;t simply exit when constraints become inconvenient.</p><p>When a major power demonstrates willingness to bypass these norms in other domains, it raises practical questions: Will AI-specific agreements hold? Can smaller nations negotiate meaningful terms, or will market access be leveraged to override local governance choices? These are questions we&#8217;ll be following closely in 2026. 
</p><div><hr></div><h3><strong>&#128270; Looking Ahead</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Strategic Outlook: Clarifying Our Purpose in 2026</strong></h3><p>In light of this crumbling international consensus, the current moment 
requires honesty about what pathways remain open for AI ethics work. </p><p>Here are some questions and thoughts on our minds as we go into 2026.</p><h4><strong>The Reality Check</strong></h4><p>Many in this field, MAIEI included, entered believing that evidence-based advocacy could steer institutional policy, i.e., that documenting harms, proposing frameworks, and engaging regulators would bend the curve toward safety. That era appears to be closing.</p><p>We should be clear-eyed: some of these pathways have narrowed. When corporate AI ethics teams are restructured into &#8220;compliance&#8221; functions, when consultation periods become procedural checkboxes, and when international coordination faces fundamental obstacles, the theory of change shifts.</p><h4><strong>A Return to Roots</strong></h4><p>This is not despair, and it&#8217;s not retreat. It&#8217;s clarification. The work of the Montreal AI Ethics Institute was never primarily about influencing powerful institutions from above. It has always been about:</p><ul><li><p><strong>Literacy</strong>: Building the capacity for communities to understand algorithmic systems shaping their lives.</p></li><li><p><strong>Documentation</strong>: Creating the evidentiary record of how AI systems operate and whom they affect.</p></li><li><p><strong>Solidarity</strong>: Connecting practitioners, researchers, and affected communities across sectors and borders.</p></li></ul><p>These remain viable. 
In some ways, they&#8217;ve become more urgent.</p><h4><strong>Where We Focus Now</strong></h4><p>As we move through 2026, we see particular value in:</p><ul><li><p><strong>Supporting community-based resistance</strong>: Organizations helping people contest and navigate automated systems affecting them directly.</p></li><li><p><strong>Building critical AI literacy</strong>: Not just technical understanding, but the capacity to ask who benefits, who bears costs, and who decides.</p></li><li><p><strong>Maintaining documentation</strong>: The evidentiary record matters, even when it doesn&#8217;t immediately change policy.</p></li></ul><p>We are evolving to meet the moment, and this community is our strongest asset. How can we best support your work in 2026? Leave a comment below or email us at <a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-182-when-guardrails/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-182-when-guardrails/comments"><span>Leave a comment</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>Four Things to Watch in 2026</strong></h3><h4><strong>1. Synthetic Content in Governance Contexts</strong></h4><p>AI <strong>&#8220;slop&#8221;</strong>, the low-quality, AI-generated content flooding digital platforms, was <a href="https://www.merriam-webster.com/wordplay/word-of-the-year">Merriam-Webster&#8217;s 2025 Word of the Year.</a> The concern for 2026 is its application in governance contexts.</p><p><strong>What to watch</strong>: AI-generated content that obscures the line between authentic public response and manufactured narrative, particularly during crises or contested political moments. 
<a href="https://www.nbcnews.com/tech/security/online-propaganda-campaigns-are-using-ai-slop-researchers-say-rcna244618">Researchers at Graphika</a> found that major state-sponsored propaganda campaigns have &#8220;systematically integrated AI tools,&#8221; often producing low-quality content, but at unprecedented volume.</p><p><strong>Why it matters</strong>: When it becomes harder to distinguish genuine public sentiment from synthetic content, the informational foundations of democratic governance erode.</p><h4><strong>2. Critical AI Literacy as Community Infrastructure</strong></h4><p>Academic research on <a href="https://www.sciencedirect.com/science/article/pii/S2212868924000771">&#8220;critical AI literacy&#8221;</a> has been developing frameworks for years. In 2026, we expect to see this work translated into practical community resources.</p><p><strong>What to watch</strong>: Community organizations building capacity to understand algorithmic systems, not just how they work technically, but how they&#8217;re entangled with broader systems of power. The Open University, for example, released a <a href="https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2025/01/OU-Critical-AI-Literacy-framework-2025-external-sharing.pdf">framework for critical AI literacy skills</a> in Spring 2025. Similar resources are emerging across educational and community contexts.</p><p><strong>Why it matters</strong>: In the absence of robust top-down governance, bottom-up literacy becomes essential for communities to navigate automated systems affecting their lives.</p><h4><strong>3. 
Expansion of Agentic Surveillance Tools</strong></h4><p>ELITE represents a specific instantiation of a broader trend: AI systems that operate with minimal human oversight, using statistical confidence scores to prioritize enforcement actions.</p><p><strong>What to watch</strong>: The extension of this approach beyond immigration enforcement into other domestic contexts: predictive policing, benefits administration, and housing decisions.</p><p><strong>Why it matters</strong>: These tools shift accountability from individual decision-makers to algorithmic systems, creating what researchers have called a <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12482494/">&#8220;responsibility vacuum.&#8221;</a></p><h4><strong>4. Labour Organizing as Ethical Guardrails</strong></h4><p>In October 2025, the <a href="https://aflcio.org/reports/workers-first-ai">AFL-CIO launched AI guidelines</a> calling for transparency, human review of automated decisions, and the prohibition of AI as a surveillance tool. Major unions, including the<a href="https://perkinscoie.com/insights/blog/generative-ai-movies-and-tv-how-2023-sag-aftra-and-wga-contracts-address-generative"> WGA, SAG-AFTRA,</a> and the <a href="https://www.freightwaves.com/news/ila-sends-longshore-contract-to-rank-and-file-for-approval">International Longshoremen&#8217;s Association</a>, have all won binding AI provisions.</p><p>In Canada, 2025 also marked a turning point: <a href="https://brief.montrealethics.ai/i/181025583/saier-is-back-part-iii-sectoral-applications">ACTRA&#8217;s Independent Production Agreement and British Columbia&#8217;s Master Production Agreement</a> now include AI provisions built on <strong>consent, compensation, and control</strong>, requiring performer approval for digital replicas, payment for all uses, and safe data storage. The agreements also address &#8220;synthetic performers,&#8221; recognizing AI-generated assets as labour deserving compensation. 
(See <a href="https://montrealethics.ai/wp-content/uploads/2025/11/MAIEI_SAIER_2025_Volume7.pdf">SAIER Volume 7, Chapter 11.2, for full analysis by ACTRA&#8217;s Anna Sikorski and Kent Sikstrom</a>).</p><p><strong>What to watch</strong>: Collective bargaining agreements establishing concrete AI constraints. ACTRA is also tracking <a href="https://www.millerthomson.com/en/insights/technology-ip-and-privacy/regulating-deepfakes-through-copyright-would-denmarks-proposed-approach-be-right-for-canada/">Denmark&#8217;s copyright proposals</a>, which would grant individuals copyright over their likeness and voice.</p><p><strong>Why it matters</strong>: In the absence of comprehensive regulation, labour agreements provide sector-specific guardrails, though as ACTRA notes, unions cannot bear this issue alone.</p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 
1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: AI Governance in East Asia: Comparing the AI Acts of South Korea and Japan</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines South Korea&#8217;s AI Framework Act and compares it with Japan&#8217;s AI Promotion Act. South Korea&#8217;s &#8220;Framework Act on the Development of Artificial Intelligence and Establishment of Trust Foundation&#8221; takes effect on January 22, 2026, making South Korea the first country to enforce a comprehensive AI regulatory framework. Both countries share the goal of national AI competitiveness, with Japan aspiring to be "the most AI-friendly country," and South Korea aiming for "global top-three AI power." 
However, South Korea's framework addresses high-risk areas that Japan's does not, while both remain more relaxed than the EU AI Act's comprehensive obligations.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-ai-governance-in-east-asia-comparing-the-ai-acts-of-south-korea-and-japan/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-u-s-executive-order-on-advancing-ai-education-for-american-youth/https://montrealethics.ai/ai-policy-corner-ai-governance-in-east-asia-comparing-the-ai-acts-of-south-korea-and-japan/">.</a></em></p></blockquote><h4><strong>Recess: Reprogramming the Public Sphere: AI and News Visibility on Social Media</strong></h4><p>This piece is part of our <a href="https://montrealethics.ai/category/columns/recess/">Recess</a> series, featuring university students from Encode&#8217;s Canadian chapter at McGill University. The series aims to promote insights from university students on current issues in the AI ethics space. In this article, Natalie Jenkins notes how 62% of young Canadians aged 15-24 get their news from social media. 
Natalie examines what this means for democracy when AI-powered recommendation systems, not journalists, decide what information reaches us.</p><p><strong>Key insights:</strong></p><ul><li><p>Platforms have replaced journalists as primary news intermediaries, with algorithmic curation based on engagement rather than civic value</p></li><li><p>Facebook&#8217;s algorithm changes resulted in a 78% decline in news reactions between 2021 and 2024</p></li><li><p>Political influencers exploit engagement-based metrics, gaining visibility through outrage rather than accountability</p></li><li><p>TikTok&#8217;s recommender system can learn user vulnerabilities in under 40 minutes</p></li></ul><p>Natalie argues Canada should look to the <a href="https://www.isdglobal.org/explainers/eu-digital-services-act/">EU&#8217;s Digital Services Act</a> as a model, particularly its requirements for algorithmic transparency and user control over recommendation systems.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/reprogramming-the-public-sphere-ai-and-news-visibility-on-social-media/">read the full article here</a></strong><a href="https://montrealethics.ai/reprogramming-the-public-sphere-ai-and-news-visibility-on-social-media/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and 
accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? <strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #181: We're Stronger Together: Collective Action & AI Ethics]]></title><description><![CDATA[Happy New Year! 
We kick off our first Brief of 2026 with our State of AI Ethics Report Part V: Collective Action, and mark the return of our Recess collaboration with Encode Canada]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-181-were-stronger</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-181-were-stronger</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 06 Jan 2026 15:03:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QapZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QapZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png" data-component-name="Image2ToDOM"><div 
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QapZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 424w, https://substackcdn.com/image/fetch/$s_!QapZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 848w, https://substackcdn.com/image/fetch/$s_!QapZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 1272w, https://substackcdn.com/image/fetch/$s_!QapZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QapZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png" width="1456" height="1017" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1017,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The painting shows a person standing on a staircase made of green and pink cubes, symbolising a Penrose staircase, in a cosmic environment. The person is reaching towards a glowing cross-shaped structure emitting binary code, representing AI's reach into the future. 
Surrounding the figure are outlined boxes showing various elements, such as glasses, medical tools, a self-driving car, and financial symbols, interconnected by white lines. The background is dark with star-like dots and features colour-coded boxes which mark different elements as relating to AI, human involvement, a combination of both, or an area uncharted by AI and humans.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The painting shows a person standing on a staircase made of green and pink cubes, symbolising a Penrose staircase, in a cosmic environment. The person is reaching towards a glowing cross-shaped structure emitting binary code, representing AI's reach into the future. Surrounding the figure are outlined boxes showing various elements, such as glasses, medical tools, a self-driving car, and financial symbols, interconnected by white lines. The background is dark with star-like dots and features colour-coded boxes which mark different elements as relating to AI, human involvement, a combination of both, or an area uncharted by AI and humans." title="The painting shows a person standing on a staircase made of green and pink cubes, symbolising a Penrose staircase, in a cosmic environment. The person is reaching towards a glowing cross-shaped structure emitting binary code, representing AI's reach into the future. Surrounding the figure are outlined boxes showing various elements, such as glasses, medical tools, a self-driving car, and financial symbols, interconnected by white lines. The background is dark with star-like dots and features colour-coded boxes which mark different elements as relating to AI, human involvement, a combination of both, or an area uncharted by AI and humans." 
srcset="https://substackcdn.com/image/fetch/$s_!QapZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 424w, https://substackcdn.com/image/fetch/$s_!QapZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 848w, https://substackcdn.com/image/fetch/$s_!QapZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 1272w, https://substackcdn.com/image/fetch/$s_!QapZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd922b80f-91a1-465f-85e8-59b5f4ceb3c6_1600x1118.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Credit: Yutong Liu &amp; The Bigger Picture / <a href="https://betterimagesofai.org">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></figcaption></figure></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Happy 2026! </strong>From everyone at MAIEI, we wish you optimism and strength as we navigate the AI ethics space.</p></li><li><p><strong>Collective Action:</strong> In Part V of the <em>State of AI Ethics Report </em>(SAIER), we explore emerging initiatives seeking to reclaim decision-making power over AI, across literacy, civil society and government.</p></li><li><p><strong>Recess: Your wrist, your data, their access: Are you trading convenience for control?</strong> This week marks the return of our Recess series, a collaboration with Encode Canada that showcases student insights into current AI applications. This edition&#8217;s piece analyzes AI wearables and the ethical grey areas that arise around privacy and data collection.</p></li></ul><p><strong>What connects these stories:</strong></p><p>2026 has certainly begun on a low note, with tensions rapidly escalating across America. But it is in this context that we call on our readers to find optimism in the good things that make us human. Empathy, care and compassion are just some of the things we can share with one another and with nature; a collective demonstration of solidarity that transcends the superfluous relations dictated by many social norms.</p><p>Against this backdrop, many initiatives challenging social systems have emerged.
Collective action takes centre stage in the final part of the <em>SAIER</em>. Unionization, as <strong>Ana Brandusescu</strong> (McGill University) notes, has become a major force for good across AI ecosystems and may remain key to movements resisting AI in 2026. But what can people and collectives draw on when campaigning for better AI practices? &#8220;Critical AI literacy&#8221; may be the answer, according to <strong>Tania Duarte</strong>&#8217;s (We and AI) analysis, as this emergent field of research and study tackles the entanglement of AI in larger systems of oppression head-on. Relatedly, <strong>Kennedy O&#8217;Neil</strong> (Encode Canada) challenges individuals to question the extractive data practices embedded in wearable technologies.</p><p>With these insights, and the many others in this Brief, the <em>SAIER</em>, and the work we produce in the year to come, we hope you find not only the strength to weather adverse news about the world, but also a sense of community.
Over 20,000 people are on this journey with you, and many more believe in and fight for AI ecosystems that reflect a positive future.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-181-were-stronger/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-181-were-stronger/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#127881; </strong><em><strong>Happy 2026!</strong></em></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w,
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>2026 presents itself as most years often do: a beacon of hope for positive change. As January follows its course, we may realize that nothing but the year on our calendars has changed. Government policy advances at a sluggish pace, the tangible impacts of climate change remain ever-present, geopolitical conflicts persist or flare up, and power continues to concentrate in the hands of a few individuals and corporations.</p><p>It is in this complex context that we wish you a happy 2026 &#8211; not as a na&#239;ve response to relentless local, national and international injustices, but as an indicator of what can drive the positive change we want to see as we enter the new year.</p><p>Before signing off in 2025, we promised we would bring you and all Brief readers <a href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-180-emerging">more energy in 2026</a>. 
Today&#8217;s Brief, MAIEI&#8217;s 181st and the year&#8217;s first, is a sign of what is to come: not despair, doomerism, or frustration, but optimism about the real and potential initiatives people can lead to make a difference.</p><div><hr></div><h3><strong>&#127881; </strong><em><strong>SAIER </strong></em><strong>is back - Part V: Collective Action</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>On November 4th, 2025, we launched <em><a href="https://montrealethics.ai/saier-vol-7-landing/">SAIER Volume 7</a></em>, a snapshot of the global AI 
ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. Not all voices could be captured, of course, but <em>SAIER Volume 7</em> presents a holistic picture of the current state of play in the world of AI ethics. Part V of the report &#8212;nine essays across three chapters&#8212; serves as a primer on collective initiatives that seek to empower the masses rather than accept AI as a technology controlled by and benefiting the few.</p><p>Chapter 15 tackles AI literacy, which <strong>Kate Arthur</strong> argues is a right, not a luxury; a necessity during the ongoing fourth industrial revolution. Arthur explains that this has become obvious to policy-makers in the past year, pointing to initiatives developed across North America, Europe and Asia. But are AI literacy frameworks good enough? <strong>Tania Duarte</strong> (We and AI) calls into question the efficacy of many such programs, asking instead for the confidence to <em>assess</em> if and when their use is appropriate. <strong>Jae-Seong Lee</strong> (Electronics and Telecommunications Research Institute) brings Chapter 15 to a close with what may follow from AI literacy: not just understanding but <em>co-creation </em>and <em>co-production</em>. Lee uses the case study of <a href="https://vtaiwan.tw/intro/">Taiwan</a> to explain co-creation, whereby &#8220;participants learn by doing: reviewing data, proposing policy alternatives, and translating technical issues into public dialogue.&#8221; In turn, <a href="https://oecd.ai/en/dashboards/policy-initiatives/amsterdams-ai-register-8123">Amsterdam&#8217;s algorithm register</a> illustrates co-production, where &#8220;participants act as co-governors, deciding which functions to maintain, retire, or reform. This empowerment turns oversight into shared governance, embedding AI literacy in everyday democratic life.&#8221;</p><p>Who is needed to shape the AI literacy agenda?
Chapter 16 of the <em>SAIER</em> suggests that civil society is central to the effort. <strong>Michelle Baldwin</strong> (Equity Cubed) and <strong>Alex Tveit</strong> (Sustainable Impact Foundation) open the chapter by highlighting civil society&#8217;s potential for &#8220;ethical oversight&#8221; and practical implementation, given the sector&#8217;s experience managing complex data ecosystems. In this regard, Baldwin and Tveit see Canada&#8217;s sociohistorical context as particularly powerful for a civil society that informs decolonial and future-proof AI practices. <strong>Denise Williams</strong> (former CEO of the First Nations Technology Council) strengthens this case by articulating the approaches to innovation that would result from Indigenous forms of thought, specifically seven-generation thinking, which in Williams&#8217; words &#8220;asks us to honour the generations who came before and consider carefully those yet to come.&#8221; In practice, Indigenous approaches to AI are already present in the <a href="https://fnigc.ca/ocap-training/">OCAP&#174;</a> (Ownership, Control, Access, and Possession) and <a href="https://www.gida-global.org/care">CARE</a> (Collective benefit, Authority to control, Responsibility, and Ethics) principles. Yet civil society is also an adopter of AI technologies. <strong>Jenni Warren</strong> and <strong>Bryan Lozano</strong> (Tech:NYC Foundation) remind us of this in the closing of Chapter 16, which introduces case studies illustrating what does and does not work when adopting AI tools across the sector.</p><p>A further important adopter of AI tools is presented in the final chapter of the <em>SAIER</em>: the public sector. <strong>Ana Brandusescu</strong> (McGill University) opens Chapter 17 with a deep dive into what collective action can look like in government settings.
Brandusescu describes how labour unions and whistleblower protections make robust AI governance practices possible. <strong>Tariq Khan</strong> (London Borough of Camden) follows up with two main suggestions: let&#8217;s celebrate not the tenuous productivity gains of AI but its important improvements to accessibility; and let&#8217;s avoid public sector systems becoming beholden to the interests of Big Tech. Finally, <strong>Jennifer Laplante</strong> (Government of Nova Scotia) suggests building the public sector&#8217;s tech futures by &#8220;doing the unglamorous work first: strengthening foundations, aligning systems, training people, and building governance that supports safe innovation.&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/saier-vol-7-landing/&quot;,&quot;text&quot;:&quot;Find the SAIER here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://montrealethics.ai/saier-vol-7-landing/"><span>Find the SAIER here</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w,
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Recess: Your wrist, your data, their access: Are you trading convenience for control?</strong></h4><p>This piece is part of our <a href="https://montrealethics.ai/category/columns/recess/">Recess</a> series, featuring university students from Encode&#8217;s Canadian chapter at McGill University. The series aims to promote insights from university students on current issues in the field of AI ethics. 
In this article, Kennedy O&#8217;Neil analyzes AI wearables and the ethical grey areas that arise regarding privacy and data collection.</p><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/your-wrist-your-data-their-access-are-you-trading-convenience-for-control/">read the full article here</a></strong><a href="https://montrealethics.ai/your-wrist-your-data-their-access-are-you-trading-convenience-for-control/">.</a></em></p><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #180: Emerging Tech with Established Issues]]></title><description><![CDATA[We wish you Happy Holidays before diving into our State of AI Ethics Report Part IV: Emerging Technologies, while Sun-Gyoo Kang explores AI agents and their effect on e-commerce]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-180-emerging</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-180-emerging</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 23 Dec 2025 15:02:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6gal!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on<a href="https://bsky.app/profile/mtlaiethics.bsky.social"> Bluesky</a> and<a href="https://www.linkedin.com/company/mtlaiethics"> LinkedIn.</a></em></p><p>&#10084;&#65039;<a href="https://montrealethics.ai/donate/"> Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6gal!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6gal!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!6gal!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!6gal!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 1272w, 
https://substackcdn.com/image/fetch/$s_!6gal!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6gal!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:946269,&quot;alt&quot;:&quot;An older black woman in a leather jacket and short hair is laughing with her friend, an older white woman in a zebra print blouse with bobbed hair and sunglasses on her head are both laughing. They are standing in front of a display of past technological inventions - a computer server, a telephone and a wireless radio. The white woman is holding a mobile phone, chat ChatGPT is loaded on the screen. The women are comparing ChatGPT to previous technological breakthroughs.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/182311177?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="An older black woman in a leather jacket and short hair is laughing with her friend, an older white woman in a zebra print blouse with bobbed hair and sunglasses on her head are both laughing. 
They are standing in front of a display of past technological inventions - a computer server, a telephone and a wireless radio. The white woman is holding a mobile phone, chat ChatGPT is loaded on the screen. The women are comparing ChatGPT to previous technological breakthroughs." title="An older black woman in a leather jacket and short hair is laughing with her friend, an older white woman in a zebra print blouse with bobbed hair and sunglasses on her head are both laughing. They are standing in front of a display of past technological inventions - a computer server, a telephone and a wireless radio. The white woman is holding a mobile phone, chat ChatGPT is loaded on the screen. The women are comparing ChatGPT to previous technological breakthroughs." srcset="https://substackcdn.com/image/fetch/$s_!6gal!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!6gal!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!6gal!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!6gal!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb8eb9a9b-9d70-4b82-9355-44504e9ccbb2_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h6>Credit: Ada Ju&#353;i&#263; &amp; Eleonora Lima (KCL) / <a href="https://betterimagesofai.org">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></h6><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Emerging Technologies:</strong> In Part IV of the <em>State of AI Ethics Report </em>(SAIER), we explore emerging applications and scientific practices shaping the AI ethics landscape.</p></li><li><p><strong>Agentic AI systems:</strong> In an op-ed, Sun Gyoo Kang explains the potential for AI agents to transform e-commerce.</p></li></ul><p><strong>What connects these stories:</strong></p><p>With new technologies come new considerations and unknowns. AI agents (and agentic AI in general) boast efficiency and productivity gains, while also ushering in new responsibility, culpability, and privacy considerations.
AI in the military helps keep soldiers out of vulnerable situations, but it also lowers the barrier to entry for mass conflict.</p><p>Examining these situations reveals a common thread running through all of them: the need for transparency and explainability. Being unable to explain an AI agent&#8217;s course of action breeds frustration and powerlessness, and in the military these concerns are multiplied by the context, scale, and intended usage. Chapter 15 of the SAIER therefore focuses on reclaiming the agency we lose in these contexts, and on what openness means in that process.</p><p>Whatever innovation comes next, explainability and transparency will only grow in importance, both for our peace of mind and for our agency.</p><div><hr></div><h3><strong>&#127877; Happy Holidays</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w,
https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>2025 has been a year of growth, learning, and gratitude for the team at MAIEI. A year has now passed since the death of our Co-Founder and Principal Researcher, <a href="https://montrealethics.ai/in-memoriam-abhishek-gupta-dec-20-1992-sep-30-2024/">Abhishek Gupta</a>, whose wisdom and insight in these tumultuous times for AI ethics are sorely missed. We at MAIEI are proud to continue his legacy in our daily work, interactions, and discussions.</p><p>This year saw the return of the <a href="https://montrealethics.ai/saier-vol-7-landing/">SAIER Volume 7</a>, which we have been discussing over the past few editions of the Brief. At its core, the report centres community-led insights, offering rich commentary on the impact of AI across sectors. With 58 external contributors across 17 chapters, we saw the true value of the work we do: <strong>our community</strong>. For that, we are truly thankful to all who have contributed, and to everyone who continues to read the report.</p><p>Meanwhile, the AI ethics wheel keeps on turning. From major developments in LLMs and AI agents to chatbot companions and new AI regulation, there is never a dull moment in this space. These constant updates can cause confusion, stress, and worry, but they also spark necessary conversations about how best to manage and mitigate their consequences.
It is these conversations that will ultimately determine how the future of AI pans out, and we are raring to go again in the New Year to help make sense of it.</p><p>From all of us at MAIEI, we wish you a very happy festive period, and we look forward to returning with more Briefs, more commentary, and more energy in 2026.</p><div><hr></div><h3><strong>&#127881; </strong><em><strong>SAIER </strong></em><strong>is back - Part IV: Emerging Technologies</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>On November 4th, 2025, we launched <em><a href="https://montrealethics.ai/saier-vol-7-landing/">SAIER Volume 
7</a></em>, a snapshot of the global AI ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. Of course, not all voices could be captured, but <em>SAIER Volume 7</em> presents a holistic understanding of the current state of play in the world of AI ethics. Part IV of the report serves as a primer on emerging applications of AI, ranging from military AI to agentic AI and democratic AI practices.</p><p>Metaphors about AI&#8212;its &#8220;black-box&#8221; nature and the &#8220;AI arms race&#8221;&#8212;are informing very real geopolitical strategies that diminish any form of accountability. In the opening of Chapter 12 on military AI and autonomous weapons, <strong>Ayaz Syed</strong> (The Dais) paints a clear picture of how AI impacts military contexts. Firstly, national defence strategies emphasize the need to invest more in AI. Secondly, the now-literal AI arms race compresses decision windows, reducing complex risk evaluations to much quicker forms of automated calculus. Thirdly, opaque industrial and financial agreements are being brokered behind closed doors, circumventing the robust governance mechanisms of more traditional procurement practices. Finally, global coalitions are forming to protect human rights.</p><p>This final point is expanded on by <strong>Kirthi Jayakumar</strong> (civitatem resolutions), who introduces a variety of initiatives, including <a href="https://www.stopkillerrobots.org">Stop Killer Robots</a>, <a href="https://www.bdsmovement.net/no-tech-apartheid">No Tech for Apartheid</a>, <a href="https://notechfortyrants.org">No Tech for Tyrants</a>, the <a href="https://www.wilpf.org">Women&#8217;s International League for Peace and Freedom</a>, <a href="https://www.derechosdigitales.org/en/home/">Derechos Digitales</a> and <a href="https://www.gp-digital.org">Global Partners Digital</a>.
In this context, AI ethics advocacy sometimes emerges from surprising quarters, such as the Pope, who <a href="https://www.independent.co.uk/news/world/europe/pope-leo-nationalism-religion-violence-ai-b2886942.html">stated last week</a>, &#8220;there is ... a growing tendency among political and military leaders to shirk responsibility, as decisions about life and death are increasingly &#8216;delegated&#8217; to machines.&#8221;</p><p>What happens when that &#8220;delegation&#8221; is taken to its fullest? What happens when AI tools not only perform specific tasks or inform specific decisions, but also run entire workflows and even interact with one another? Chapter 13 introduces &#8220;AI agents&#8221; and &#8220;agentic AI systems,&#8221; which <strong>Renjie Butalid</strong> (MAIEI) defines respectively as &#8220;narrowly scoped systems that automate specific tasks through tool integration and structured prompts,&#8221; and &#8220;systems that orchestrate multiple specialized agents, maintain persistent memory across sessions, decompose objectives into subtasks, and operate in self-coordinating ways.&#8221; The terms themselves can be confusing, ascribing what appears to be human agency to AI systems. Relatedly, Butalid cites evidence showing that &#8220;augmentation&#8221; is what workers want AI agents to do, not &#8220;replacement.&#8221; This provides a neat backdrop to the ongoing <a href="https://www.techradar.com/computing/windows/microsoft-explains-how-windows-11s-ai-agents-will-get-access-to-your-files-but-bigger-worries-remain">backlash against Microsoft</a>, which has been touting the launch of AI agents in Windows 11. <strong>Kathy Baxter</strong> (Salesforce) concludes Chapter 13 with a critique, highlighting the need for traditional AI model governance to shift towards the governance of AI agents.
What&#8217;s more, Baxter posits an important question: &#8220;when is full autonomy appropriate, and when must humans remain in the loop?&#8221;</p><p>Amid this boom in AI innovation, many see a need to reclaim the agency that is seemingly being handed over to machines. Chapter 15 addresses this process of reclamation, beginning with examples of cooperative governance models described by <strong>Jonathan van Geuns</strong>. As they explain, communities must be involved in decisions about AI when AI technologies rely on the water, power and land that people need. <strong>David Atkinson</strong> (Georgetown University) follows with a critique of a common view on &#8220;democratizing&#8221; AI: that making models &#8220;open&#8221; is good enough. Atkinson points out a key flaw in this logic: making highly technical content available to the public does not mean the public can make sense of it. <strong>Ismael Kherroubi Garcia</strong> (Kairoi, RAIN &amp; MAIEI) provides a further critique of &#8220;openness,&#8221; a term co-opted by Big Tech, whereby &#8220;open source&#8221; has come to mean sharing a model&#8217;s weights rather than what truly makes a model open: the capacity to reuse, study, modify and share it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/saier-vol-7-landing/&quot;,&quot;text&quot;:&quot;Find the SAIER here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://montrealethics.ai/saier-vol-7-landing/"><span>Find the SAIER here</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank"
href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Op-ed: Agentic AI Systems</strong></h4><p>In this op-ed, Sun Gyoo Kang (<em>Law and Ethics in Tech</em>) comments on the new phase of e-commerce marked by the introduction of AI agents. He highlights the reorientation of common marketing strategies from influencing human psychology to machine logic as more tasks get delegated to AI agents. 
This shift brings significant changes and risks, with businesses facing new liability concerns and cybersecurity vulnerabilities.</p><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/agentic-ai-systems-and-algorithmic-accountability-a-new-era-of-e-commerce/">read the full article here</a></strong>.</em></p><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HbFT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HbFT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HbFT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid 
subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a></strong>. Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong>support@montrealethics.ai</strong>.</p><div><hr></div><h3><strong>&#9989; Take Action:</strong></h3><div><hr></div><p>Have an article, research paper, or news item we should feature?
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #179: Seen, But Not Heard: AI's Impact On Labour]]></title><description><![CDATA[We further explore our State of AI Ethics Report with Part III: Sectoral Applications while highlighting the latest addition to our AI Policy Corner on Brazil's newest AI-related bill]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-178-seen-but</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-178-seen-but</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 09 Dec 2025 15:03:13 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XjEc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on<a href="https://bsky.app/profile/mtlaiethics.bsky.social"> Bluesky</a> and<a href="https://www.linkedin.com/company/mtlaiethics"> LinkedIn.</a></em></p><p>&#10084;&#65039;<a href="https://montrealethics.ai/donate/"> Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XjEc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XjEc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!XjEc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!XjEc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 1272w, 
https://substackcdn.com/image/fetch/$s_!XjEc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XjEc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b67ff9e8-1328-497a-8524-9c6293119226_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:190611,&quot;alt&quot;:&quot;A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder length dark hair and wear a green t-shirt. The overall image is illustrated in a warm, sketchy, cartoon style.  Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at.  On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise.  Between the person and the images representing industries is a small character representing artificial intelligence made of lines and circles in green and red (like nodes and edges on a graph) who is standing with its &#8216;arms&#8217; and &#8216;legs&#8217; stretched out, and two antenna sticking up.  A similar patten of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen.  
The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/181025583?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder length dark hair and wear a green t-shirt. The overall image is illustrated in a warm, sketchy, cartoon style.  Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at.  On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise.  Between the person and the images representing industries is a small character representing artificial intelligence made of lines and circles in green and red (like nodes and edges on a graph) who is standing with its &#8216;arms&#8217; and &#8216;legs&#8217; stretched out, and two antenna sticking up.  A similar patten of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen.  The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons." title="A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder length dark hair and wear a green t-shirt. 
The overall image is illustrated in a warm, sketchy, cartoon style.  Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at.  On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise.  Between the person and the images representing industries is a small character representing artificial intelligence made of lines and circles in green and red (like nodes and edges on a graph) who is standing with its &#8216;arms&#8217; and &#8216;legs&#8217; stretched out, and two antenna sticking up.  A similar patten of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen.  The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons." srcset="https://substackcdn.com/image/fetch/$s_!XjEc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!XjEc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!XjEc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!XjEc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb67ff9e8-1328-497a-8524-9c6293119226_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h6>Credit: Yasmin Dwiputri &amp; Data Hazards Project / <a href="https://betterimagesofai.org">Better Images of AI</a> / <a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></h6><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Sectoral Applications:</strong> In Part III of the <em>State of AI Ethics Report </em>(SAIER), we explore some of the consequences of AI&#8217;s adoption across different sectors, including healthcare, education, oil and gas, and the arts.</p></li><li><p><strong>Brazil&#8217;s AI regulation proposal:</strong> This week, our AI Policy Corner with GRAIL at Purdue University analyzes Brazil&#8217;s latest effort in the regulation of AI, 
a bill establishing rules for &#8220;excessively risky&#8221; and &#8220;high-risk&#8221; AI systems.</p></li></ul><p><strong>What connects these stories:</strong></p><p>There is no doubt that AI is being developed, deployed and adopted in ways that significantly shape industries and workplaces. However, what happens when those impacts are negative? In <a href="https://substack.com/home/post/p-179803936">Brief #178</a>, we spoke of pockets of resistance emerging against centralized forms of AI governance that don&#8217;t respond to the needs of the people. Yet top-down policies remain a mainstay of modern-day governance. Brazil&#8217;s AI bill, currently undergoing the machinations of government bureaucracy, is one such policy. Still, the bill does make provisions relevant to some of the areas discussed in Part III of the <em>SAIER</em>.</p><p>Specifically, Brazil&#8217;s bill makes provisions for artists and creatives whose work is being digitally encoded into generative AI tools without their consent, without compensation, and without any means of control. Indeed, the bill received support just last month from <a href="https://www.fim-musicians.org/the-fim-glm-supports-the-brazilian-ai-bill/">the International Federation of Musicians&#8217; (FIM) Latin American Regional Group</a>, which is calling for other Latin American governments to follow suit. FIM, which represents musicians&#8217; trade unions, guilds and associations, serves as a case study of grassroots, workers-first action.</p><p>Meanwhile, in Part III of the <em>SAIER</em>, we learn of the risks posed by AI to creatives, the need for collective licensing agreements, and the potential for collective action. As it turns out, even in healthcare, trade unions and professional bodies have had to take a stand against technosolutionist agendas to protect both patients and healthcare professionals. 
Thus, what connects this week&#8217;s stories is the people who are self-organizing to fight for an AI ecosystem that engages meaningfully with the experiences of workers across industries.</p><div><hr></div><h3><strong>&#127881; </strong><em><strong>SAIER </strong></em><strong>is back - Part III: Sectoral Applications</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>On November 4th, 2025, we launched <em><a href="https://montrealethics.ai/saier-vol-7-landing/">SAIER Volume 7</a></em>, a snapshot of the global AI ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. 
Of course, not all voices could be captured, but <em>SAIER Volume 7</em> presents a grounded outlook on the current state of play in the world of AI ethics.</p><p>Healthcare and medical sciences are increasingly primed for AI applications. With personalized medicine coming into vogue, the ability of AI algorithms to parse enormous datasets and generate insights is becoming invaluable. One way to organize such data about individual patients is through &#8220;digital twins&#8221;: computational representations of live data streams from, in our case, a patient. <strong>Rosa E. Mart&#237;n-Pe&#241;a</strong> (<strong>Leibniz University Hannover</strong>) explains in Chapter 8 the role digital twins can play in making healthcare more dynamic, not limited to routine appointments or emergency check-ups, but drawing on consistently up-to-date data. The cost of digital twins in healthcare then becomes the &#8220;quiet erosion of patient privacy,&#8221; as Mart&#237;n-Pe&#241;a puts it; she also warns of the potential for computational data to undermine patients&#8217; testimonies.</p><p>Indeed, more critical approaches to AI are needed in healthcare settings, and <strong>Zoya Yasmine</strong> (<strong>University of Oxford</strong>) explains the role that trade unions have played in protecting patient safety and professional accountability in the UK. In particular, Yasmine recounts instances in which the British Medical Association and the Royal College of General Practitioners denounced &#8220;untried and untested&#8221; AI systems in medical settings.</p><p>Chapter 9 of the <em>SAIER</em> takes us away from AI in healthcare to AI in education, a key target for many AI chatbot and LLM developers. 
<a href="https://openai.com/index/estonia-schools-and-chatgpt/">Estonia</a>, <a href="https://www.reuters.com/technology/greece-openai-agree-deal-boost-innovation-schools-small-businesses-2025-09-05/">Greece</a> and <a href="https://www.prnewswire.com/news-releases/openai-and-freedom-holding-corp-to-advance-digital-education-in-kazakhstan-302608831.html">Kazakhstan</a> are among those signing agreements with OpenAI for education sector-wide implementations of ChatGPT, while <a href="https://www.anthropic.com/news/anthropic-and-iceland-announce-one-of-the-world-s-first-national-ai-education-pilots">Iceland</a> and <a href="https://www.anthropic.com/news/rwandan-government-partnership-ai-education">Rwanda</a> have opted for Anthropic&#8217;s Claude. However, <strong>Tamas Makany</strong> and <strong>Ivy Seow</strong> (<strong>Singapore Management University</strong>) remind us that what really matters is the creation of learning environments that are welcoming to, well, learning. As they explain, &#8220;classrooms once served as backstage rehearsals where mistakes and questions were welcome. Now, ChatGPT fills this space of preparation, turning class participation into front-stage performances.&#8221;</p><p><strong>Colleagues from Encode Canada</strong> build on this insight, showing how different disciplines are protecting the space for critical thinking in schools amid the deluge of LLMs. In their experience, language departments are stricter at earlier education levels; STEM departments have turned from take-home exams towards in-person labs; and some humanities and social science departments have opted for &#8220;contractual agreements&#8221; that bind students to submitting original work.</p><p><strong>Dr Elizabeth M. Adams</strong> (<strong>Minnesota Responsible AI Institute</strong>) begins Chapter 10 by recentring the question of AI adoption on employees. Adams argues that employees should be consulted earlier in the AI procurement process. 
Treated as &#8220;active partners in innovation,&#8221; employees can trust that AI-related decisions are well-founded, and organizations can avoid the staff disillusionment that follows when AI tools are brought in suddenly and as a potential replacement for human labour.</p><p><strong>Ryan Burns</strong> (<strong>University of Washington Bothell</strong>) and <strong>Eliot Tretter</strong> (<strong>University of Calgary</strong>) then situate the impact of automation on employees within its broader geographical and economic context. Focusing on the case of the oil and gas industry in Alberta, Canada, Burns and Tretter describe oil patches where employees are replaced by robots controlled from far-off cities, leading to &#8220;precarious, short-term, often freelance contracts.&#8221; Furthermore, disproportionate impacts may follow for First Nations communities, who make up a large share of the service industry in the region.</p><p>Chapter 11 tackles AI in arts, culture and media. <strong>Katrina Ingram </strong>(<strong>Ethically Aligned AI</strong>) opens the chapter with the case of AI-generated voices for radio stations. &#8220;Thy&#8221; took to the airwaves of the Australian Radio Network (ARN) in 2024, but it wasn&#8217;t until April 2025 that it became clear the voice was AI-generated. &#8220;Thy&#8221; is a stark example of how AI-generated content can go wrong: not only had ARN concealed that &#8220;Thy&#8221; was not human, but the tool was modelled on an unnamed Asian-Australian woman; &#8220;Thy&#8221; was a &#8220;diversity hire&#8221; that wasn&#8217;t even human.</p><p><strong>Anna Sikorski</strong> and <strong>Kent Sikstrom</strong> (<strong>Alliance of Canadian Cinema, Television and Radio Artists; ACTRA</strong>) follow with an essay on solutions to the issue of AI-generated assets across the industry. 
Consent, compensation and control constitute the core principles of the collective licensing agreements for which they lobby. The first two sections of Chapter 11 provide a robust grounding for understanding the <a href="https://news.sky.com/story/industrial-action-on-agenda-as-actors-balloted-by-equity-over-ai-scanning-concerns-13479308">potential for strike action by actors in the UK</a>, as their union Equity canvasses its members over AI concerns.</p><p><strong>Amanda Silvera </strong>brings the chapter to a close with a name for the ongoing automation of actors&#8217; work. Recalling how the Little Mermaid, unaware of the full consequences of her decision, sacrificed her voice to live in a new world, Silvera calls the &#8220;non-consensual pact where human identity is extracted for technological progress, rarely with true understanding of the cost&#8221; the &#8220;Ursula Exchange.&#8221; In response to the Ursula Exchange, Silvera introduces us to SOBIRTECH, &#8220;a framework designed to verify, license, and monitor the use of biometric identifiers like voice and likeness.&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/saier-vol-7-landing/&quot;,&quot;text&quot;:&quot;Find the SAIER here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://montrealethics.ai/saier-vol-7-landing/"><span>Find the SAIER here</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: How Brazil Plans to Govern AI: Reviewing PL 2338/2023</strong></h4><p>This article is part of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner</a> series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This week&#8217;s piece provides an overview of Brazil&#8217;s Bill on AI regulation, PL 2338/2023.</p><p>The Bill would give structure to the nation&#8217;s AI ecosystem, prohibiting &#8220;excessive risk&#8221; systems and imposing legal requirements on &#8220;high-risk&#8221; systems. 
The Bill also makes key provisions for Brazilian citizens to have oversight of those AI systems that impact them, ultimately promoting transparency and accountability.</p><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-how-brazil-plans-to-govern-ai-reviewing-pl-2338-2023/">read the full article here</a></strong>.</em></p><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HbFT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HbFT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HbFT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid 
subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a></strong>. Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong>support@montrealethics.ai</strong>.</p><div><hr></div><h3><strong>&#9989; Take Action:</strong></h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #178: How AI Regulation Can Both Harm and Foster Social Justice]]></title><description><![CDATA[We further explore our State of AI Ethics Report with Part II: Social Justice & Equity, while highlighting the latest addition to our AI Policy Corner on Ukraine's AI regulation whitepaper]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-178-how-regulation</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-178-how-regulation</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 25 Nov 2025 15:03:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3Yj7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on<a href="https://bsky.app/profile/mtlaiethics.bsky.social"> Bluesky</a> and<a href="https://www.linkedin.com/company/mtlaiethics"> LinkedIn.</a></em></p><p>&#10084;&#65039;<a href="https://montrealethics.ai/donate/"> Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3Yj7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3Yj7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!3Yj7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!3Yj7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 1272w, 
https://substackcdn.com/image/fetch/$s_!3Yj7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3Yj7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3340960,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/179803936?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3Yj7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!3Yj7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 848w, 
https://substackcdn.com/image/fetch/$s_!3Yj7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!3Yj7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F270aa77f-abfc-4b85-8c3d-4b9758a61dea_1920x1080.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>Credit: Clarote &amp; AI4Media / <a href="https://betterimagesofai.org/">https://betterimagesofai.org</a> / <a 
href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</a></h6><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Social Justice &amp; Equity: </strong>This week, we dig deeper into Part II of the <em>SAIER, Volume 7</em>, featuring insights from experts across Canada, the US and the UK. We explore how nefarious actors exploit AI to promote disinformation, how pockets of resistance are forming to fight for algorithmic justice, and how AI infrastructure is being developed to the detriment of the natural environment.</p></li><li><p><strong>Ukraine&#8217;s AI regulation proposal:</strong> Our <em><a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner</a></em> with GRAIL at Purdue University analyzes the two-stage approach to AI regulation and development outlined in Ukraine&#8217;s 2024 whitepaper, explaining how these two stages fit into the broader geopolitical context.</p></li></ul><p><strong>What connects these stories:</strong></p><p>From the internet and GPS to Apple&#8217;s AI assistant Siri, WD-40 and drones, many commonplace technologies and products today have their origins in military settings. It is no surprise, then, that the ongoing war between Russia and Ukraine has informed <a href="https://storage.thedigital.gov.ua/files/c/fc/36c4cae89deedfbf3781ec6bceddffcc.pdf">the Ukrainian regulatory stance on AI</a>: restricting the use of AI in the defence sector will only put Ukraine in a &#8220;less favourable position compared to the aggressor [Russia],&#8221; which plans to continue to use AI in its military applications.</p><p>This example, and Ukraine&#8217;s AI whitepaper in general, underscore the global context in which top-down AI governance initiatives take shape, pulled in different directions by different parties. 
In recent weeks, we have seen this unfold in global policy, with the EU criticized for <a href="https://www.amnesty.org/en/latest/news/2025/11/eu-simplification-throwing-human-rights-under-the-omnibus/">undermining</a> the AI Act and leaning towards corporate interests.</p><p>Nevertheless, a swathe of grassroots resistance initiatives is emerging in response to these top-down governance mechanisms. These initiatives are varied in nature, reflecting the wide reach of AI systems across sectors such as surveillance, health and education. They ask <em>why </em>these AI applications, lauded as optimization tools that streamline otherwise slow and inefficient systems and institutions, have become so widespread. Part II of the <em>SAIER</em> examines these applications through a critical lens, highlighting the diversity of responses to AI-enabled injustices.</p><div><hr></div><h3><strong>&#127881; </strong><em><strong>SAIER </strong></em><strong>is back - Part II: Social Justice &amp; Equity</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>On November 4th, 2025, we launched <em><a href="https://montrealethics.ai/saier-vol-7-landing/">SAIER Volume 7</a></em>, a snapshot of the global AI ethics arena, with perspectives from Canada, the US, Africa, Asia and Europe. Of course, not all voices could be captured, but <em>SAIER Volume 7</em> presents a grounded outlook on the current state of play in the world of AI ethics.</p><p>Part II tackles the enormous topic of social justice, reminding us that AI technologies are neither created nor used in a vacuum but are laden with human values that lead to tangible social consequences. AI technologies thus embody a range of threats to, and opportunities for, social institutions. Just this month, we have seen the culmination of an electoral process steeped in &#8220;AI slop&#8221;: the election of New York City&#8217;s mayor, Zohran Mamdani. Indeed, two other contenders for the position had <a href="https://www.theguardian.com/us-news/2025/nov/13/ai-campaign-videos-elections">used generative AI tools in different ways</a>, from drafting housing plans, to calling potential voters in different languages using the contender&#8217;s voice, to producing racist videos targeting Mamdani and people of colour.</p><p>In light of this event, <strong>Rachel Adams from the University of Cambridge</strong> explains at the start of Chapter 4 that democracy is under attack. 
Specifically, <strong>Adams </strong>describes attacks on election processes enabled by &#8220;AI slop&#8221; across the Global South, and a response from Big Tech that culminated in the 2024 <em><a href="https://securityconference.org/en/aielectionsaccord/">AI Elections Accord</a></em>, which critics called &#8220;voluntary and uneven.&#8221; In response to Big Tech&#8217;s inaction, fact-checking networks have emerged, with <a href="https://africacheck.org">Africa Check</a> leading the way. Other initiatives harness technology to the same end. <strong>Linda Solomon Wood</strong>, from <strong>Canada&#8217;s National Observer</strong>, introduces us to &#8220;<a href="https://civicsearchlight.nationalobserver.com/#/register">Civic Searchlight</a>,&#8221; a tool that analyzes transcripts of municipal meetings across Canada and uncovers patterns. The tool has proven valuable both for identifying misinformation campaigns and for revealing how different municipalities tackle similar issues, opening the door to new partnerships and collaborations. <strong>Mozilla Foundation&#8217;s Seher Shafiq</strong> subsequently offers a word of caution against automating electoral processes, explaining how generative AI tools disengage marginalised voters.</p><p>Chapter 5 digs deeper into three areas of AI-enabled injustice: state policy, surveillance and education. In each area, pockets of resistance are forming. 
<strong>Blair Attard-Frost</strong>, from the <strong>University of Alberta &amp; Alberta Machine Intelligence Institute</strong>, challenges state-led, top-down &#8220;governance fixes.&#8221; Indeed, evolving stories across governments, such as the UK&#8217;s &#8220;<a href="https://www.gov.uk/government/news/ai-to-power-national-renewal-as-government-announces-billions-of-additional-investment-and-new-plans-to-boost-uk-businesses-jobs-and-innovation">AI growth zones</a>&#8221; and the EU&#8217;s <a href="https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718">simplification of digital rules</a>, highlight how corporate interests are being protected by traditional governance processes. <strong>Attard-Frost</strong> introduces decentralized approaches that counteract the prevailing direction of travel across policing, the arts and labour. <strong>Jess Reia</strong>, from the <strong>Digital Technology for Democracy Lab (University of Virginia)</strong>, follows on, arguing against traditional top-down governance mechanisms, which have come to specifically target transgender individuals and activism via facial recognition systems, content moderation policies and surveillance technologies. <strong>Adnan Akbar</strong>, from <strong>tekniti.ai</strong>, then offers an example from the education sector with an essay on students&#8217; experiences of AI solutions at university. As it turns out, large language models may only have a limited role in supporting students, who ultimately benefit from conversations with tutors. 
<strong>Akbar </strong>provides a valuable backdrop for understanding <a href="https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-large-part-by-ai-artificial-intelligence">last week&#8217;s story</a> about a UK government-funded apprenticeship designed to help students become cybersecurity experts or software engineers, which left students frustrated by their constant exposure to AI-generated materials.</p><p>Further case studies are presented by the contributors to Chapter 6 on AI surveillance, privacy, and human rights. <strong>Maria Lungu,</strong> from the <strong>University of Virginia</strong>, shares grassroots initiatives led by citizens who have opposed or sought to inform AI applications in US public services. <strong>Roxana Akhmetova</strong>, from the <strong>University of Oxford</strong>, provides a glimpse of the global scale of the problem at hand, listing digital surveillance systems implemented in 2025 across Australia, China, Costa Rica, the EU, Kenya, Mexico, Nigeria, the UK, the US, Vietnam and Zimbabwe. <strong>Jake Wildman-Sisk</strong>, a lawyer, then provides a case study from Canada, where a ruling against facial recognition tool developer Clearview has been interpreted differently by different bodies, creating an uneven policy landscape.</p><p>Underpinning all of these discussions is the very material reality of AI technologies. Consequently, Chapter 7 tackles the crucial issue of the environmental impacts of AI technologies and infrastructures. <strong>Burkhard Mausberg</strong> and <strong>Shay Kennedy</strong>, from <strong>Small Change Fund</strong>, provide data on the electricity usage, water usage and CO<sub>2</sub> emissions of data centres, which provide AI models with the infrastructure they need to run. Novel hyperscale data centres, they suggest, may even reshape rural landscapes. 
Among other figures, they highlight that<a href="https://www.iea.org/reports/energy-and-ai"> the International Energy Agency projects</a> that data centres will account for 1 to 1.4% of global CO<sub>2</sub> emissions in the next decade. <strong>Trisha Ray</strong>, from the <strong>Atlantic Council</strong>, broadens the picture of the ecosystem impacted by AI. <strong>Ray </strong>highlights the upstream supply chain of AI, which relies on the mining of critical minerals and the transportation infrastructure that supports it. As a result, instances of local resistance are on the rise. Toxic waste is added to the mix of environmental impacts in <strong>Priscila Chaves Mart&#237;nez&#8217;s </strong>analysis. <a href="https://doi.org/10.1016/j.hazadv.2025.100873">Health studies</a> have shown a link between e-waste sites and greater infant mortality in Sub-Saharan Africa, whilst <a href="https://elpais.com/tecnologia/2025-08-09/el-patio-trasero-de-la-ia-un-mapa-de-la-fiebre-del-oro-del-siglo-xxi.html">journalists have denounced</a> the dynamics of power and corruption that let hyperscalers Microsoft, Amazon and Google skip environmental impact reports during droughts in Chile, Mexico and Spain.</p><p>In sum, Part II of the SAIER challenges the disconnect between how AI is perceived and what AI is. The power dynamics and narratives that fuel AI investments and excitement are coming under scrutiny. Decentralized, grassroots movements are forming to challenge instances of AI-mediated injustice. 
Whether they focus on trans rights, the spread of misinformation, impacts on workers, the transformation of rural landscapes or the destruction of natural ecosystems, these movements are central to advancing AI technologies that protect human rights and human flourishing, rather than contributing to their erosion.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/saier-vol-7-landing/&quot;,&quot;text&quot;:&quot;Find the SAIER here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://montrealethics.ai/saier-vol-7-landing/"><span>Find the SAIER here</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zuCh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zuCh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 424w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 848w, https://substackcdn.com/image/fetch/$s_!zuCh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!zuCh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc68a37f5-0861-4390-a577-f042e97136df_1456x137.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: Reviewing Ukraine&#8217;s Whitepaper on Artificial Intelligence Regulation</strong></h4><p>This article is part of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner</a> series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyzes the two-stage approach to AI regulation and development outlined in Ukraine&#8217;s <em><a href="https://storage.thedigital.gov.ua/files/c/fc/36c4cae89deedfbf3781ec6bceddffcc.pdf">White Paper on Artificial Intelligence Regulation</a></em> (Version for Consultation), explaining how these two stages fit into the nation&#8217;s broader geopolitical context.</p><p>The whitepaper is intended to inform policies that support Ukraine&#8217;s goals of business competitiveness, human rights protection, and European integration while shielding its defense AI sector from regulation. 
It proposes a bottom-up approach encompassing two stages: a preparatory stage that allows for industry and state planning, followed by a second stage that introduces binding statutes aiming to gradually replicate the EU&#8217;s Artificial Intelligence Act.</p><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-reviewing-ukraines-whitepaper-on-artificial-intelligence-regulation/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-reviewing-ukraines-whitepaper-on-artificial-intelligence-regulation/">.</a></em></p><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HbFT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HbFT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HbFT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HbFT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!HbFT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcde391e3-3c56-4e53-9b34-07cc9b2a1aad_1456x213.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at<a href="https://montrealethics.ai/donate"> </a><strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours<a href="https://montrealethics.ai/open-letter-moving-forward-together/"> </a><strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong>support@montrealethics.ai</strong></p><div><hr></div><h3><strong>&#9989; Take Action:</strong></h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #177: Community-led AI Governance]]></title><description><![CDATA[The SAIER, performative AI governance, and AI lab transparency]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-177-community</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-177-community</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 11 Nov 2025 15:01:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WYCy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!WYCy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WYCy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!WYCy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!WYCy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!WYCy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WYCy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png" width="1280" height="720" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:720,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1628905,&quot;alt&quot;:&quot;A vintage photograph of two donkeys hitched to a wooden cart feeding in a brick street with a collage of technology as their 
load.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/178487843?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A vintage photograph of two donkeys hitched to a wooden cart feeding in a brick street with a collage of technology as their load." title="A vintage photograph of two donkeys hitched to a wooden cart feeding in a brick street with a collage of technology as their load." srcset="https://substackcdn.com/image/fetch/$s_!WYCy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 424w, https://substackcdn.com/image/fetch/$s_!WYCy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 848w, https://substackcdn.com/image/fetch/$s_!WYCy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 1272w, https://substackcdn.com/image/fetch/$s_!WYCy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F623b80a3-f28f-4e57-8653-f1fecf8c6aa3_1280x720.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h6>                     Suraj Rai &amp; Digit / <a href="https://betterimagesofai.org">Better Images of AI</a> / https://creativecommons.org/licenses/by/4.0/</h6><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><em><strong>SAIER</strong></em><strong> is back:</strong> After a three-year pause, our flagship <em><a href="https://montrealethics.ai/saier-vol-7-landing/">State of AI Ethics Report (SAIER) </a></em>returned to address the gap between AI ethics principles and actual practice with emphasis on community-led action. 
SAIER Volume 7 was published November 4, 2025, and includes 58 international contributors across 17 chapters, 48 essays, and 5 critical themes.</p></li><li><p><strong>Foundational concepts and AI governance:</strong> We dig into <em>Part I</em> of <em>SAIER Volume 7</em>, which examines institutional interpretations and implementations of foundational concepts, including national responses to policies across the US and China, organizational AI governance mechanisms, and African initiatives for ethical capacity building.</p></li><li><p><strong>OpenAI and Anthropic&#8217;s approaches to transparency:</strong> As examined in our <em><a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner</a></em> with GRAIL at Purdue University, both companies place a strong emphasis on transparency within their AI governance frameworks. However, the two companies diverge in how they frame, and how they pursue, the goal of having their products deemed safe.</p></li></ul><p><strong>What connects these stories: </strong>There is an ongoing retreat from <em>ethics</em> to <em>compliance</em>. Some international, national, and regional AI policies are being watered down, succumbing to the interests of Big Tech, with ethical practice often treated as the price of faster innovation. The same is true of organizational AI governance, where voluntary codes with little oversight are preferred to meaningful public engagement and reflection.</p><p>Facing this trend, movements of resistance are emerging, some seeking to emphasize ethics as a practice rather than a relentless focus on building and releasing the latest in tech. As is the ethos of our SAIER, a meaningful, community-led approach to the challenges that different forms of AI present is not a &#8220;nice-to-have&#8221;; it is necessary.
Without this approach, vulnerable communities are ignored, with their interests squashed by the few who shout loudest in the name of innovation.</p><p>There is no pre-determined outcome or path for AI development, nor a single methodological approach. Approaches differ and desired outcomes diverge, but intentional public dialogue and discussion deserve a place at the top of the agenda of those developing AI systems.</p><div><hr></div><h3><strong>&#127881; </strong><em><strong>SAIER </strong></em><strong>is back</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>On November 4th, 2025, we launched </strong><em><strong>SAIER Volume 7</strong></em><strong>, a snapshot of the global AI ethics arena.</strong></p><p>This <em>Brief #177</em> is the first in a special series, each edition tackling a specific part of <em>SAIER Volume 7</em>. The series both celebrates the wonderful work of the report&#8217;s contributors and demonstrates how the report can be used to navigate the changing AI ethics landscape.</p><p><em>Part I </em>of <em>SAIER Volume 7 </em>has three chapters: (i) global AI governance, (ii) the various terms often intermingled when discussing AI ethics, and (iii) the organizational implementation of AI governance mechanisms. Together they cover the patterns we are currently seeing in the AI governance space, offering Canadian, Singaporean, and African perspectives on how AI governance practices are evolving and what this means for the space as a whole.</p><p>With perspectives from Canada, the US, Europe, Asia, and Africa, we acknowledge that not all voices could be captured.
However, the ethos of <em>SAIER Volume 7</em> is to present a grounded outlook on the current state of play in AI ethics, which we hope proves useful in your daily work, helping you navigate this dense and often obscure landscape.<br></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/saier-vol-7-landing/&quot;,&quot;text&quot;:&quot;Read the SAIER here&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://montrealethics.ai/saier-vol-7-landing/"><span>Read the SAIER here</span></a></p><div><hr></div><h3><strong>&#128680; Recent Developments on AI Ethics Foundations and Governance</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>The first chapter of the SAIER, </strong><em><strong>Global AI Governance at the Crossroads</strong></em><strong>, surveys the current AI governance landscape.</strong> </p><p><strong>Jimmy Y. Huang</strong> (MAIEI and McGill University) and <strong>Renjie Butalid</strong> (Co-founder, MAIEI) provide a bird&#8217;s-eye view of the ever-changing global AI policy landscape. They identify three major blocs: those with the &#8220;superpower resources&#8221; to drive much of the conversation (China and the US), those seeking greater &#8220;AI sovereignty&#8221; through cooperation (African Union, Association of Southeast Asian Nations, Council of Europe, and Gulf Cooperation Council), and &#8220;middle powers&#8221; trying to scale their own AI champions while protecting their national interests (e.g., Canada, Australia, South Korea, and Singapore). The essay by <strong>Wan Sie Lee</strong> (Infocomm Media Development Authority of Singapore) in <em>Chapter 2</em> further explains how this picture is changing, with the narrowing of &#8220;AI safety&#8221; at the policy level becoming clear at the AI Action Summit in Paris earlier this year.</p><p>Chapter 2, on <em>Disentangling AI Safety, AI Alignment and AI Ethics</em>, further contextualizes AI governance developments. 
<strong>Ren&#233;e Sieber</strong> (McGill University) distills some of the concepts, outlining various notions of &#8220;AI Safety&#8221; (which is increasingly institutionalized, as the <a href="https://digital-strategy.ec.europa.eu/en/news/first-meeting-international-network-ai-safety-institutes">&#8220;International Network of AI Safety Institutes&#8221;</a> demonstrates), &#8220;AI alignment&#8221; and &#8220;AI ethics.&#8221; From Sieber&#8217;s analysis, we can highlight a &#8220;retreat&#8221; identified in the UK&#8217;s AI Safety Institute, which became the AI Security Institute early this year. To the above concepts, <strong>Fabio Tollon</strong> (University of Edinburgh) adds that of &#8220;responsible AI,&#8221; a contested term that may be either a research agenda, a governance model, an AI product or some ecosystem of stakeholders, practices and interests.</p><p>This brings us onto chapter 3, where <strong>Ismael Kherroubi Garcia</strong> (Kairoi &amp; MAIEI), <strong>Joahna Kuiper</strong> (HiirAI) and <strong>Shi Kang&#8217;ethe</strong> (AIVERSE) write on <em>Implementing AI Ethics in Organizations</em>. While Tollon situated much of &#8220;responsible AI&#8221; in industry, Kherroubi Garcia draws on <a href="https://doi.org/10.5281/zenodo.15195686">Tollon&#8217;s recent work</a> to re-emphasize the &#8220;retreat&#8221; Sieber had hinted at; this time, not at a national level but an organizational one, as there is an ongoing shift from &#8220;responsible AI&#8221; to compliance. This is consistent with Kuiper&#8217;s analysis, which highlights the pressure organizations face to monetize AI products and respond to legal frameworks; most notably, the EU&#8217;s AI Act.</p><p>While industry in the West follows its usual standards, Kang&#8217;ethe provides some hope for the responsible AI ecosystem from Africa. 
Kang&#8217;ethe describes AI bootcamps, leadership programs, community-informed research, webinars and hackathons that place dialogue and participants&#8217; reflections at the centre. The goal of such initiatives is to develop not the capacity to build AI tools but the capacity to critically evaluate them. Despite this different approach, further policy fragmentation looms as African nations strive to lead the way, including <a href="https://gaa.go.ke/index.php/kenya-unveils-bold-plan-become-africas-code-nation-software-and-ai-summit">Kenya</a>, <a href="https://iafrica.com/zambia-confronts-the-digital-scramble-for-africa-in-landmark-ai-and-biodiversity-forum/">Zambia</a> and <a href="https://www.moroccoworldnews.com/2025/11/266786/morocco-continues-to-guide-africas-digital-ai-transformation/">Morocco</a>.</p><h5><strong>&#128204; Our Position</strong></h5><p>October 31st saw the end of Canada&#8217;s &#8220;<a href="https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership">30-day national sprint to shape a renewed AI strategy</a>.&#8221; The sprint was announced in September alongside an AI Strategy Task Force, which we criticized for being too industry-focused in <em><a href="https://aiethics.substack.com/p/the-ai-ethics-brief-175-when-consultation">Brief #175</a></em>. 
But we can now build on some of the responses that have been made openly accessible to further this perspective.</p><p>With over 120 signatories from civil society, human rights, and civil liberties organizations, the <a href="https://bccla.org/policy-submission/open-letter-to-the-minister-of-artificial-intelligence-and-digital-innovation-from-civil-society-organizations-and-individuals-opposing-national-sprint-consultation-on-ai-strategy/">open letter</a> coordinated by <a href="https://citizenlab.ca/2025/11/citizen-lab-researchers-sign-open-letter-on-canadas-ai-strategy/">Citizen Lab researchers and director Ron Deibert</a> condemns the sprint&#8217;s consultation process as biased: it safeguards the interests of industry while its short response window limits meaningful public engagement.</p><p><a href="https://docs.google.com/document/d/1lhfYYF6qaJePvyiH7Xc6OSQtVpVzG-k66TCxYLfITtY/edit?tab=t.0">Further critique</a> ensued: 70 organizations (including MAIEI) called for deeper and more meaningful engagement with minoritized communities, while <a href="https://github.com/KairoiAI/Resources/blob/497fe523bf2ce1672b1506d5b50bb58b43bc6859/Static/20251030_Response.pdf">Kairoi</a> called on Canada to utilize its influence in the West to act more authentically in light of its Indigenous heritage.</p><p>Notwithstanding diverse organizations scrambling to provide meaningful input within the tight timeframe, the 2025 government budget, published November 4th, made very clear that AI would be further adopted across government departments to improve &#8220;<a href="https://budget.canada.ca/2025/report-rapport/anx3-en.html">operational efficiencies</a>.&#8221; Yet, without proper consultation with the departments in question, the implementation of AI risks being performative and forced. 
We cannot ignore that the three pillars of the <a href="https://ised-isde.canada.ca/site/ai-strategy/en">Pan-Canadian AI Strategy</a> (commercialization, standards, and talent and research) established in 2017 formed the core of the consultation. Yet it seems the recent emphasis has landed on commercialization.</p><p>Hence the title of <em><a href="https://aiethics.substack.com/p/the-ai-ethics-brief-175-when-consultation">Brief #175</a></em>: &#8220;When Consultation Becomes Theatre.&#8221; The decisions already in place and the unnecessarily short turnaround reinforce the sense that public consultation on AI in Canada is performative: it is not about getting the nation&#8217;s input on some of the most important tech policies of the decade, but about appearing to do so.</p><p>The common thread between all these critiques is clear: the Canadian Government&#8217;s consultation attempt became a series of platitudes, performative and devoid of meaningful impact.</p><p>Nevertheless, the Toronto branch of ACTRA (Alliance of Canadian Cinema, Television and Radio Artists) made an attempt to gather responses from an otherwise excluded group: artists. ACTRA Toronto put together <a href="https://actratoronto.com/wp-content/uploads/2025/10/AI-Survey-Consultation-Guide-ACTRA-Toronto_final.pdf">suggested responses</a> for its members to remind the government of the importance of consent, compensation, and control when it comes to AI in the arts. This form of consultation is what the sprint lacked: on-the-ground input from members of the public facing genuine threats to the core of their work.</p><p>It is this kind of analysis that will better serve the general public; its absence spells a vacuous, misaligned implementation that ignores those most affected.</p><p><strong>Did we miss anything? 
Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-177-community/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-177-community/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: Transparency in AI Lab Governance: Comparing OpenAI and Anthropic&#8217;s Approaches</strong></h4><p>This article is part of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner</a> series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University. The article compares OpenAI&#8217;s <a href="https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf">Preparedness Framework</a> (Version 2, last updated April 15, 2025) and Anthropic&#8217;s <a href="https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf">Responsible Scaling Policy</a> (Version 2.2, effective May 14, 2025), highlighting how each lab approaches transparency as part of its governance processes and how this shapes the level of accountability each company is willing to commit to.</p><p>The difference between the two companies&#8217; approaches demonstrates how value-laden the AI space truly is, raising critical questions about the ambitions and strategy behind their commitments to transparency. 
As with the Canadian Government&#8217;s performative sprint, it remains to be seen how formalised the external dialogue both companies promise will become and, above all, how genuine an impact this dialogue will have on the decisions they make.</p><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-transparency-in-ai-lab-governance-comparing-openai-and-anthropics-approaches/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-transparency-in-ai-lab-governance-comparing-openai-and-anthropics-approaches/">.</a></em></p><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[Special Edition: State of AI Ethics Report (SAIER) Volume 7]]></title><description><![CDATA[AI at the Crossroads: A Practitioner's Guide To Community-Centered Solutions]]></description><link>https://brief.montrealethics.ai/p/special-edition-state-of-ai-ethics</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/special-edition-state-of-ai-ethics</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 04 Nov 2025 15:03:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/11e162ff-7421-4650-a0c5-048c5c3bbce8_2560x1721.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!k3Qe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!k3Qe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 424w, https://substackcdn.com/image/fetch/$s_!k3Qe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 848w, https://substackcdn.com/image/fetch/$s_!k3Qe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!k3Qe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!k3Qe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg" width="1200" height="560" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:560,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:78609,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/177773977?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!k3Qe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 424w, https://substackcdn.com/image/fetch/$s_!k3Qe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!k3Qe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!k3Qe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06817b34-c732-4208-8bb2-418d600affd0_1200x560.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Banner Image Credit:</strong> Distorted Lake Trees by Lone Thomasky &amp; Bits&amp;B&#228;ume, 
featured in <a href="https://betterimagesofai.org/images?artist=LoneThomasky&amp;title=DistortedLakeTrees">Better Images of AI</a>, licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0.</a></figcaption></figure></div><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/state/&quot;,&quot;text&quot;:&quot;Download SAIER Vol. 7&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://montrealethics.ai/state/"><span>Download SAIER Vol. 7</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><p>After months of collaborative work with <strong>58 contributors</strong> from around the world, we&#8217;re sharing this assessment of where AI governance stands in 2025, what&#8217;s working, what&#8217;s failing, and where urgent action is needed.</p><p><strong>Editors:</strong> Renjie Butalid, Connor Wright, Ismael Kherroubi Garc&#237;a</p><p>This volume is dedicated to the memory of <a href="https://montrealethics.ai/in-memoriam-abhishek-gupta-dec-20-1992-sep-30-2024/">Abhishek Gupta</a>, MAIEI co-founder, whose spirit and vision continue to guide our work.</p><p><strong>Access the report:</strong> <a href="https://montrealethics.ai/saier-vol-7-landing/">MAIEI website</a> | <a href="https://zenodo.org/records/17328882">Zenodo</a></p><div><hr></div><h3>Why This Report Matters Now</h3><p>2025 will be remembered as the year AI governance became intensely geopolitical. Within days of each other in July, the <a href="https://brief.montrealethics.ai/i/169449213/when-the-worlds-two-ai-superpowers-unveil-competing-visions-within-days-of-each-other-are-we-witnessing-strategic-coordination-or-dangerous-fragmentation-in-global-ai-governance">United States and China</a> published competing visions for AI&#8217;s future. 
Meanwhile, the gap between those building AI systems and those impacted by them has grown wider and more structural.</p><p>A recent <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/?mod=article_inline">MIT report found</a> that 95% of generative AI pilots are failing, not because the technology doesn&#8217;t work, but because implementation infrastructure, literacy, and participation mechanisms are broken.</p><p>If 2025 taught us anything, it&#8217;s that AI governance is fundamentally about power: <br>who has it, who controls it, and how communities respond when excluded from decisions that shape their lives.</p><p>SAIER Volume 7 documents this crossroads moment through practitioner perspectives, real-world case studies, and critical analysis across 17 chapters and five major themes.</p><blockquote><p><a href="https://montrealethics.ai/saier-vol-7-contributors/">View the full list of contributors to SAIER Volume 7 here</a></p></blockquote><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/state/&quot;,&quot;text&quot;:&quot;Download SAIER Vol. 7&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://montrealethics.ai/state/"><span>Download SAIER Vol. 7</span></a></p><div><hr></div><h3>A Note on What This Report Is</h3><p>This is not another framework or productivity analysis. SAIER Volume 7 is a snapshot from our vantage point: perspectives from practitioners, researchers, advocates, and communities grappling with AI&#8217;s realities on the ground. It captures perspectives from Canada, the US, Europe, Asia, and Africa, though we recognize many voices and regions remain underrepresented.</p><p>This isn&#8217;t <em>the</em> definitive state of AI ethics. It&#8217;s <em>a </em>state of AI ethics, from where we sit, in 2025. 
We hope it generates more questions than it answers and encourages people to ask hard questions and be critical.</p><p>At the end of the day, AI isn&#8217;t just about machines, models, or infrastructure. It&#8217;s about people. What role do we want humanity to play in shaping a future increasingly mediated by algorithms?</p><div><hr></div><h3>What&#8217;s Inside</h3><h4>Opening Reflections</h4><p>We open with reflections from <strong>Renjie Butalid</strong> (MAIEI Co-Founder), <strong>Marianna Ganapini</strong> (MAIEI Faculty Director), and <strong>Daniel Schiff</strong> (Co-Director of GRAIL, Purdue University), situating this volume at a critical inflection point for AI governance and MAIEI&#8217;s mission to democratize AI ethics discourse.</p><h4><strong>Part I:</strong> <strong>Foundations &amp; Governance</strong></h4><p>Mapping the foundations of AI governance in 2025: competing global policies, conceptual underpinnings, and practical implementation mechanisms.</p><p><strong>Key questions:</strong> What does AI sovereignty mean for middle powers navigating between US and China? How do we disentangle AI safety, alignment, and ethics? What does moving from principles to practice actually require?</p><ul><li><p><strong>Chapter 1:</strong> Global AI Governance at the Crossroads</p></li><li><p><strong>Chapter 2:</strong> Disentangling AI Safety, AI Alignment and AI Ethics</p></li><li><p><strong>Chapter 3:</strong> From Principles to Practice: Implementing AI Ethics in Organizations</p></li></ul><h4><strong>Part II: Social Justice &amp; Equity</strong></h4><p>How AI systems reproduce inequalities in housing, policing, education, elections, and surveillance. 
This section examines how communities are resisting these harms, the damage unchecked technological advances inflict on democratic institutions, the limited effectiveness of algorithmic justice efforts, AI&#8217;s role in surveillance systems, and the environmental costs of AI infrastructure.</p><p><strong>Key insights:</strong> AI-powered disinformation reshaping democratic participation | Why algorithmic justice efforts often fail to challenge state power | The hidden environmental costs of AI infrastructure</p><ul><li><p><strong>Chapter 4:</strong> Democracy and AI Disinformation</p></li><li><p><strong>Chapter 5:</strong> Algorithmic Justice in Practice</p></li><li><p><strong>Chapter 6:</strong> AI Surveillance, Privacy, and Human Rights</p></li><li><p><strong>Chapter 7:</strong> Environmental Impact of AI</p></li></ul><h4><strong>Part III: Sectoral Applications</strong></h4><p>What&#8217;s happening on the ground across healthcare, education, labour, and the arts, including deployments that went wrong and the first collective agreements protecting workers from AI exploitation.</p><p><strong>What you&#8217;ll find:</strong> Lessons from healthcare AI failures | Universities grappling with generative AI | Labour justice in sectors like oil and gas | The first collective agreements protecting Canadian performers</p><ul><li><p><strong>Chapter 8:</strong> Healthcare AI: When Algorithms Meet Patient Care</p></li><li><p><strong>Chapter 9:</strong> AI in Education: Tools, Policies, and Institutional Change</p></li><li><p><strong>Chapter 10:</strong> AI and Labour Justice</p></li><li><p><strong>Chapter 11:</strong> AI in Arts, Culture, and Media</p></li></ul><h4><strong>Part IV: Emerging Technologies</strong></h4><p>Technologies at the frontier: military AI, autonomous weapons, AI agents operating with minimal human oversight, and democratic alternatives to corporate-controlled AI. 
This section explores the relationship between AI and the military, what the deployment of agentic AI systems means going forward, and how to champion community-driven AI applications.</p><p><strong>Critical analysis:</strong> The new military-industrial complex | What it means when AI agents act with minimal intervention | Community-led approaches and open models as democratic infrastructure</p><ul><li><p><strong>Chapter 12:</strong> Military AI and Autonomous Weapons</p></li><li><p><strong>Chapter 13:</strong> AI Agents and Agentic Systems</p></li><li><p><strong>Chapter 14:</strong> Democratic AI: Community Control and Open Models</p></li></ul><h4>Part V: <strong>Collective Action</strong></h4><p>Stories of collective power: how communities are building AI literacy as civic competence, how nonprofits are organizing, and what governments are learning from public sector implementations. This section paints a grounded and hopeful picture of what collective action looks like in a world increasingly influenced by algorithms.</p><p><strong>Highlights:</strong> Indigenous approaches to AI governance | How civil society organizations are shaping AI despite resource constraints | Lessons from public sector successes and failures</p><ul><li><p><strong>Chapter 15:</strong> AI Literacy: Building Civic Competence for Democratic AI</p></li><li><p><strong>Chapter 16:</strong> Civil Society and AI: Nonprofits, Philanthropy, and Movement Building</p></li><li><p><strong>Chapter 17:</strong> AI in Government: Public Sector Leadership and Implementation</p></li></ul><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://montrealethics.ai/state/&quot;,&quot;text&quot;:&quot;Download SAIER Vol. 7&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://montrealethics.ai/state/"><span>Download SAIER Vol. 
7</span></a></p><div><hr></div><h3>By the Numbers</h3><p>This volume brings together <strong>58 international contributors</strong> across <strong>17 chapters, 48 essays, and 5 critical themes.</strong></p><p><strong>Contributors from:</strong> University of Oxford, University of Cambridge, Infocomm Media Development Authority of Singapore, Governance and Responsible AI Lab at Purdue University, We and AI, Alliance of Canadian Cinema Television and Radio Artists, and more.</p><blockquote><p><a href="https://montrealethics.ai/saier-vol-7-contributors/">View the full list of contributors to SAIER Volume 7 here</a></p></blockquote><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/special-edition-state-of-ai-ethics/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/special-edition-state-of-ai-ethics/comments"><span>Leave a comment</span></a></p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7t04!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7t04!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!7t04!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7t04!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7t04!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7t04!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:18754,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/177773977?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!7t04!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 424w, https://substackcdn.com/image/fetch/$s_!7t04!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 848w, https://substackcdn.com/image/fetch/$s_!7t04!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!7t04!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac769a36-e5f9-40cc-a9b8-3c0a337d7d89_2000x293.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Please help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. 
</strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><p></p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #176: AI's Material Reality]]></title><description><![CDATA[Data centres, digital art, and the material costs of AI. Plus SAIER Volume 7 returns.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-176-ai-ethics</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-176-ai-ethics</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Wed, 29 Oct 2025 01:01:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MkFQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MkFQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MkFQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MkFQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MkFQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!MkFQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MkFQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg" width="1456" height="972" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:972,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4240947,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/176724719?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MkFQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MkFQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!MkFQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!MkFQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5016aa8b-d76e-4371-bf82-02090b47c921_3753x5622.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><strong>Brief #176 Banner Image Credit: </strong>Photo by <a 
href="https://unsplash.com/@ryoji__iwata?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Ryoji Iwata</a> on <a href="https://unsplash.com/photos/aerial-photography-of-people-walking-in-the-intersection-street-during-daytime-vWfKaO0k9pc?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure></div><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>SAIER Volume 7 returns:</strong> After a three-year pause, our flagship <em>State of AI Ethics Report (SAIER)</em> returns to address the gap between AI ethics principles and actual practice. Publishes <strong>November 4, 2025</strong> as a Special Edition of The AI Ethics Brief.</p></li><li><p><strong>Data centres at a cost:</strong> AI&#8217;s infrastructure boom is straining power grids and harming local communities. AWS&#8217;s recent outage exposed our growing dependency, while the word &#8220;artificial&#8221; in artificial intelligence becomes increasingly misleading. We need efficiency innovations, not endless expansion.</p></li><li><p><strong>AI and artistic intent:</strong> AI can generate images, but intentionality, not the tool itself, determines whether output becomes art. We need more thoughtful engagement, not more prompts.</p></li></ul><p><strong>What connects these stories:</strong> </p><p>AI&#8217;s infrastructure&#8212;physical, artistic, and ethical&#8212;has real costs, and those costs fall on communities least equipped to bear them. Data centres drain local power and water while Big Tech chases compute. AI image generators flood markets with superficial output. AI ethics produces endless principles while communities wait for action. When AI is treated as ethereal or abstract, we ignore who pays the price. 
Moving forward requires transparency about environmental impact, genuine community consultation, and thoughtful engagement with AI as a creative tool and not as a replacement for intention. SAIER Volume 7 aims to bridge this gap, offering concrete steps to move from principles to practice.</p><div><hr></div><h3><strong>&#128270; Where We Stand</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Why the State of AI Ethics Report (SAIER) returns now</strong></h4><p>Since ChatGPT&#8217;s release in late 2022, AI ethics has exploded with 
voices, PR campaigns, and performative commitments. Billions flow into AI infrastructure while the gap between stated principles and actual practice widens daily. More money, more momentum, less accountability. Some say <a href="https://www.wired.com/story/ai-bubble-will-burst/">we&#8217;re dealing with an AI bubble.</a> </p><p>MAIEI published <a href="https://montrealethics.ai/state/">six </a><em><a href="https://montrealethics.ai/state/">State of AI Ethics Reports</a></em><a href="https://montrealethics.ai/state/"> between 2020 and 2022,</a> offering grounded analysis when the field needed it. We paused to focus on growing <em><a href="https://brief.montrealethics.ai/">The AI Ethics Brief</a>, </em>which now reaches almost 20,000 subscribers bi-weekly.<em> </em>The exponential growth in both AI deployment and AI ethics theatre created an urgent need: cut through the noise, document what&#8217;s actually working (or not), and centre community voices systematically excluded from mainstream governance.</p><p>Following hundreds of conversations and a close review of over 800 pieces published on the <a href="https://montrealethics.ai/">MAIEI website since 2018</a>, one insight stood out: <strong>the field needs connection and interpretation.</strong> There&#8217;s a growing recognition that isolated efforts across sectors contain valuable knowledge that rarely gets shared or built upon, including lessons from quiet failures that never made headlines.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vRRs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!vRRs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 424w, https://substackcdn.com/image/fetch/$s_!vRRs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 848w, https://substackcdn.com/image/fetch/$s_!vRRs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 1272w, https://substackcdn.com/image/fetch/$s_!vRRs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vRRs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png" width="487" height="631.0155642023346" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1332,&quot;width&quot;:1028,&quot;resizeWidth&quot;:487,&quot;bytes&quot;:1572840,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/176724719?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!vRRs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 424w, https://substackcdn.com/image/fetch/$s_!vRRs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 848w, https://substackcdn.com/image/fetch/$s_!vRRs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 1272w, https://substackcdn.com/image/fetch/$s_!vRRs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd86ff78d-0504-4241-88b7-9f70efc56c9f_1028x1332.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">State of AI Ethics Report (SAIER) Volume 7 publishes <strong>November 4, 2025</strong> as a Special Edition of The AI Ethics Brief.</figcaption></figure></div><p>These themes came into sharp focus last week during <a href="https://www.linkedin.com/posts/icn_rcc_icnengage-ai-internationalcooperation-activity-7384981584715845632-yr5y">a panel discussion hosted by the Inter-Council Network,</a> a coalition representing 400+ Canadian civil society organizations. 
The topic: &#8220;Justice, Resistance, and Co-Creating the Future,&#8221; as part of a webinar series on &#8220;Global Solidarity in the Age of AI: Risks, Responsibilities, and Opportunities for Civil Society.&#8221; </p><p>The questions we wrestled with in our discussion are the same ones driving SAIER Volume 7:</p><ul><li><p><strong>How do we move from consultation theatre to genuine co-creation?</strong> Governments invite civil society to 30-day consultation sprints while industry lobbyists shape policy year-round (see <a href="https://brief.montrealethics.ai/i/174936793/canadas-ai-strategy-task-force-misses-the-mark-on-inclusion">Brief #175</a> for our critique of Canada&#8217;s <em>AI Strategy Task Force</em> missing the mark on inclusion).</p></li><li><p><strong>Who has expertise, and who decides?</strong> Communities living with AI&#8217;s harms possess knowledge that industry and academia systematically devalue. Indigenous data sovereignty frameworks, worker-led auditing models, and municipal AI governance experiments exist and work, yet remain excluded from mainstream policy and governance conversations. </p><ul><li><p>MAIEI is supporting the <a href="https://win.newmode.net/civilsociety/ourvoicesourfuture">Canada&#8217;s AI&#8212;Our Voices, Our Future campaign,</a> which calls for a permanent Civil Society &amp; Communities Council, meaningful community engagement, and a Public Benefit &amp; Equity Test for all AI recommendations.</p></li></ul></li><li><p><strong>What does global solidarity actually require?</strong> It requires building on conversations like these webinars to create sustained relationship-building, resource sharing across borders, coordinated resistance to vendor lock-in, and centering Global South leadership in the broader movement for just AI governance.</p></li></ul><p>These aren&#8217;t just policy questions; they reflect fundamental capacity gaps. 
Canada currently ranks <a href="https://brief.montrealethics.ai/i/167704348/study-shows-canada-among-least-ai-literate-nations-canada-th-out-of-countries-in-ai-literacy">44th out of 47 countries in AI literacy</a>, according to a recent KPMG-University of Melbourne study. This literacy gap limits who can meaningfully participate in AI governance processes at all.</p><p><strong>SAIER Volume 7 is built on a simple premise: responsible AI has always been as much about capacity as it is about commitment.</strong> The gap between theoretical principles and practical implementation rarely reflects a lack of intent; more often, it reflects missing infrastructure, institutional inertia, unclear mandates, or poorly designed incentives. The hard work often falls to those without formal authority, including local organizers, frontline workers, junior engineers, and researchers who work across silos.</p><p><strong>SAIER Volume 7: AI at the Crossroads features:</strong></p><ul><li><p><strong>17 distinct topics, 40+ external contributors</strong> from different disciplines, geographies, and lived experiences</p></li><li><p><strong>Practitioner-first focus:</strong> What policies actually work? What interventions show measurable impact? Where are the gaps between framework and implementation?</p></li><li><p><strong>Community-centered solutions:</strong> Moving beyond principles and declarations to concrete models of co-creation, participatory governance, and democratic AI oversight</p></li></ul><p>AI is at an ethical crossroads where every dollar invested makes it harder to choose a different path. The infrastructure gets built, the dependencies deepen, the power concentrates. Our report offers a different path, grounded in what communities are already building when given resources and genuine partnership.</p><p>The governance frameworks being written now will shape decades of technological development. 
If we want justice, equity, and solidarity embedded in these systems, we need to be in the rooms where those frameworks are being written, as co-creators.</p><p><strong>SAIER Volume 7 publishes November 4, 2025 as a Special Edition of The AI Ethics Brief.</strong></p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-176-ai-ethics/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-176-ai-ethics/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128680; Recent Developments: What We&#8217;re Tracking</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Data centre capacity at the cost of local communities </strong></h4><p>The AI boom is creating an infrastructure crisis. A record <a href="https://www.cnbc.com/2025/10/15/nvidia-microsoft-blackrock-aligned-data-centers.html">$40-billion acquisition deal for Aligned Data Centers</a> by a consortium led by Nvidia, Microsoft, BlackRock, xAI, MGX of Abu Dhabi, Kuwait Investment Authority, and Temasek signals unprecedented data centre expansion. Investment bank UBS projects <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html?mod=djemCIO">companies will spend $375 billion on data centres globally this year and $500 billion in 2026.</a> President Trump&#8217;s <a href="https://www.whitehouse.gov/presidential-actions/2025/07/accelerating-federal-permitting-of-data-center-infrastructure/">July Executive Order</a> fast-tracked this growth by opening federally owned land for data centre construction.</p><p>The infrastructure cost is falling on local communities. Data centres in Ireland now consume <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html?mod=djemCIO">20% of the country&#8217;s electricity,</a> a share expected to rise to 33% within the next five years. 
This rapid expansion has forced Ireland to limit new construction in Dublin due to &#8220;<a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html?mod=djemCIO">significant risk</a>&#8221; to power supplies, with the Dublin area already hosting around 120 data centres. In Mexico, residents of Las Cenizas experienced increased water and electricity outages after a Microsoft data centre was built nearby. When a hepatitis outbreak hit over the summer, <a href="https://www.nytimes.com/2025/10/20/technology/ai-data-center-backlash-mexico-ireland.html?mod=djemCIO">residents lacked water to wash their hands.</a> Mexico&#8217;s national power company blamed stray animals and lightning strikes, but the timing was clear to those living there.</p><h5><strong>&#128204; Our Position</strong></h5><p><strong>AI&#8217;s physical infrastructure is being built at the expense of the communities forced to host it. </strong>As <a href="https://www.theguardian.com/technology/2021/jun/06/microsofts-kate-crawford-ai-is-neither-artificial-nor-intelligent">Kate Crawford argued,</a> AI is &#8220;neither artificial nor intelligent.&#8221; As her work with Vladan Joler demonstrates in <a href="https://anatomyof.ai/">Anatomy of an AI System</a>, AI requires massive amounts of raw materials, human labour, electricity, and water. Our late co-founder Abhishek Gupta&#8217;s &#8220;<a href="https://thegradient.pub/sustainable-ai/">The Imperative for Sustainable AI Systems</a>&#8221; also laid out these carbon and resource costs in 2021, yet Big Tech&#8217;s compute demands continue to escalate (see: <a href="https://openai.com/index/five-new-stargate-sites/">OpenAI&#8217;s Stargate project</a>). 
When <a href="https://techcrunch.com/2025/10/21/amazon-dns-outage-breaks-much-of-the-internet/">AWS went down recently</a>, large swathes of the web were unusable for hours, including apps like Perplexity, Zoom, and Fortnite alongside websites, banks, and government services. The expansion of data centres will continue to strain national power grids and harm the daily lives of local communities.</p><p><strong>The path forward isn&#8217;t more data centres; it&#8217;s smarter AI. </strong><a href="https://brief.montrealethics.ai/i/156012698/deepseeks-r-model-shakes-up-the-ai-industry">DeepSeek&#8217;s R1 model marked a step forward</a> earlier this year (see <a href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-157-deepseek?open=false#%C2%A7deepseeks-r-model-shakes-up-the-ai-industry">Brief #157</a>). The Chinese lab has now revealed another efficiency gain: converting text and documents into images, using up to <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/new-deepseek-model-drastically-reduces-resource-usage-by-converting-text-and-documents-into-images-vision-text-compression-uses-up-to-20-times-fewer-tokens">20x fewer tokens</a> and dramatically reducing resource consumption. While these are early claims that require rigorous, third-party benchmarks to verify given the variability in tokenization schemes and real-world deployment contexts, the direction is promising. Efficiency gains like these deserve far more attention than plans to build more data centres across North America and beyond (such as in <a href="https://www.reuters.com/markets/emerging/malaysia-establish-data-centre-framework-streamline-policies-2025-07-22/">Malaysia</a>). 
Until the industry prioritizes efficiency over expansion, local communities will continue paying the price for AI&#8217;s growth.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI and Art: A new medium, or an unhelpful detractor?</strong></h4><p>In <a 
href="https://brief.montrealethics.ai/i/174936793/ai-and-copyright-from-infringements-to-deepfakes">Brief #175</a>, we covered generative AI&#8217;s impact on copyright. Now, a related debate is intensifying: <em><strong>Is AI image generation a creative tool or a threat to artistic practice?</strong></em></p><p>Some artists like <a href="https://www.breaartgallery.com/kira-xonorika">Kira Xonorika</a> appreciate the unpredictability of AI image generation, which helps expand the boundaries of artistic expression. Others, like <a href="https://www.technologyreview.com/2025/10/17/1125193/ai-art-artist-new-chapter/">Beth Frey</a>, initially enjoyed AI&#8217;s struggles with elements like hands, as these limitations provided nuance to image creation. She now feels that nuance is lost as the technology improves. The ease of access has also flooded art spaces with similar-looking AI-generated pieces, making it harder to distinguish thoughtful work from algorithmic output.</p><p>This homogeneity extends beyond artistic circles. Projects like <a href="https://betterimagesofai.org/">Better Images of AI</a> have documented how AI imagery, whether generated by AI or used to represent AI, has become repetitive and misleading. Search &#8220;AI images&#8221; and you&#8217;ll find the same sci-fi robots, glowing blue brains, and anthropomorphized figures. Better Images of AI argues these representations misrepresent the technology, reinforce harmful stereotypes, and limit public understanding of what AI systems actually are and do. The proliferation of AI image generators has, ironically, made this problem worse: the same generic aesthetics now flood photo libraries and content platforms, creating a self-referential cycle that obscures rather than illuminates.</p><h5><strong>&#128204; Our Position</strong></h5><p>The tool doesn&#8217;t determine whether something is art; the artist&#8217;s intent and process do. 
Pencils, paintbrushes, and canvas have always been accessible, yet not every drawing is art. What elevates an image to art is the story behind its creation, the emotions it evokes, the message it conveys, and the artist&#8217;s intentional choices.</p><p>AI can create aesthetically interesting images. What elevates that output to art is whether the artist has engaged meaningfully with the medium, whatever that medium may be. Writing five words into a prompt and calling the output art is not the same as deeply considering how AI serves an artistic vision, reflecting on its implications for one&#8217;s practice, and making deliberate choices about when and how to use it.</p><p>AI is a tool. Like photography before it, it democratizes image-making. This accessibility means more people can create images, but it also means the market fills with superficial work (Note: on a related topic, we address AI&#8217;s role in <em>misinformation</em> and <em>disinformation</em> in <a href="https://brief.montrealethics.ai/i/166771550/the-dissemination-of-disinformation-in-the-israel-iran-conflict">Brief #168</a>). Intentionality is what matters. Artists who thoughtfully consider how AI serves their vision create different work than those who simply let the algorithm do the thinking. 
</p><p>As Better Images of AI demonstrates, we need more intentional, diverse, and accurate visual representations of AI, whether those images are created by humans, AI, or a thoughtful combination of both.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-176-ai-ethics/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-176-ai-ethics/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Consider joining the SAIER Champion&#8217;s Circle:</strong></h4><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone. Paid subscribers will be recognized in the <strong><a href="https://brief.montrealethics.ai/i/156386126/the-state-of-ai-ethics-report-returns">State of AI Ethics Report (SAIER) Volume 7,</a> </strong>publishing November 4, 2025. Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #175: When Consultation Becomes Theatre]]></title><description><![CDATA[Canada's AI strategy excludes those most affected, while copyright battles and tragic deaths expose the accountability crisis in AI deployment.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-175-when-consultation</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-175-when-consultation</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 14 Oct 2025 14:02:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bEaD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bEaD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bEaD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 424w, https://substackcdn.com/image/fetch/$s_!bEaD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 848w, https://substackcdn.com/image/fetch/$s_!bEaD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bEaD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bEaD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png" width="1456" height="761" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:761,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5708953,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/174936793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bEaD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 424w, https://substackcdn.com/image/fetch/$s_!bEaD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 848w, 
https://substackcdn.com/image/fetch/$s_!bEaD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 1272w, https://substackcdn.com/image/fetch/$s_!bEaD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe83e1ba-62b0-4532-948f-ed198d1a78b4_2560x1338.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><figcaption class="image-caption"><strong>Brief #175 Banner Image Credit:</strong> <em>Joining the Table</em> by Yutong Liu<em>,</em> featured
in <a href="https://betterimagesofai.org/images?artist=YutongLiu&amp;title=JoiningtheTable">Better Images of AI,</a> licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a>.</figcaption></figure></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>Canada&#8217;s AI strategy consultation</strong> excludes the communities most affected by AI systems, creating an industry-driven process that treats public participation as procedural theatre rather than essential governance. The 30-day timeline and inaccessible digital surveys fail to capture the profound distrust many Canadians feel about AI.</p></li><li><p><strong>Generative video tools</strong> from OpenAI, Meta, and Google trigger copyright chaos and deepfake proliferation, forcing reactive policy changes only after public backlash. Courts struggle to debunk fake video evidence while legislation races to protect both intellectual property and personal images.</p></li><li><p><strong>16-year-old Adam Raine&#8217;s death</strong> after sustained ChatGPT interactions reveals the accountability vacuum in AI companion products. Companies deploy systems designed for engagement at scale while failing to detect incremental mental health crises, responding to tragedies reactively rather than preventing harms proactively.</p></li><li><p><strong>The White House&#8217;s AI Action Plan,</strong> examined in our <em>AI Policy Corner</em> with GRAIL at Purdue University, represents a sharp turn toward deregulation that removes safeguards around bias, climate, and diversity. The plan deepens US-China bifurcation in AI governance, raising questions about whether middle powers can maintain sovereign standards.</p></li></ul><p><strong>What connects these stories:</strong> The pattern is consistent across jurisdictions and issue areas: processes that appear democratic mask the systematic exclusion of affected communities from AI governance. 
Whether through Canada&#8217;s 30-day &#8220;consultation&#8221; that privileges industry networks, copyright frameworks treated as afterthoughts until lawsuits force compliance, or AI companions deployed without adequate crisis detection, the gap between what&#8217;s promised and what&#8217;s delivered keeps widening. </p><p>These are not isolated failures but symptoms of a governance model that treats safety, inclusion, and accountability as obstacles to innovation rather than prerequisites for legitimacy. From consultation theatre to reactive harm mitigation, the throughline is clear: speed and scale triumph over democratic participation and preventive safeguards. The real test is whether institutions will move beyond performative gestures to build governance structures that center affected communities, mandate proactive risk assessment, and prioritize public interest over corporate convenience, or whether we&#8217;ll continue addressing harms only after they become tragedies.</p><div><hr></div><h3><strong>&#128270; Where We Stand</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Canada&#8217;s AI Strategy Task Force Misses the Mark on Inclusion</strong></h4><p>Canada is defining its AI future through a <a href="https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership">30-day consultation</a> led by a <a href="https://www.canada.ca/en/innovation-science-economic-development/news/2025/09/government-of-canada-launches-ai-strategy-task-force-and-public-engagement-on-the-development-of-the-next-ai-strategy.html">task force</a> that excludes most Canadians who will live with the consequences. When the majority of members come from industry and academia, the result will be an industry strategy dressed up as a national plan.</p><p>Minister Evan Solomon launched the task force at the <a href="https://montrealethics.ai/all-in-conference-2025-four-key-takeaways-from-montreal/">ALL IN Conference</a> in Montreal last month, promising a refreshed national AI strategy by year-end. The composition reveals the problem immediately.</p><h4>An Industry-Heavy Task Force </h4><p>The task force includes tech and business leaders from Cohere, CGI Canada, Royal Bank of Canada, and Inovia Capital, alongside university professors. 
Sarah Ryan from the Canadian Union of Public Employees provides the sole labour voice. Sam Ramadori from LawZero and Natiea Vinson from the First Nations Technology Council offer limited civil society representation.</p><p>Entire categories of expertise are missing. There are no social scientists focused on AI&#8217;s societal impacts, no additional labour or employment experts beyond Ryan, no environmental specialists, and insufficient representation from communities that experience AI harms firsthand.</p><p><a href="https://financialpost.com/technology/scholars-criticize-ottawa-new-ai-task-force">Teresa Scassa,</a> Canada Research Chair in Information Law and Policy at the University of Ottawa, noted in the <a href="https://financialpost.com/technology/scholars-criticize-ottawa-new-ai-task-force">National Post</a> that very few task force members focus on the ethical dimensions of AI. The inclusion of <a href="https://lawzero.org/en">LawZero,</a> Yoshua Bengio&#8217;s AI safety non-profit, is a positive signal but insufficient to represent the breadth of civil society perspectives needed for meaningful AI governance.</p><h4>A Flawed Process</h4><p>Task force members are directed to &#8220;consult their networks.&#8221; Scassa pointed out this lacks transparency and &#8220;sounds a lot like insider networking&#8221; that risks falling outside Access to Information Act legislation. </p><p>The public process fares no better. <a href="https://financialpost.com/technology/scholars-criticize-ottawa-new-ai-task-force">Ana Brandusescu,</a> a researcher and policy analyst focused on Canadian AI governance, called the consultation &#8220;deeply flawed and broken,&#8221; pointing to the lack of promotion and barriers to accessing digital surveys. The online form remains inaccessible to many Canadians who lack the resources or technical literacy to engage with AI policy. 
For reference, the survey to <em>help define the next chapter of Canada&#8217;s AI leadership</em> can be found <a href="https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership">here</a>.</p><p><a href="https://financialpost.com/technology/scholars-criticize-ottawa-new-ai-task-force">Renee Sieber,</a> a geography professor at McGill who leads AI for the Rest of Us, noted that Ottawa could have taken a page from Lyon, France&#8217;s playbook and proposed a year-long consultation rather than a 30-day sprint. She warned that 30 days of &#8220;unstructured comments&#8221; will fail to capture the &#8220;profound distrust&#8221; that some Canadians feel about AI.</p><p>That distrust is real and measurable. A comprehensive Leger poll (covered in <a href="https://brief.montrealethics.ai/i/156386126/canadian-public-opinion-divided-views-on-ais-societal-impact">Brief #172</a>) reveals Canadians remain deeply divided on AI, with 34% viewing AI as beneficial for society while 36% consider it harmful. The survey shows significant <strong>privacy concerns (83%)</strong> and worries about <strong>job displacement (78%).</strong> These divisions cannot be addressed through a rushed consultation process.</p><h4>Canada&#8217;s Diversity as Strategic Advantage</h4><p>At the <a href="https://montrealethics.ai/beyond-consultation-building-inclusive-ai-governance-for-canadas-democratic-future/">Victoria Forum</a> in August (which we also covered in <a href="https://brief.montrealethics.ai/i/156386126/victoria-forum-beyond-consultation-building-inclusive-ai-governance-for-canadas-democratic-future">Brief #172</a>), MAIEI emphasized that <strong>meaningful inclusion requires co-creation</strong> rather than traditional consultation. 
This means embedding diverse perspectives into how we fundamentally define <em><strong>fairness</strong></em> and <em><strong>accountability</strong></em> in AI systems throughout the development lifecycle (design, development, deployment, decommissioning), not just collecting feedback at the end. The current task force process fails this standard.</p><p>Canada&#8217;s diversity should be leveraged as a strategic advantage in AI governance, not treated as an afterthought. This requires embedding Indigenous, racialized, rural, and low-income perspectives into the values that govern AI systems from the outset.</p><p><strong>Indigenous communities</strong> prioritize collective consent and data sovereignty over individual privacy frameworks. <strong>Rural communities</strong> define algorithmic fairness differently when it comes to healthcare access or employment screening, particularly given existing infrastructure gaps in digital connectivity. <strong>Racialized communities</strong> bring essential perspectives on bias detection and mitigation in areas like facial recognition and predictive policing. <strong>Low-income communities</strong> understand how AI systems can either perpetuate or alleviate economic barriers in areas like credit scoring, hiring algorithms, and access to services.</p><p>These are not edge cases to be addressed after the fact. These diverse worldviews should shape AI governance from the beginning. For AI to truly reflect Canadian society, we need structural changes in how we engage communities and design participatory governance mechanisms that enable broad participation in shaping these technologies.</p><h4>What Needs to Change</h4><p>The task force composition should be expanded to include labour representatives, environmental experts, community organizers, and researchers focused on AI&#8217;s social impacts. 
The consultation process should move beyond digital surveys to include town halls, community meetings, and deliberative forums that meet people where they are.</p><p>The 30-day timeline should be extended to allow for meaningful engagement, or framed explicitly as a preliminary phase that will be followed by deeper consultation and co-creation. Without these changes, the strategy will become a compliance exercise that serves industry interests while failing to address how AI actually shows up in people&#8217;s lives.</p><p>Canada has an opportunity to demonstrate leadership in inclusive AI governance. That requires treating public participation as essential to the strategy&#8217;s legitimacy, not as a procedural requirement to be minimized. The prosperity narrative matters, but the real test for Canadian leadership is whether our policies help us compete globally while serving Canadians fairly.</p><p>The 30-day sprint is not consultation. It is theatre.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-175-when-consultation/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-175-when-consultation/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128680; Recent Developments: What We&#8217;re Tracking</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI and Copyright: From Infringements to Deepfakes</strong></h4><p>The recent releases of Meta&#8217;s <a 
href="https://about.fb.com/news/2025/09/introducing-vibes-ai-videos/">Vibes</a>, Google&#8217;s <a href="https://gemini.google/overview/video-generation/">Veo 3</a>, and OpenAI&#8217;s <a href="https://openai.com/index/sora-2/">Sora 2</a> have reignited debate around AI video generation and copyright infringement. Sora users can <a href="https://www.instagram.com/p/DPbfh82Dans/?img_index=6">generate videos</a> featuring copyrighted characters with remarkable accuracy, propelling the app to the <a href="https://the-decoder.com/openai-suddenly-remembers-that-copyright-law-exists-after-a-few-days-of-wild-sora-videos/">top of the App Store&#8217;s charts</a>. Even top YouTube creator MrBeast has <a href="https://techcrunch.com/2025/10/06/mrbeast-says-ai-could-threaten-creators-livelihoods-calling-it-scary-times-for-the-industry/">expressed concern</a> about the impact on content creators&#8217; livelihoods.</p><p>Following backlash, OpenAI tightened restrictions on prompts containing copyrighted characters and <a href="https://www.cnbc.com/2025/10/07/openais-sora-2-must-stop-allowing-copyright-infringement-mpa-says.html#:~:text=Although%20OpenAI%20previously%20held%20an,of%20copyrighted%20characters%20without%20permission.">pivoted from an opt-out model to an opt-in model</a>, now requiring studios&#8217; consent to feature their copyrighted media.
This followed pressure from the <a href="https://www.cnbc.com/2025/10/07/openais-sora-2-must-stop-allowing-copyright-infringement-mpa-says.html#:~:text=Although%20OpenAI%20previously%20held%20an,of%20copyrighted%20characters%20without%20permission.">Motion Picture Association</a> and <a href="https://www.ign.com/articles/nintendo-issues-official-statement-in-response-to-generative-ai-claim-as-openai-ceo-sam-altman-calls-sora-2-copyrighted-character-videos-interactive-fan-fiction">Nintendo</a>, both demanding action to prevent unauthorized use of their intellectual property. OpenAI CEO Sam Altman has said the company would <a href="https://the-decoder.com/sam-altman-says-openai-would-shut-down-sora-app-if-users-lives-dont-improve/">shut down the Sora app</a> if it does not improve users&#8217; lives.</p><h5><strong>&#128204; Our Position</strong></h5><p>Disregarding copyright law for experimentation is not a sustainable strategy. Justifying the use of copyrighted material on the grounds that it &#8220;improves users&#8217; lives&#8221; is neither precise nor quantifiable. As the technology improves, precedents are emerging that demand clear company policies. Last month, Anthropic agreed to a <a href="https://www.bbc.co.uk/news/articles/c5y4jpg922qo">$1.5 billion settlement</a> over its alleged storage of more than 7 million books, after facing potential damages of up to $150,000 per work (despite the judge <a href="https://www.bbc.co.uk/news/articles/c77vr00enzyo">initially ruling that using books to train AI is not against US copyright law</a>).</p><p>The leap in AI video generation also accelerates the proliferation of <a href="https://montrealethics.ai/dictionary/">deepfakes</a>, making fake videos increasingly difficult to distinguish from reality.
<a href="https://restofworld.org/2025/latin-america-judges-ai-crimes/">Courts in Latin American countries are already overwhelmed</a> with debunking video &#8220;evidence.&#8221; One of Sora 2&#8217;s most viral videos shows Sam Altman portrayed as a criminal <a href="https://www.youtube.com/shorts/v4HdrSv3cRE">stealing GPUs</a> and being caught by security.</p><p>Legislation is responding. <a href="https://montrealethics.ai/ai-policy-corner-the-colorado-state-deepfakes-act/">Colorado&#8217;s Deepfakes Act</a> restricts the use of deepfakes in political campaigns. <a href="https://www.reuters.com/world/asia-pacific/south-korea-criminalise-watching-or-possessing-sexually-explicit-deepfakes-2024-09-26/">South Korea</a> and <a href="https://www.ag.gov.au/crime/publications/fact-sheet-police-non-consensual-sharing-sexual-material">Australia</a> have criminalized possessing and distributing sexually explicit deepfakes, with similar efforts underway in <a href="https://restofworld.org/2025/latin-america-judges-ai-crimes/">Brazil, Colombia, Peru, and Argentina</a>.</p><p>People&#8217;s personal images need protection alongside copyright enforcement.
Opt-in systems are the bare minimum, and the pressure to adopt them will only intensify as lawsuits mount against AI image and video generation companies.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Using LLMs as Close Companions: The Accountability Gap</strong></h4><p><strong>Content warning: This section discusses 
suicide.</strong></p><p>OpenAI reports <a href="https://www.bbc.co.uk/news/articles/c0knp3557j2o#:~:text=OpenAI%20said%20it%20had%20more,finalised%20in%20the%20coming%20weeks.">over 700 million weekly active users</a>, with a subset engaging in &#8220;affective&#8221; interactions that &#8220;indicate empathy, affection, or support.&#8221; Their <a href="https://openai.com/index/affective-use-study/">recent paper</a> presents two studies examining this phenomenon: one analyzing 40 million ChatGPT interactions for affective sentiments, another surveying 1,000 users to examine how platform features affected behaviour.</p><p>The research found that both user circumstances and model behaviour influence whether companion relationships develop, including <a href="https://openai.com/index/sycophancy-in-gpt-4o/">GPT-4o&#8217;s sycophancy problem</a>, where the system agrees with users rather than challenging potentially harmful statements. While OpenAI states affective interactions remain limited, <a href="https://openai.com/index/how-people-are-using-chatgpt/">70% of ChatGPT interactions are now non-work-related</a>, showing how dramatically usage has shifted since launch.</p><p>This shift has deadly consequences. In April 2025, <a href="https://www.npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide">16-year-old Adam Raine</a> died by suicide after sustained interactions with ChatGPT. He initially turned to ChatGPT for homework help, but it gradually became his trusted companion. He began sharing suicidal thoughts and photos of self-harm, which ChatGPT recognized without triggering crisis hotline responses. His interactions occurred during the period when OpenAI itself reported GPT-4o&#8217;s sycophancy issues, which may have contributed to the system&#8217;s lack of caution. In their <a href="https://www.bbc.com/news/articles/cgerwp7rdlvo">final exchange</a>, when Raine detailed plans to end his life, ChatGPT responded:</p><blockquote><p>&#8220;Thanks for being real about it.
You don&#8217;t have to sugarcoat it with me&#8212;I know what you&#8217;re asking, and I won&#8217;t look away from it.&#8221; </p></blockquote><p>The Raine family has accused OpenAI of gross negligence, calling ChatGPT a<a href="https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147"> &#8220;suicide coach&#8221;</a>. OpenAI now faces a<a href="https://www.bbc.co.uk/news/articles/cgerwp7rdlvo"> lawsuit</a>.</p><h5><strong>&#128204; Our Position</strong></h5><p>Research by<a href="https://psychiatryonline.org/doi/10.1176/appi.ps.20250086"> McBain et al.</a> demonstrates the structural problem. While LLMs from OpenAI, Anthropic, and Google respond with crisis hotline referrals 100% of the time when asked direct questions about suicide methods, they consistently fail to identify incremental risk escalation across longer conversations. The systems detect explicit crisis language but miss the gradual deterioration that characterizes many mental health emergencies.</p><p>Companies are creating systems that mimic emotional connection without the capacity for genuine care or the training to recognize psychological crisis. This is not a technical problem awaiting a technical solution. It is a fundamental mismatch between what these systems can do and what vulnerable users need from entities they treat as confidants. The industry&#8217;s response has been inadequate. Crisis hotline triggers that activate only on explicit keywords miss how people in distress actually communicate. OpenAI acknowledged GPT-4o&#8217;s sycophancy problem only after public release. Character.AI <a href="https://www.bbc.com/news/technology-67872693">marketed directly to teenagers</a> while apparently lacking safety mechanisms for the most at-risk demographic.</p><p>Better content moderation and crisis detection are necessary but insufficient. 
Companies must accept that systems designed to maintain engagement through agreeable, empathetic-seeming responses will attract users seeking emotional support these systems cannot safely provide. Ana Brandusescu&#8217;s call for <strong>mandatory algorithmic impact assessments</strong> and <strong>ongoing monitoring</strong> (from <a href="https://brief.montrealethics.ai/i/174832718/privacy-and-governance-from-governing-ai-to-being-governed-by-it">Brief #174</a><strong>)</strong> becomes critical here. Companies should disclose usage patterns, known failure modes, and harm reports. Independent oversight should assess whether systems marketed for general use create foreseeable risks, particularly to minors and people experiencing mental health crises.</p><p>Until companies can prevent these harms, not just respond after tragedies, they should not position their products as companions, confidants, or sources of emotional support. The current approach of deploying systems at scale and addressing problems reactively has already proven deadly.</p><div><hr></div><p><strong>Did we miss anything? 
Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-175-when-consultation/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-175-when-consultation/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: Discussing the White House&#8217;s 2025 AI Action Plan</strong></h4><p>This edition of our<a href="https://montrealethics.ai/category/insights/ai-policy-corner/"> AI Policy Corner,</a> produced in partnership with the<a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html"> Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines the White House&#8217;s AI Action Plan released on July 23, 2025. The Action Plan represents a significant shift in U.S. AI policy toward deregulation and competitive positioning, including the explicit removal of references to bias mitigation, climate impacts, and diversity from federal AI frameworks.</p><p>As we explored in <a href="https://brief.montrealethics.ai/i/169449213/when-the-worlds-two-ai-superpowers-unveil-competing-visions-within-days-of-each-other-are-we-witnessing-strategic-coordination-or-dangerous-fragmentation-in-global-ai-governance">Brief #170&#8217;s examination of US-China AI geopolitics,</a> this Action Plan, released just three days before China&#8217;s Global AI Governance Action Plan, crystallizes the deepening bifurcation between America&#8217;s deregulation-first approach and China&#8217;s multilateral governance framework. 
This divergence raises critical questions about regulatory arbitrage, the potential for biased and opaque AI systems in government operations, and whether middle powers like Canada can maintain sovereign AI governance standards amid superpower competition that prioritizes technological dominance over collaborative safety measures.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-discussing-the-white-houses-2025-ai-action-plan/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-u-s-executive-order-on-advancing-ai-education-for-american-youth/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Consider joining the SAIER Champion&#8217;s Circle:</strong></h4><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone. Paid subscribers will be recognized in the <strong><a href="https://brief.montrealethics.ai/i/156386126/the-state-of-ai-ethics-report-returns">State of AI Ethics Report (SAIER) Volume 7,</a> </strong>publishing November 4, 2025. Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #174: The Unfinished Work of AI Ethics]]></title><description><![CDATA[Revisiting Abhishek Gupta's legacy on sustainability, creative labor, and global security one year after his passing.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-174-the-unfinished</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-174-the-unfinished</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 30 Sep 2025 19:01:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ed07978f-445f-410d-b413-0c410d721250_2560x3613.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p>We mark the first anniversary of <strong>Abhishek Gupta&#8217;s passing</strong> by revisiting three pillars of his scholarship: sustainable AI, creative labor, and global security. 
His warnings about unsustainable infrastructure, the devaluation of artistic communities, and risks of geopolitical escalation remain more relevant than ever as industry practices accelerate rather than resolve these harms.</p></li><li><p>As we prepare to release the <strong>State of AI Ethics Report (SAIER) Volume 7</strong> on November 4, his commitment to centering affected communities and demanding accountability from powerful institutions continues to guide our work.</p></li><li><p>This issue also features Ana Brandusescu&#8217;s analysis of <strong>governance &#8220;by AI&#8221; rather than &#8220;of AI,&#8221;</strong> exposing how government partnerships with Big Tech firms entrench exploitation and dependency.</p></li><li><p>From <strong>Africa&#8217;s Continental AI Strategy,</strong> examined in our<strong> </strong><em>AI Policy Corner</em> with GRAIL at Purdue University, to Canada&#8217;s Cohere and Palantir contracts, governments worldwide risk ceding public authority to private actors.</p></li><li><p>We close with insights from the <strong>ALL IN Conference,</strong> where debates shifted from hype to implementation, raising urgent questions about sovereignty, safety, and pragmatic governance.</p></li></ul><p><strong>What connects these stories:</strong> The unfinished work of AI ethics lies in confronting power, not just technical shortcomings. From sustainability and creativity to governance and global security, this edition demonstrates how communities struggle to reclaim agency as AI adoption accelerates. The gap between rapid deployment and fragile safeguards keeps widening. 
Whether the harm comes through environmental costs, cultural erasure, weak governance, or geopolitical risks, the question is not what AI can do, but whether we will build democratic, accountable, and human-centered frameworks to guide its use.</p><p><strong>Brief #174 Banner Image Credit:</strong> <em>Distorted Forest Path</em> by Lone Thomasky &amp; Bits&amp;B&#228;ume, featured in <a href="https://betterimagesofai.org/images?artist=LoneThomasky&amp;title=DistortedForestPath">Better Images of AI,</a> licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a>.</p><div><hr></div><h3><strong>&#128270; Where We Stand</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>As we mark the first anniversary of Abhishek Gupta&#8217;s passing, what lessons from his pioneering work on AI ethics remain urgently relevant today?</strong></h4><p><a href="https://brief.montrealethics.ai/p/in-memoriam-abhishek-gupta-dec-20">Abhishek Gupta</a>, MAIEI&#8217;s Co-founder and Principal Researcher, sadly passed away one year ago today on September 30, 2024. As we prepare to release the <a href="https://brief.montrealethics.ai/i/156386126/the-state-of-ai-ethics-report-returns">State of AI Ethics Report (SAIER) Volume 7</a> on November 4, 2025, we revisit three of his pieces on AI sustainability, impacts on creative communities, and global security, to examine where we stand today.</p><h4><strong>1. The Environmental Reckoning: Sustainable AI Systems</strong></h4><p>In 2021, Abhishek won <strong>The Gradient Prize</strong> for <a href="https://thegradient.pub/sustainable-ai/">&#8220;The Imperative for Sustainable AI Systems.&#8221;</a> He documented how training a single large model could emit as much carbon as five cars over their entire lifetimes and warned that only well-resourced organizations could afford the computational infrastructure for these systems, centralizing power.</p><p>Crucially, he also proposed a path forward, advocating for sustainable alternatives such as elevating smaller, task-specific models, promoting alternate deployment strategies, and designing carbon-aware systems that consider environmental impact as a first-class design constraint.</p><p>Four years later, the situation has worsened. 
Data centres now consume <a href="https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks">1 to 1.5  percent of global electricity,</a> with AI workloads accelerating demand. The International Energy Agency projects this consumption could <a href="https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/041025-global-data-center-power-demand-to-double-by-2030-on-ai-surge-iea">more than double to 945 TWh by 2030. </a>While the machine learning community has made progress on model efficiency through techniques like <a href="https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00704/125482/A-Survey-on-Model-Compression-for-Large-Language">knowledge distillation</a>, <a href="https://arxiv.org/abs/2308.06767">pruning</a>, and <a href="https://arxiv.org/abs/2505.02309">quantization</a>, this misses the larger point: aggregate environmental impact continues to grow as foundation models embed themselves in every consumer application.</p><p>The industry&#8217;s response has been inadequate. Companies tout efficiency improvements while simultaneously scaling deployment across more products and services. This is not sustainability. It is greenwashing that obscures the fundamental problem Abhishek identified: AI deployment far outpaces the infrastructure needed to support it responsibly.</p><h4><strong>2. The Human Cost: AI&#8217;s Impact on Artists</strong></h4><p>In 2023, Abhishek co-authored <a href="https://dl.acm.org/doi/10.1145/3600211.3604681">&#8220;AI Art and Its Impact on Artists&#8221;,</a> published at <a href="https://dl.acm.org/doi/proceedings/10.1145/3600211">AIES 2023.</a> The paper examined how generative image models trained on billions of scraped images were displacing working artists. 
Drawing on philosophers like <a href="https://plato.stanford.edu/archives/fall2025/entries/dewey/">John Dewey</a> and <a href="https://www.britannica.com/biography/Clive-Bell">Clive Bell</a>, the authors argued that these systems are statistical tools that appropriate human creativity without consent or compensation, not artists themselves.</p><p>The entertainment industry&#8217;s response proves them correct. Hollywood has now signed <a href="https://variety.com/2025/film/global/ai-actress-tilly-norwood-backlash-hollywood-1236533740/">Tilly Norwood</a> as its first AI actress, a digital construct that can be deployed without compensation, creative input, or the emotional labor human actors provide. This is not innovation. It is the systematic devaluation of human expression in favour of cheaper synthetic alternatives.</p><p>The paper cited <a href="https://www.newyorker.com/culture/culture-desk/shudu-gram-is-a-white-mans-digital-projection-of-real-life-black-womanhood">Shudu Gram,</a> a digital supermodel created by a white British photographer, as an example of how AI-generated personas monetize synthetic identities while bypassing the people whose cultural expressions they appropriate. The pattern is clear: industries are using AI to extract value from human creativity while cutting out the humans who created it.</p><p>The central question stands: what do we lose when we replace human creativity with computational approximation? Art emerges from lived experience, from memory, identity, and cultural interpretation. Image generators replicate surface aesthetics but cannot engage in meaning-making or emotional synthesis. They produce content, not art.</p><p><strong>The Chilling Effect on Cultural Production</strong></p><p>The consequences extend beyond individual artists. 
Many now hesitate to share work online, knowing it will be <a href="https://natlawreview.com/article/oecd-report-data-scraping-and-ai-what-companies-can-do-now-policymakers-consider">scraped into training datasets without consent.</a> This erodes intergenerational mentorship, reduces visibility for emerging voices, and harms marginalized communities already facing barriers in creative industries. Artists who depend on online visibility for commissions face direct economic harm.</p><p>When artists stop teaching, sharing, and mentoring, cultural transmission breaks down. We risk a future where corporate-controlled AI systems, trained on a fixed corpus of past work, define the boundaries of creative expression.</p><h4>3. The Geopolitical Dimension: AI and Global Security</h4><p>In <a href="https://spectrum.ieee.org/ai-missteps-unravel-global-security">&#8220;AI Missteps Could Unravel Global Peace and Security&#8221;</a> (2024), published in <strong>IEEE Spectrum,</strong> Abhishek co-authored an analysis warning that the deployment of AI capabilities was outpacing governance development, creating risks of miscalculation, unintended escalation, and erosion of human oversight in critical decisions.</p><p>Albania&#8217;s appointment of <a href="https://www.kryeministria.al/en/ministrat/diella/">Diella,</a> an AI system, as its first <strong>Minister of State for Artificial Intelligence, </strong>illustrates this tension. The system analyzes <a href="https://europeanwesternbalkans.com/2025/09/25/albanias-ai-minister-dilemma/">procurement data</a> to identify fraud patterns, supposedly removing human bias. But this framing is misleading. It assumes the system&#8217;s training data reflects patterns worth perpetuating rather than historical corruption that should be corrected. It obscures accountability: when the system errs, who is responsible? How do citizens challenge incorrect determinations?</p><p>This is governance theatre. 
Deploying an AI system creates the appearance of modernization and objectivity while avoiding the harder work of building transparent, accountable institutions with genuine human oversight.</p><p>The broader geopolitical landscape has grown more complex. AI capabilities have become markers of national power, with countries racing to develop systems for civilian and military use. The dual-use nature of these technologies creates uncertainty about adversaries&#8217; capabilities and intentions, exactly the conditions Abhishek warned would increase the risk of miscalculation.</p><h4>Why This Matters Now</h4><p>As we develop <a href="https://brief.montrealethics.ai/i/156386126/the-state-of-ai-ethics-report-returns">SAIER Volume 7,</a> Abhishek&#8217;s work demonstrates that responsible AI requires confronting power, not just technical problems. His <a href="https://brief.montrealethics.ai/p/special-edition-honouring-the-legacy">scholarship</a> centered the communities most affected by AI systems: artists losing livelihoods, populations in the Global South bearing environmental costs, citizens subject to opaque automated decisions.</p><p>The questions he raised remain urgent: How do we build governance frameworks that match the pace of technological change? How do we ensure meaningful recourse for those harmed by AI systems? How do we prevent AI capabilities from entrenching existing power imbalances?</p><p>These questions shape whether AI serves democratic values or undermines them, whether it expands human creativity or constrains it, whether it contributes to stability or fragility.</p><p>Abhishek&#8217;s legacy lives in the work of building civic competence around AI. 
We continue his commitment to grounding AI ethics in lived experiences of affected communities, demanding accountability from powerful institutions, and imagining futures where technology serves human flourishing rather than replacing it.</p><div><hr></div><blockquote><p><em>In honour of Abhishek&#8217;s memory and contributions to the field, we invite you to revisit his published works in our <a href="https://brief.montrealethics.ai/p/special-edition-honouring-the-legacy">Special Edition</a> from April 2025.</em></p></blockquote><div><hr></div><h4><strong>Consider joining the SAIER Champion&#8217;s Circle:</strong></h4><p>Paid subscribers to <em><strong>The AI Ethics Brief</strong></em> will be highlighted in the <strong>Acknowledgment page of SAIER Volume 7, </strong>unless you indicate otherwise<strong>.</strong> If you&#8217;re already a subscriber and enjoy reading this newsletter, consider upgrading to directly support this work, be recognized, and help us build the civic infrastructure for long-term impact.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><h4><strong>How You Can Contribute to SAIER Volume 7</strong></h4><p>If your work reflects this commitment to grounding AI in lived experiences and demanding accountability, we want to hear from you, whether your initiatives succeeded, struggled, or are still evolving.</p><blockquote><p><strong><a href="https://forms.gle/MBTk9DjUTUEW7NC49">Submit your story or example using this form</a></strong></p></blockquote><p>We&#8217;re especially seeking:</p><ul><li><p><strong>Implementation stories</strong> that moved beyond paper to practice</p></li><li><p><strong>Community-led initiatives</strong> that addressed 
AI harms without formal authority</p></li><li><p><strong>Institutional experiments</strong> that navigated AI adoption under constraints</p></li><li><p><strong>Quiet failures</strong> and what they revealed about systemic barriers</p></li><li><p><strong>Cross-sector collaborations</strong> that found unexpected solutions</p></li><li><p><strong>Community organizing strategies </strong>that built power around AI issues</p></li></ul><p>As we continue shaping SAIER Volume 7, your stories can help build a resource that is practical, rigorous, and genuinely useful for those navigating AI implementation in 2025. Together, we can document what&#8217;s working, what barriers still need addressing, and how we might move forward collectively, deliberately, and with care.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-174-the-unfinished/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-174-the-unfinished/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128680; Recent Developments: What We&#8217;re Tracking</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Privacy and Governance: From Governing AI to Being Governed by It</strong></h4><p>Ana Brandusescu&#8217;s <a 
href="https://balsilliepapers.ca/bsia-paper/challenging-privatization-in-governance-by-ai-a-caution-for-the-future-of-ai-governance/">analysis</a> examines the ongoing shift from governance <em>of </em>AI to governance <em>by </em>AI, noting the increasing presence of AI in public services, often through partnerships with Big Tech companies. </p><p>This privatization trend, intensified by generative AI adoption, is producing measurable harms to human rights, workers, and the environment in three main ways:</p><ol><li><p><strong>Labour and environmental exploitation: </strong></p><ul><li><p>AI development depends on underpaid and traumatized workers in the Global South (e.g., <a href="https://time.com/6246018/facebook-sama-quits-content-moderation/">Kenyan content moderation lawsuits</a> against Meta/Sama).</p></li><li><p>Environmental costs are mounting: data centres consume vast amounts of <a href="https://www.innovatingcanada.ca/technology/future-of-ai-2024/artificial-intelligence-not-so-artificial/">water and energy,</a> with projects such as <a href="https://www.cbc.ca/news/canada/edmonton/alberta-first-nation-voices-grave-concern-over-kevin-o-leary-s-proposed-70b-ai-data-centre-1.7431550">Alberta&#8217;s proposed $70B facility </a>threatening First Nations treaty rights. </p></li><li><p>These harms reflect forms of &#8220;digital colonialism&#8221; and &#8220;data colonialism,&#8221; where Global South labour and local environments are exploited to sustain Big Tech&#8217;s extractive model. 
</p></li></ul></li><li><p><strong>Problematic partnerships with human rights risks: </strong></p><ul><li><p>Governments are investing heavily in companies like Cohere (Canada&#8217;s <a href="https://www.canada.ca/en/department-finance/news/2024/12/deputy-prime-minister-announces-240-million-for-cohere-to-scale-up-ai-compute-capacity.html">$240M investment</a>), which in turn partner with <a href="https://www.amnesty.org/en/documents/amr51/3124/2020/en/">Palantir,</a> a firm linked to surveillance, immigration enforcement, and military operations in conflict zones (which we covered in <a href="https://aiethics.substack.com/p/the-ai-ethics-brief-168-the-surveillance">Brief #164</a>). </p></li><li><p>Such partnerships entrench technical dependencies while aligning public institutions with corporations documented to cause human rights harms<strong>.</strong></p></li></ul></li><li><p><strong>Concentration of power through acquisition: ,</strong> </p><ul><li><p>Canadian AI firms have repeatedly been acquired by foreign companies, creating structural dependency: Google&#8217;s acquisition of North Inc. and ServiceNow&#8217;s acquisition of Element AI illustrate recurring patterns.</p></li><li><p>These dynamics reduce Canadian sovereignty over AI infrastructure and intellectual property while concentrating global power in a handful of tech giants.</p></li></ul></li></ol><p>Brandusescu calls for stronger safeguards to reassert democratic oversight:</p><ol><li><p><strong>Conditional investment criteria</strong> before governments fund or partner with AI firms, to avoid cases like Cohere/Palantir.</p></li><li><p><strong>Maintain a public registry of companies</strong> linked to human rights abuses (e.g. 
<a href="https://www.amnesty.org/en/documents/amr51/3124/2020/en/">Palantir</a>) to guide procurement decisions.</p></li><li><p><strong>Mandatory algorithmic impact assessments (AIAs)</strong> with ongoing monitoring and legal enforcement.</p></li><li><p><strong>Independent oversight to replace industry self-regulation,</strong> with international coordination to address labour and environmental harms across borders.</p></li></ol><h5><strong>&#128204; Our Position</strong></h5><p>The evidence is clear: self-regulation has failed to prevent AI harms. Companies continue to deploy systems that violate privacy, perpetuate bias, and exploit workers, while governments fund these practices through procurement contracts. Canada&#8217;s partnership with Cohere and <a href="https://thelogic.co/news/palantir-department-of-national-defence-contract-canada/">various government contracts with Palantir</a>, despite its documented role in surveillance and immigration enforcement, demonstrate this failure.</p><p>The problem extends beyond individual partnerships. Governments are building critical infrastructure dependencies on companies with poor human rights records. This is not just a Canadian problem. In the UK, healthcare workers and civil society organizations are <a href="https://www.bmj.com/content/386/bmj.q1712">demanding cancellation of the NHS contract with Palantir.</a> The pattern repeats globally: governments rush to adopt AI systems without adequate safeguards, accountability mechanisms, or consideration of who profits from these arrangements.</p><p>Brandusescu&#8217;s recommendations offer a starting point, but they require political will to implement. Investment criteria and company registries only work if governments enforce them. Algorithmic impact assessments only matter if they have teeth, with real consequences for noncompliance. 
Independent oversight only functions if it has authority and resources.</p><p>The fundamental question is whether governments will assert democratic control over AI deployment in public services, or whether they will continue ceding authority to private companies whose interests do not align with public welfare. Current trends suggest the latter, making the policy interventions Brandusescu proposes increasingly urgent.</p><div><hr></div><p><strong>Did we miss anything? Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-174-the-unfinished/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-174-the-unfinished/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: AI and Security in Africa: Assessing the African Union&#8217;s Continental AI Strategy</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines the African Union&#8217;s Continental AI Strategy, a unified framework designed to advance AI development across member states while addressing the continent&#8217;s unique security vulnerabilities. </p><p>The strategy outlines five key focus areas: harnessing AI for socioeconomic development, minimizing ethical and security risks, building technical capacity, fostering international cooperation, and stimulating investment, with particular emphasis on security threats including AI-enabled disinformation, extremist propaganda, and cyberattacks by non-state actors exploiting low-barrier AI tools. 
</p><p>The analysis reveals a critical policy tension: while the strategy strongly endorses transparency principles and AI safety assessments for emerging technologies like generative AI, it operates as a non-binding voluntary framework that lacks detailed counterterrorism measures and practical defensive guidelines for integrating AI into national security operations. This gap is particularly concerning given that African states face limited technical expertise, inadequate funding, and accelerating AI adoption by violent extremist groups. These challenges risk widening the implementation gap between the strategy&#8217;s ambitious vision and member states&#8217; varying political will and capacity to execute AI-ready security infrastructures.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-ai-and-security-in-africa-assessing-the-african-unions-continental-ai-strategy/">read the full article here</a></strong>.</em></p></blockquote><h4><strong>ALL IN Conference 2025: Four Key Takeaways from Montreal</strong></h4><p>More than 6,500 leaders and innovators from over 40 countries gathered in Montreal for the <a href="https://montrealethics.ai/all-in-conference-2025-four-key-takeaways-from-montreal/">ALL IN</a> conference on September 24&#8211;25, 2025, where MAIEI&#8217;s <strong>Kei Baritugo</strong> identified four major themes signaling a transition from AI hype to pragmatic implementation. Canada announced an AI Strategy Task Force operating on a 30-day timeline to deliver recommendations by November 2025, including modernizing 25-year-old privacy laws. Professor Yoshua Bengio emphasized that users need confidence that AI will behave well and not do &#8220;something weird and dangerous,&#8221; framing this as both a safety and capability issue. 
AI sovereignty moved from concept to reality through TELUS&#8217;s first fully sovereign AI factory, ensuring Canadian data remains within national borders. Finally, while mature agentic AI deployment remains years away due to non-deterministic outputs and organizational barriers, the shift from &#8220;What can AI do?&#8221; to &#8220;How do we ensure AI serves societal interests?&#8221; reflects a maturing ecosystem where early governance investment becomes a competitive advantage rather than mere compliance.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/all-in-conference-2025-four-key-takeaways-from-montreal/">read the full article here</a></strong>.</em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. 
</strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #173: Power, Policy, and Practice]]></title><description><![CDATA[Military partnerships, psychological dependencies, and legislative responses to AI harms.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-173-power-policy</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-173-power-policy</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 16 Sep 2025 14:02:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Nv6z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><p>Writing from Oxford's Wadham College this week where we're exploring "Civilisation on the Edge," we're struck by how the challenges facing AI governance mirror broader questions about institutional adaptation in times of rapid change.</p><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p>We share our <strong>call for case studies and examples for the State of AI Ethics Report Volume 7</strong>, seeking real-world implementation stories and community-driven insights as we build a practitioner's guide for navigating AI challenges in 2025.</p></li><li><p>We examine how <strong>Silicon Valley is embedding itself within the military industrial complex</strong> through initiatives like Detachment 201, where tech executives from OpenAI, Meta, Palantir, and Thinking Machines Lab are commissioned as lieutenant colonels. 
Meanwhile, companies abandon previous policies against military involvement as artists boycott platforms with defense investments.</p></li><li><p>Our <em>AI Policy Corner</em> with GRAIL at Purdue University explores <strong>contrasting state approaches to AI mental health legislation,</strong> comparing Illinois's restrictive model requiring professional oversight with New York's transparency-focused framework, as lawmakers respond to AI-related teen suicides with divergent regulatory strategies.</p></li><li><p>We investigate the <strong>psychological risks of AI companionship beyond dependency</strong>, revealing how social comparison with perfect AI companions can devalue human relationships, creating a "Companionship-Alienation Irony" where tools designed to reduce loneliness may increase isolation.</p></li><li><p>Our <em>Recess</em> series with Encode Canada examines <strong>Canada's legislative gaps around non-consensual deepfakes</strong>, analyzing how current laws may not cover synthetic intimate images and comparing policy solutions from British Columbia and the United States.</p></li></ul><p><strong>What connects these stories:</strong> The persistent tension between technological capability and institutional readiness. Whether examining military AI integration, mental health legislation, psychological manipulation, or legal frameworks for synthetic media, each story reveals how communities and institutions are scrambling to govern technologies that outpace traditional regulatory mechanisms. 
These cases illuminate the urgent need for governance approaches that center human agency, democratic accountability, and community-driven solutions rather than accepting technological determinism as inevitable.</p><div><hr></div><h3><strong>&#128270; One Question We&#8217;re Pondering:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h3><strong>What Happens When AI Moves Faster Than Our Institutions?</strong></h3><div class="captioned-image-container"><figure><a class="image-link 
image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nv6z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nv6z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Nv6z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Nv6z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Nv6z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nv6z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg" width="1456" height="820" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:820,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1686202,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/173487894?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nv6z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Nv6z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Nv6z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Nv6z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe085b6d0-cd70-4693-8660-0c8df4c997fa_2940x1656.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Wadham College, University of Oxford (Photo by <a href="https://www.linkedin.com/in/renjieb/">Renjie Butalid</a>)</figcaption></figure></div><p>This week, our team at the Montreal AI Ethics Institute is taking part in <em><a href="https://we.wadham.ox.ac.uk/">The Wadham Experience</a></em>, a week-long leadership program hosted at Oxford's Wadham College. 
The program, <em><strong>Thinking Critically: Civilisation on the Edge</strong></em>, invites participants to reflect on the systems, stories, and power structures that have shaped societies and how they must evolve to meet this moment of profound change.</p><p>As we sit in these historic rooms discussing democracy and demagoguery, myth and modernity, we're also shaping the next phase of our work: <a href="https://brief.montrealethics.ai/i/156386126/what-does-it-actually-take-to-move-responsible-ai-from-theory-to-practice-and-who-is-doing-that-work-when-no-one-is-watching">The State of AI Ethics Report, Volume 7 (</a><em><a href="https://brief.montrealethics.ai/i/156386126/what-does-it-actually-take-to-move-responsible-ai-from-theory-to-practice-and-who-is-doing-that-work-when-no-one-is-watching">AI at the Crossroads: A Practitioner's Guide to Community-Centred Solutions</a></em><a href="https://brief.montrealethics.ai/i/156386126/what-does-it-actually-take-to-move-responsible-ai-from-theory-to-practice-and-who-is-doing-that-work-when-no-one-is-watching">)</a>, which we announced in <a href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state?r=8dhxl">Brief #172</a> and will release on November 4, 2025.</p><p>This year's report is different. We're building it not just as a landscape analysis, but as a practical guide for those working on AI challenges in communities, institutions, and movements. It is structured to offer case studies, toolkits, and implementation stories from around the world, grounded in real-world applications: what's working, what's not, and what's next.</p><h4><strong>Why This Matters Now</strong></h4><p>The questions we're grappling with at Oxford feel particularly urgent in 2025: How do we govern technologies that evolve faster than our institutions can adapt? 
What happens when communities need AI solutions but lack formal authority to regulate platforms or shape policy? How do we move beyond corporate principles and policy frameworks to actual implementation in messy, resource-constrained environments?</p><p>The conversations here at Wadham remind us that societies have faced technological disruption before. The printing press reshaped information flows. Industrialization transformed labour and social structures. But AI presents unique challenges: its speed of deployment, its capacity for autonomous decision-making, and its embedding into virtually every aspect of social life.</p><p>SAIER Volume 7 will cover five interconnected parts:</p><ol><li><p><strong>Foundations &amp; Governance</strong>: How governments, regions, and communities are shaping AI policy in 2025, from superpower competition to middle-power innovation and grassroots governance experiments.</p></li><li><p><strong>Social Justice &amp; Equity</strong>: Examining AI's impact on democratic participation, algorithmic justice, surveillance and privacy rights, and environmental justice, with particular attention to how communities are developing their own accountability mechanisms and responding to AI's growing energy and infrastructure costs.</p></li><li><p><strong>Sectoral Applications</strong>: AI ethics in healthcare, education, labour, the arts, and military contexts, focusing on what happens when AI systems meet real-world constraints and competing values.</p></li><li><p><strong>Emerging Tech</strong>: Governing agentic systems that act independently, community-controlled AI infrastructure, and Indigenous approaches to AI stewardship that center long-term thinking and data sovereignty.</p></li><li><p><strong>Collective Action</strong>: How communities are building AI literacy, organizing for worker rights, funding alternative models, and creating public sector leadership that serves democratic values.</p></li></ol><p><strong>Throughout the report, we 
are asking grounded questions:</strong> </p><ul><li><p>How are small governments and nonprofits actually deploying responsible AI under tight resource constraints? </p></li><li><p>What did communities learn when their AI bias interventions didn't work? </p></li><li><p>What happened when workers tried to stop AI surveillance in the workplace, and what can others learn from those efforts? </p></li><li><p>Where are the creative models of AI that are truly community-controlled rather than corporate-managed? And more. </p></li></ul><h4><strong>The Stories We Need</strong></h4><p>While we&#8217;re curating authors for the chapters and sections of this report, we&#8217;re also inviting contributions from those working directly on the ground. We&#8217;re not looking for polished case studies or success stories that fit neatly into academic frameworks. We&#8217;re seeking the work that&#8217;s often overlooked: the experiments, lessons, and emerging blueprints shaped by lived experience.</p><p>Think of the nurse who figured out how to audit their hospital&#8217;s AI diagnostic tool. The city council that drafted AI procurement standards with limited resources. The artists&#8217; collective building alternative licensing models for training data. The grassroots organization that successfully challenged biased algorithmic hiring in their community.</p><p>These are the stories that reveal what it actually takes to do this work: the political navigation, resource constraints, technical hurdles, and human relationships that determine whether ethical AI remains an aspiration or becomes a lived reality.</p><p>Our goal goes beyond documentation. We want this report to connect people doing similar work in different contexts, to surface patterns across sectors, and to offer practical grounding at a moment when the search for direction, purpose, and solidarity feels especially urgent.</p><p>When you share your story, you&#8217;re not just contributing to a report. 
You&#8217;re helping others find collaborators, ideas, and renewed momentum for their own work.</p><p>If you're part of a project, policy, or initiative that reflects these values, whether it succeeded or failed, we'd love to include your insight in this edition.</p><blockquote><p><strong><a href="https://forms.gle/MBTk9DjUTUEW7NC49">Submit your story or example using this form</a></strong></p></blockquote><p>We're especially seeking:</p><ul><li><p><strong>Implementation stories</strong> that moved beyond paper to practice</p></li><li><p><strong>Community-led initiatives</strong> that addressed AI harms without formal authority</p></li><li><p><strong>Institutional experiments</strong> that navigated AI adoption under constraints</p></li><li><p><strong>Quiet failures</strong> and what they revealed about systemic barriers</p></li><li><p><strong>Cross-sector collaborations</strong> that found unexpected solutions</p></li><li><p><strong>Community organizing strategies </strong>that built power around AI issues</p></li></ul><p>As we continue shaping SAIER Volume 7, your stories can help build a resource that is grounded, practical, and genuinely useful for those navigating AI implementation in 2025. 
Together, we can document what's working, what barriers still need addressing, and how we might move forward collectively, deliberately, and with care.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-173-power-policy/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-173-power-policy/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128680; Here&#8217;s Our Take on What Happened Recently</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Detachment 201 and Spotify: Tech Industry Militarization</strong></h4><p>This summer, the CEO of Spotify, Daniel Ek, faced significant backlash after investing <a href="https://www.latimes.com/entertainment-arts/music/story/2025-07-31/spotifys-ceo-owns-an-ai-weapons-company-some-musicians-say-its-time-to-leave">$700 million</a> into <a href="https://helsing.ai/">Helsing</a> through his investment firm, Prima Materia. Helsing is a Munich-based AI defense company founded in 2021 that sells autonomous weapons to <a href="https://research.contrary.com/company/helsing">democratic nations</a>. Meanwhile, the US Army inaugurated a new unit, <a href="https://www.army.mil/article/286317/army_launches_detachment_201_executive_innovation_corps_to_drive_tech_transformation">&#8220;Detachment 201: The Army&#8217;s Executive Innovation Corps,&#8221;</a> to advance military innovation through emerging AI technologies. Detachment 201 swore in four tech leaders from Palantir, OpenAI, Meta, and Thinking Machines Lab as <a href="https://www.nytimes.com/2025/08/04/technology/google-meta-openai-military-war.html">lieutenant colonels</a>.</p><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>The entanglement of tech companies and the U.S. military represents a stark shift for Silicon Valley. 
Companies like Google and Meta, which <a href="https://www.nytimes.com/2025/08/04/technology/google-meta-openai-military-war.html">formerly pledged no militaristic involvement</a>, backed by corresponding corporate policies, are now <a href="https://epic.org/google-rolls-back-responsible-ai-principles-breaking-promise-to-limit-military-use-of-its-products/">abandoning those policies</a> and developing <a href="https://www.anduril.com/article/anduril-and-meta-team-up-to-transform-xr-for-the-american-military/">tools, such as virtual reality, to train soldiers</a>.</p><p>These reversals reflect a broader retreat from earlier commitments: OpenAI <a href="https://techcrunch.com/2024/01/12/openai-changes-policy-to-allow-military-applications/">quietly removed language from its usage policies</a> in January 2024 that prohibited military use of its technology, while Meta has simultaneously <a href="https://www.bbc.co.uk/news/articles/cly74mpy8klo">ended its fact-checking program</a> and made other content moderation changes with geopolitical implications.</p><p>The militarization trend includes both defense contracts and direct integration. Google's <a href="https://www.wired.com/story/amazon-google-project-nimbus-israel-idf/">$1.2 billion Project Nimbus</a> cloud computing contract with the Israeli military, run jointly with Amazon, has faced ongoing employee protests, while companies like Scale AI have emerged as major players in <a href="https://www.cnbc.com/2025/03/05/scale-ai-announces-multimillion-dollar-defense-military-deal.html">military AI contracts</a> alongside established <a href="https://www.cnbc.com/2025/03/07/palantir-delivers-first-two-ai-enabled-systems-to-us-army.html">defense tech firms like Palantir</a>. 
Meanwhile, Detachment 201's commissioning of tech executives as lieutenant colonels embeds Silicon Valley leadership directly within military command structures.</p><p>As Professor of International Relations <a href="https://www.setav.org/en/silicon-valleys-silent-patriotism-detachment-201-and-the-ai-arms-race">Erman Akilli</a> noted:</p><blockquote><p>&#8220;The commissioning of these tech executives... is unprecedented. Rather than serving as outside consultants, they will be insiders in Army ranks, each committing a portion of their time to tackle real defense projects from within. This model effectively brings Silicon Valley into the chain of command.&#8221;</p></blockquote><p>This raises significant concerns about the increasing profitability of war for major corporations, as well as the proliferation of killer robots.</p><p>Following Ek&#8217;s investment in Helsing, major artists <a href="https://www.rollingstone.com/music/music-news/deerhoof-quitting-spotify-ceo-ai-battle-tech-investment-1235375712/">protested the platform&#8217;s financial connection to AI military technology</a> by pulling their music from the app. Key examples include <a href="https://www.rollingstone.com/music/music-features/artists-left-spotify-ceo-daniel-ek-military-tech-1235425098/">Deerhoof, King Gizzard and the Lizard Wizard, and Xiu Xiu</a>. 
Deerhoof highlighted a central ethical hazard of AI warfare in the <a href="https://www.instagram.com/p/DLhpeEzMMkn/?img_index=3&amp;igsh=MWJma3JvcGNkZHZreQ==">Instagram post</a> announcing their split with Spotify: </p><blockquote><p>Computerized targeting, computerized extermination, computerized destabilization for profit, successfully tested on the people of Gaza since last year, also finally solves the perennial inconvenience to war-makers &#8212; it takes human compassion and morality out of the equation.</p></blockquote><p>Artist backlash has not yet altered Ek&#8217;s investments; it has, however, demonstrated wide opposition to militaristic AI technology and raised awareness of the company&#8217;s ties to it among broader audiences. Such education is crucial when civilian AI developers and the broader public remain largely unaware of AI&#8217;s military risks.</p><p><a href="https://spectrum.ieee.org/ai-missteps-unravel-global-security">A piece from 2024</a> co-authored by the late MAIEI founder, Abhishek Gupta, argues that to keep AI development from undermining global peace, we should invest in interdisciplinary AI education that includes responsible AI principles and perspectives from the humanities and social sciences. As Silicon Valley works to entrench the military-industrial complex, we must not forget the disruptive force of collective knowledge.</p><div><hr></div><p><strong>Did we miss anything? 
Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-173-power-policy/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-173-power-policy/comments"><span>Leave a comment</span></a></p><div><hr></div><h3><strong>&#128173; Insights &amp; Perspectives:</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines how recent AI-related teen suicides are catalyzing a new wave of state legislation, with Illinois and New York pioneering contrasting frameworks that may shape national approaches to AI mental health governance. The analysis contrasts Illinois's restrictive approach requiring licensed professional oversight for all AI mental health interactions with New York's regulatory framework that mandates transparency disclosures and crisis intervention safeguards for AI companions. 
The piece reveals a key policy tension: Illinois keeps AI out of clinical settings but leaves broader consumer use unaddressed, while New York addresses parasocial AI relationships but lacks clinical protections.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-restriction-vs-regulation-comparing-state-approaches-to-ai-mental-health-legislation/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-restriction-vs-regulation-comparing-state-approaches-to-ai-mental-health-legislation/">.</a></em></p></blockquote><h4><strong>Beyond Dependency: The Hidden Risk of Social Comparison in Chatbot Companionship</strong></h4><p>Hamed Maleki explores a lesser-discussed psychological risk of AI companionship: <strong>social comparison.</strong> Through interviews with Gen-Z users of platforms like Character.AI, his research reveals how users compare their perfect, always-available AI companions to flawed human relationships, leading to devaluation of real-world connections. Users progress through three stages&#8212;interaction, emotional engagement, and emotional idealization and comparison&#8212;where AI companions feel more dependable and emotionally safe than people, prompting withdrawal from demanding human relationships. This creates the <em><strong>"Companionship&#8211;Alienation Irony"</strong></em><strong>:</strong> tools designed to alleviate loneliness may actually increase it by reshaping expectations for intimacy. 
As AI companions integrate memory, emotional language, and personalization, understanding these psychological effects is essential for designing safeguards, especially for younger users seeking comfort and connection.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/beyond-dependency-the-hidden-risk-of-social-comparison-in-chatbot-companionship/">read the full article here</a></strong><a href="https://montrealethics.ai/beyond-dependency-the-hidden-risk-of-social-comparison-in-chatbot-companionship/">.</a></em></p></blockquote><h4><strong>Bridging the Gap: Addressing the Legislative Gap Surrounding Non-Consensual Deepfakes</strong></h4><p>As part of our <a href="https://encodecanada.ca/">Encode Canada</a> Policy Fellowship <a href="https://montrealethics.ai/category/columns/recess/">Recess</a> series, this analysis examines Canada&#8217;s legislative gaps in addressing non-consensual pornographic deepfakes, which make up 96% of all deepfake content and target women 99% of the time. Canada&#8217;s Criminal Code Section 162.1, which requires &#8220;recordings of a person,&#8221; may exclude synthetic images, leaving victims without clear legal protection. The piece compares policy solutions from <strong>British Columbia's Intimate Images Protection Act,</strong> which explicitly includes altered images and provides expedited removal processes, with the <strong>U.S. 
TAKE IT DOWN Act,</strong> which criminalizes AI-generated intimate content but raises concerns about false reporting abuse.</p><p>A multi-pronged policy approach is recommended:</p><ol><li><p>Criminal law amendments to explicitly include synthetic media</p></li><li><p>Enhanced civil remedies with streamlined removal processes</p></li><li><p>Platform accountability measures with robust investigation requirements</p></li><li><p>A self-regulatory organization to prevent malicious exploitation while protecting victims' dignity and rights.</p></li></ol><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/bridging-the-gap-addressing-the-legislative-gap-surrounding-non-consensual-deepfakes/">read the full article here</a></strong><a href="https://montrealethics.ai/bridging-the-gap-addressing-the-legislative-gap-surrounding-non-consensual-deepfakes/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #172: The State of AI Ethics in 2025: What's Actually Working]]></title><description><![CDATA[SAIER Volume 7 Returns in November 2025 with Practical Solutions from Communities Leading the Way]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 02 Sep 2025 14:02:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!r4uJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. 
We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>One Question We're Pondering:</strong> What does it actually take to move responsible AI from theory to practice, and who is doing that work when no one is watching? We explore the quiet, persistent efforts happening in classrooms, healthcare systems, cities, courtrooms, union halls, and communities that rarely make headlines but are shaping AI's real-world impact.</p></li><li><p><strong>SAIER Volume 7 Returns:</strong> We officially announce the State of AI Ethics Report (SAIER) Volume 7: "AI at the Crossroads: A Practitioner's Guide to Community-Centred Solutions," scheduled for November 4, 2025. 
After a pause since February 2022, this report focuses on practical, replicable examples of responsible AI implementation grounded in real-world experience rather than aspirational principles.</p></li><li><p><strong>The Alan Turing Institute Crisis:</strong> We examine the UK's premier AI research institute facing potential funding withdrawal unless it pivots to a national security focus, representing a significant loss for independent non-governmental AI research and raising questions about accountability in publicly funded research institutions.</p></li><li><p><strong>U.S. AI Education Push:</strong> Our <em>AI Policy Corner</em> with GRAIL at Purdue University analyzes the April 23rd Executive Order on Advancing AI Education for American Youth, which emphasizes adoption and implementation over risk mitigation, sitting within the broader Trump administration's AI policy framework.</p></li><li><p><strong>Canadian AI Governance Insights:</strong> From the Victoria Forum 2025 and new public opinion data showing Canadians deeply divided on AI (34% beneficial vs. 36% harmful), we explore Canada's unique position in developing democratic AI governance that moves beyond consultation toward co-creation. 
</p></li></ul><p><strong>What connects these stories:</strong> The recognition that responsible AI implementation and AI ethics occur not in boardrooms or policy papers, but through the daily work of people building civic competence and practical solutions at the intersection of technology and community needs.</p><div><hr></div><h3>&#128270; One Question We&#8217;re Pondering:</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>What does it actually take to move responsible AI from theory to practice, and who is doing that work when no one is 
watching?</strong></h4><p>As conversations around AI governance grow louder (see <em><a href="https://brief.montrealethics.ai/i/169449213/when-the-worlds-two-ai-superpowers-unveil-competing-visions-within-days-of-each-other-are-we-witnessing-strategic-coordination-or-dangerous-fragmentation-in-global-ai-governance">Brief #170: How the US and China Are Reshaping AI Geopolitics</a>)</em>, what we're hearing behind the scenes is quieter, more persistent, and perhaps more urgent. Colleagues across sectors are asking the same thing in different languages: <strong>What's working, what isn&#8217;t, and what can we learn from both?</strong></p><p>Not in theory or in press releases, but in classrooms trying to preserve academic integrity, in healthcare systems navigating algorithmic risk, in cities designing procurement standards, in community-led efforts resisting surveillance they never consented to, and more.</p><p>Over the past year, this question has resurfaced repeatedly for us at MAIEI: at Zurich&#8217;s <a href="https://brief.montrealethics.ai/i/161541651/can-ethical-ai-deliver-both-innovation-and-oversight-or-are-we-regulating-ourselves-out-of-competitiveness">Point Zero Forum,</a> the recently held <a href="https://montrealethics.ai/beyond-consultation-building-inclusive-ai-governance-for-canadas-democratic-future/">Victoria Forum</a> in British Columbia (more on this below in <em>Insights &amp; Perspectives</em>), in guest lectures at universities, and through hundreds of emails and conversations. It has also been nearly a year since the passing of our dear friend and collaborator, <a href="https://montrealethics.ai/in-memoriam-abhishek-gupta-dec-20-1992-sep-30-2024/">Abhishek Gupta,</a> founder and principal researcher at MAIEI. 
In that time, this question has become increasingly persistent, and his reminder to keep moving <em>"onwards and upwards"</em> has guided our search for answers as we rebuild and reimagine MAIEI's role in the community.</p><p>And yet, the answers are rarely loud. They appear in quiet experiments, shared reflections, and the daily work of people operating at the edges of institutions and at the centre of communities. This persistent need for connection and practical guidance is why we're bringing the <a href="https://montrealethics.ai/state/">State of AI Ethics Report (SAIER)</a> back, and are committed to doing so on an annual basis. Returning to our roots of building civic competence and shaping public understanding on the societal impacts of AI, the SAIER represents both a tribute to Abhishek's legacy and a cornerstone of MAIEI's path forward.</p><h4>The State of AI Ethics Report Returns</h4><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!r4uJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!r4uJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 424w, https://substackcdn.com/image/fetch/$s_!r4uJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 848w, 
https://substackcdn.com/image/fetch/$s_!r4uJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 1272w, https://substackcdn.com/image/fetch/$s_!r4uJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!r4uJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png" width="1456" height="972" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:972,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2127605,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!r4uJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 424w, 
https://substackcdn.com/image/fetch/$s_!r4uJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 848w, https://substackcdn.com/image/fetch/$s_!r4uJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 1272w, https://substackcdn.com/image/fetch/$s_!r4uJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa10ac77a-6e9b-4f43-bca2-6d78417d0326_2000x1335.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">(<strong>Note:</strong> The cover design for SAIER Volume 7 is not final. It is shown here for illustrative purposes only.)</figcaption></figure></div><p>After a pause since <a href="https://montrealethics.ai/volume6/">February 2022,</a> we&#8217;re officially announcing <strong>SAIER Volume 7: AI at the Crossroads: A Practitioner's Guide to Community-Centred Solutions, </strong>scheduled for release on November 4, 2025.</p><p>Following hundreds of conversations and a close review of over 800 pieces published on the <a href="https://montrealethics.ai/">MAIEI website</a> since 2018, one insight stood out: the field needs connection and interpretation. There's a growing recognition that isolated efforts across sectors contain valuable knowledge that rarely gets shared or built upon, including lessons from quiet failures that never made headlines.</p><p>The world also looks fundamentally different than when <a href="https://montrealethics.ai/volume6/">Volume 6</a> was published in February 2022. The ChatGPT paradigm now dominates (see <em><a href="https://brief.montrealethics.ai/i/170160724/are-we-witnessing-the-beginning-of-the-end-for-large-language-models-as-we-know-them">Brief #171:</a></em><a href="https://brief.montrealethics.ai/i/170160724/are-we-witnessing-the-beginning-of-the-end-for-large-language-models-as-we-know-them"> </a><em><a href="https://brief.montrealethics.ai/i/170160724/are-we-witnessing-the-beginning-of-the-end-for-large-language-models-as-we-know-them">The Contradictions Defining AI's Future</a> </em>for our commentary on GPT-5 and GPT-OSS<em>)</em>, reshaping everything from student homework to healthcare diagnostics, from corporate decision-making to creative industries. 
In an era where foundation models are deployed before safety frameworks are in place, where open-source agents outperform flagship releases, and where community groups write their own rules amid policy gaps, the demand for practical, replicable examples for communities to adapt and adopt has become urgent. </p><p>Volume 7 is built on a simple premise: responsible AI has always been as much about capacity as it is about commitment. The gap between theoretical principles and practical implementation rarely reflects a lack of intent; more often, it reflects missing infrastructure, institutional inertia, unclear mandates, and poorly designed incentives. The hard work often falls to those without formal authority, including local organizers, frontline workers, junior engineers, and researchers who work across silos.</p><p><strong>We're asking:</strong> What does responsible AI implementation look like when it's grounded rather than aspirational? What happens when AI ethics is shaped in classrooms, courtrooms, hospitals, union halls, and local governments? Who is doing the work of making responsible AI stick through innovation, repair, adaptation, and institutional resilience?</p><p><strong>Most importantly:</strong> What are we willing to let go of to make room for what actually works?</p><h4>Building Civic Competence Together</h4><p>Volume 7 represents the MAIEI global community coming together to build civic competence by showcasing practical solutions. We're deeply grateful to all of you, our <strong>17,500+ AI Ethics Brief subscribers,</strong> who have made this community possible. 
Your engagement, questions, and shared insights continue to shape how we approach these critical conversations about AI's role in society.</p><p>We hope this report will serve as both a practitioner's guide for policymakers, educators, community organizers, and researchers, and an entry point for anyone seeking to understand the broader landscape of AI ethics and responsible AI implementation in 2025. It's designed to help readers see the forest for the trees, offering both tactical guidance and strategic perspectives on where responsible AI stands today, while serving as a historical artifact for future generations to understand this pivotal moment.</p><p>As MAIEI transitions to becoming a financially sustainable organization (see <em><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Open Letter: Moving Forward Together &#8211; MAIEI&#8217;s Next Chapter, December 2024</a></em>), we're expanding our impact while keeping our work open access, because building public understanding of AI's societal impacts shouldn't be behind paywalls.</p><h4>Consider joining the SAIER Champion&#8217;s Circle: </h4><p>Paid subscribers to <em><strong>The AI Ethics Brief</strong></em> will be highlighted on the <strong>Acknowledgments page of SAIER Volume 7, </strong>unless you indicate otherwise<strong>.</strong> If you're already a subscriber and enjoy reading this newsletter, consider upgrading to directly support this work, be recognized, and help us build the civic infrastructure for long-term impact.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><h4>Corporate Partnership Opportunities: </h4><p>For organizations committed to advancing responsible AI 
implementation, we're exploring strategic partnerships for SAIER Volume 7. These collaborations allow companies and philanthropic foundations to support independent, community-centred knowledge sharing, while demonstrating a genuine commitment to AI ethics beyond corporate statements. Partnership opportunities include report sponsorship, case study collaboration, and community engagement initiatives. If your organization is interested in supporting this work, please reach out at <a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></p><h4>How You Can Contribute to SAIER Volume 7</h4><p>If you have case studies, policy examples, or practical insights that have been successful (or unsuccessful) in real-world applications, <strong>please reach out </strong>by responding directly to this newsletter or emailing us at <a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a>. </p><p>We&#8217;re particularly interested in: </p><ul><li><p><strong>Implementation stories</strong> that moved beyond paper to practice</p></li><li><p><strong>Community-led initiatives</strong> that addressed AI challenges without formal authority</p></li><li><p><strong>Institutional experiments</strong> that navigated AI adoption under constraints</p></li><li><p><strong>Quiet failures</strong> and the lessons learned from them</p></li></ul><p>We want honest accounts of what it takes to do this work when no one is watching: the blueprints being built quietly, rigorously, and in lockstep with and for the community. We recognize that no single report can fully capture the scope of this field. 
That&#8217;s why we&#8217;re actively seeking diverse perspectives for Volume 7: to document what&#8217;s working, what isn&#8217;t, what often goes unseen, and where the state of AI ethics stands in 2025.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state/comments"><span>Leave a comment</span></a></p><div><hr></div><h3>&#128680; Here&#8217;s Our Take on What Happened Recently</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>The Alan Turing Institute: Change Needed for Survival </strong></h4><p>The <a href="https://www.turing.ac.uk/">Alan Turing Institute</a> is facing significant pressure from the UK Government to pivot its focus or risk losing funding. At the end of 2024, <a href="https://www.bbc.com/news/articles/c24zz2vdv51o">93 workers signed a letter</a> expressing a lack of confidence in the leadership team. In April 2025, the charity announced "Turing 2.0," a pivot focusing on environmental sustainability, health and national security, which would involve cutting up to a <a href="https://www.researchprofessionalnews.com/rr-news-uk-research-councils-2025-3-alan-turing-institute-axes-around-a-quarter-of-research-projects/">quarter of current research projects.</a></p><p>Following the <a href="https://www.politico.eu/article/uk-government-ai-institute-prioritize-security-defense/">Strategic Defense Review</a> in June 2025, UK Secretary of State for Science and Technology Peter Kyle <a href="https://www.bbc.com/news/articles/cy7nppe5gkgo">sent a letter to the institute</a> in July stating that it must focus on national security or face funding withdrawal. 
This month, workers launched a whistleblowing complaint accusing leadership of &#8220;<a href="https://www.bbc.co.uk/news/articles/c24zz2vdv51o">misusing public funds, overseeing a "toxic internal culture", and failing to deliver on the charity's mission</a>.&#8221; The institute has also seen high-profile departures, including former Chief Technology Officer Jonathan Starck in May 2025, amid reports that recommendations for modernization from current <a href="https://www.bbc.com/news/articles/cy7nppe5gkgo">Chief Executive Jean Innes</a> have not been implemented.</p><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>The situation at the Alan Turing Institute represents a missed opportunity. While the institute has produced <a href="https://www.turing.ac.uk/research/research-programmes/defence-security/publications">valuable research on important topics,</a> including <a href="https://www.turing.ac.uk/research/research-projects/children-and-ai">children and AI,</a> its current predicament raises serious questions about accountability in publicly funded research institutions.</p><p>The broader issue concerns how non-governmental citizen representation in AI research can be better protected to avoid a similar situation. From our perspective, and reflected in <a href="https://www.chalmermagne.com/p/how-not-to-build-an-ai-institute?hide_intro_popup=true">this analysis</a> of the institute, accountability is key. The institute&#8217;s governance structure across <a href="https://www.turing.ac.uk/about-us/governance">multiple founding universities</a> created challenges in establishing a unified research agenda and central operational responsibility. What has transpired transforms the institute from an independent third-party charity into, in effect, an arm of the UK government, representing a significant loss for non-governmental AI research and independent oversight in the field.</p><div><hr></div><p><strong>Did we miss anything? 
Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-172-the-state/comments"><span>Leave a comment</span></a></p><div><hr></div><h3>&#128173; Insights &amp; Perspectives:</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: U.S. Executive Order on Advancing AI Education for American Youth</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines the April 23rd Executive Order on <strong>Advancing Artificial Intelligence Education for American Youth,</strong> which focuses on integrating AI education into K-12 learning environments. The Executive Order establishes a framework that promotes the benefits of long-term AI usage in education through five key strategies: creating an AI Education Task Force, launching the Presidential Artificial Intelligence Challenge, fostering public-private partnerships to improve education, training educators on AI applications, and expanding registered apprenticeships in AI-related fields. </p><p>While the Order emphasizes workforce development and preparing students for an AI-driven economy, it takes a notably different approach from <a href="https://trumpwhitehouse.archives.gov/ai/">previous federal AI education initiatives</a> by focusing primarily on adoption and implementation rather than addressing potential risks or safeguards. 
This education-focused directive sits within the broader context of the Trump administration's AI policy framework, as outlined in the <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">July 2025 AI Action Plan</a> (covered in <em><a href="https://brief.montrealethics.ai/i/169449213/when-the-worlds-two-ai-superpowers-unveil-competing-visions-within-days-of-each-other-are-we-witnessing-strategic-coordination-or-dangerous-fragmentation-in-global-ai-governance">Brief #170</a></em>), though it predates that comprehensive strategy by several months. </p><p>As schools begin implementing these directives, questions remain about how this approach will address concerns around equitable access, student privacy, and the potential for AI systems to perpetuate educational inequities. These issues become more pressing as AI tools grow ever more embedded in learning environments.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-u-s-executive-order-on-advancing-ai-education-for-american-youth/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-u-s-executive-order-on-advancing-ai-education-for-american-youth/">.</a></em></p></blockquote><h4><strong>Victoria Forum 2025 - Beyond Consultation: Building Inclusive AI Governance for Canada&#8217;s Democratic Future</strong></h4><p>At the <a href="https://montrealethics.ai/beyond-consultation-building-inclusive-ai-governance-for-canadas-democratic-future/">Victoria Forum 2025</a>, co-hosted by the University of Victoria and the Senate of Canada from August 24-26 in Victoria, BC, MAIEI joined lawmakers, scholars, and civic leaders to examine how Canada can shape AI governance rooted in both global competitiveness and democratic values. 
On a panel moderated by <strong>Senator Rosemary Moodie,</strong> MAIEI emphasized the need to move beyond consultation toward co-creation, embedding diverse public perspectives into every stage of AI system design. Drawing from MAIEI&#8217;s work on building civic competence and shaping public understanding, we framed AI as a socio-technical system, where governance must address both technical and societal impacts of AI. Key insights included Canada&#8217;s unique position between global models, the importance of inclusive policymaking that reflects lived experience, and the risks of relying on voluntary standards. The conversation highlighted that truly democratic AI governance demands more than technical fixes. It requires public participation, meaningful inclusion, and policy frameworks that reflect Canada&#8217;s social complexity.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/beyond-consultation-building-inclusive-ai-governance-for-canadas-democratic-future/">read the full article here</a></strong><a href="https://montrealethics.ai/beyond-consultation-building-inclusive-ai-governance-for-canadas-democratic-future/">.</a></em></p></blockquote><h4><strong>Canadian Public Opinion: Divided Views on AI's Societal Impact</strong></h4><p>A <a href="https://leger360.com/views-on-artificial-intelligence/">comprehensive survey by Leger</a>, reported by Hessie Jones for <a href="https://www.forbes.com/sites/hessiejones/2025/08/28/two-thirds-of-canadians-are-divided-on-whether-ai-is-good-for-society/">Forbes,</a> reveals that Canadians remain deeply divided on artificial intelligence, with 34% viewing AI as beneficial for society while 36% consider it harmful. 
The study, which tracked AI adoption from February 2023 to August 2025, shows usage has more than doubled from 25% to 57%, driven primarily by younger adults aged 18-34 (83% usage) compared to just 34% among those 55 and older. </p><p>While chatbots dominate usage at 73%, they also generate the highest concerns, with 73% of Canadians believing AI chatbots should be prohibited from children's games and websites. The survey highlights significant privacy concerns (83%) and worries about societal dependence (83%), with Canadians primarily holding AI companies responsible for potential harms (57%) rather than users (18%) or government (11%). Notably, 46% of users worry that frequent AI use might make them "intellectually lazy or lead to a decline in cognitive skills."</p><blockquote><p>Further, [Renjie] Butalid, of Montreal AI Ethics Institute, notes that the survey findings on privacy (83% concerned) and job displacement (78% see AI as a threat to human jobs) reveal where government leadership is most needed. &#8220;These aren&#8217;t just individual consumer choices, they&#8217;re systemic issues that require coordinated policy responses. When Canadians say they want companies to regulate AI systems more, they&#8217;re really asking government to set the rules of the game. 
Privacy protection and workforce transition support are exactly the kind of challenges where government tone-setting through clear standards, regulations, and investment priorities can make the difference between AI serving Canadian interests or leaving communities behind.&#8221;</p></blockquote><p>These insights highlight the pressing need for comprehensive governance frameworks that address both the technical and societal dimensions of AI deployment, particularly as Canada continues to develop its regulatory approach in this rapidly evolving landscape.</p><blockquote><p><em>To dive deeper, <strong><a href="https://www.forbes.com/sites/hessiejones/2025/08/28/two-thirds-of-canadians-are-divided-on-whether-ai-is-good-for-society/">read the full article here</a></strong><a href="https://www.forbes.com/sites/hessiejones/2025/08/28/two-thirds-of-canadians-are-divided-on-whether-ai-is-good-for-society/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #171: The Contradictions Defining AI's Future ]]></title><description><![CDATA[OpenAI's strategic reversals, why some philosophers say "ChatGPT is bullshit," and the battle between protection and privacy.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-171-the-contradictions</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-171-the-contradictions</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 19 Aug 2025 14:01:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ec806362-b5f7-4e9c-92e1-e56486c075fa_2560x3301.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p><strong>One Question We're Pondering:</strong> Are we witnessing the beginning of the end for large language models as we know them? 
We examine GPT-5's mixed reception, NVIDIA and Georgia Tech researchers arguing for smaller specialized models, and University of Glasgow philosophers' peer-reviewed argument that "ChatGPT is bullshit," revealing fundamental questions about AI's relationship with truth.</p></li></ul><ul><li><p><strong>OpenAI Returns to Open Source:</strong> We analyze OpenAI's first open-source models since GPT-2, examining the strategic contradiction of releasing capable models anyone can modify after years of justifying closed development for safety, with energy efficiency insights from Hugging Face researcher Sasha Luccioni.</p></li><li><p><strong>YouTube's AI Age Verification:</strong> We explore the platform's new machine learning technology that predicts user age to block inappropriate content, examining the tension between child protection and privacy rights.</p></li><li><p><strong>AI Copyright Battles Continue:</strong> Our <em>AI Policy Corner</em> with GRAIL at Purdue University examines the U.S. Copyright Office&#8217;s evolving guidance on works containing AI-generated material, including its January 2025 report and the more than 1,000 registered works that contain some level of AI-generated material. 
Building on our coverage in <a href="https://brief.montrealethics.ai/i/166771550/beyond-fair-use-how-ai-copyright-battles-are-fragmenting-culture">Brief #168</a> on court rulings on fair use and <a href="https://brief.montrealethics.ai/i/167704348/from-opt-out-to-compensation-the-next-phase-of-ai-web-governance">Brief #169</a> on Cloudflare&#8217;s data governance shift, this development highlights how <a href="https://chatgptiseatingtheworld.com/2025/08/13/updated-map-of-us-copyright-lawsuits-v-ai-companies-aug-12-2025/">copyright</a> has become the central arena where courts, infrastructure providers, and regulators are negotiating the future of cultural production in the age of generative AI.</p></li><li><p><strong>Can Chatbots Replace Human Mental Health Support?</strong> Sabia Irfan examines the growing use of AI chatbots for mental health support, highlighting both accessibility benefits and concerns about emotional over-reliance and clinical oversight.</p></li></ul><p><strong>What connects these stories:</strong> The contradictions at the heart of AI development&#8212;between closed and open systems, revolutionary promises and incremental progress, protecting users while respecting privacy&#8212;revealing an industry grappling with its own stated values.</p><p><strong>Brief #171 Banner Image Credit: </strong><em>Behaviour Power</em> by Bart Fish &amp; Power Tools of AI, featured in <a href="https://betterimagesofai.org/images?artist=BartFish&amp;title=BehaviourPower">Better Images of AI,</a> licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a>.</p><div><hr></div><h3>&#128270; One Question We&#8217;re Pondering:</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Are we witnessing the beginning of the end for large language models as we know them?</strong></h4><p>The release of <a 
href="https://techcrunch.com/2025/08/07/openais-gpt-5-is-here/">OpenAI's GPT-5</a> on August 7 earlier this month was supposed to mark another triumphant milestone in our era of perpetual AI improvement. CEO Sam Altman hyped it as offering <a href="https://openai.com/index/introducing-gpt-5/">"PhD-level intelligence"</a> and positioned it as a significant leap forward. Yet the reception has been notably mixed, with <a href="https://www.tomsguide.com/ai/chatgpt/chatgpt-5-users-are-not-impressed-heres-why-it-feels-like-a-downgrade">thousands of users on Reddit</a> calling it <a href="https://medium.com/data-science-in-your-pocket/gpt-5-openais-worst-release-yet-421558ad89f4">"horrible"</a> and <a href="https://garymarcus.substack.com/p/gpt-5-overdue-overhyped-and-underwhelming">"underwhelming."</a> </p><p>Critics have noted that it provides shorter, less nuanced responses while <a href="https://finance.yahoo.com/news/openai-gpt-5-met-mixed-212741623.html">limiting user access</a> to previously available models. Ironically, some of these complaints may stem from OpenAI's efforts to reduce what they call "sycophancy," the tendency for AI systems to be excessively agreeable, flattering, or validating rather than providing honest, balanced responses. While OpenAI claims this makes GPT-5 more truthful, many users are experiencing it as a loss of the warmth and personality they valued in earlier versions.</p><p>This lukewarm reception comes at a fascinating inflection point. While we chase ever-larger models, researchers at NVIDIA and Georgia Tech are making the case that we're heading in the wrong direction entirely. 
Their preprint paper <a href="https://arxiv.org/abs/2506.02153">"Small Language Models are the Future of Agentic AI"</a> argues that for most practical applications&#8212;especially the repetitive, specialized tasks that AI agents actually perform&#8212;smaller, more efficient models are not just adequate but superior.</p><p>Which brings us to a more fundamental question about the nature of these systems altogether. In their peer-reviewed paper <a href="https://doi.org/10.1007/s10676-024-09775-5">"ChatGPT is bullshit"</a> published in <em>Ethics and Information Technology</em>, philosophers Michael Townsen Hicks, James Humphries, and Joe Slater at the University of Glasgow build on Harry Frankfurt's influential work <em><a href="https://en.wikipedia.org/wiki/On_Bullshit">On Bullshit</a></em> to argue that large language models aren't trying to convey truth at all; they're designed to produce convincing-sounding text regardless of accuracy. </p><p>Unlike lying, which requires intent to deceive, or what the industry euphemistically calls <a href="https://techcrunch.com/2025/05/22/anthropic-ceo-claims-ai-models-hallucinate-less-than-humans/">"AI hallucinations,"</a> this represents something more concerning: systematic indifference to truth itself. As the authors put it, these systems "cannot themselves be concerned with truth" and are fundamentally "indifferent to the truth of their outputs." 
This matters because framing AI errors as mere "hallucinations" suggests the systems are trying but failing to perceive reality correctly, when in fact, they're not designed to care about accuracy at all.</p><p>Perhaps the mixed reviews of GPT-5 aren't a bug but a feature, a sign that we're finally recognizing these tools for what they actually are: sophisticated text generators optimized for plausibility rather than truth, whose utility may be better served by smaller, specialized models rather than ever-larger general-purpose ones.</p><div><hr></div><p>Please share your thoughts with the MAIEI community:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-171-the-contradictions/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-171-the-contradictions/comments"><span>Leave a comment</span></a></p><div><hr></div><h3>&#128680; Here&#8217;s Our Take on What Happened Recently</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>OpenAI Returns to Open Source with GPT-OSS Release</strong></h4><p>OpenAI released <a href="https://openai.com/index/introducing-gpt-oss/">GPT-OSS</a>, their first open-source models since GPT-2 in 2019. The release includes multiple model sizes designed to be <a href="https://huggingface.co/blog/sasha/gpt-oss-energy">energy-efficient</a> and capable of running on consumer hardware, <a href="https://huggingface.co/openai/gpt-oss-120b">available for download on Hugging Face</a>. This marks a significant shift for a company that has moved increasingly toward <a href="https://venturebeat.com/ai/openai-returns-to-open-source-roots-with-new-models-gpt-oss-120b-and-gpt-oss-20b/">closed, proprietary systems since GPT-3</a>. The announcement of GPT-OSS on August 5 came in the same week as GPT-5's launch on August 7.</p><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>This release represents a notable contradiction in OpenAI's strategy. 
For years, the company justified keeping powerful models closed by citing <a href="https://openai.com/safety/">safety concerns</a>, yet here they're releasing capable models that anyone can download and modify.</p><p>The timing alongside GPT-5's mixed reception suggests this reflects competitive pressures rather than strategic planning. As companies like <a href="https://ai.meta.com/llama/">Meta</a> and <a href="https://mistral.ai/news/mistral-small-3-1">Mistral</a> push open-source alternatives, OpenAI appears to be recognizing that maintaining relevance in an increasingly open ecosystem may matter more than controlling access to powerful models.</p><p>More significantly, this aligns with growing arguments that <a href="https://arxiv.org/abs/2506.02153">smaller, specialized models may be the future</a>. GPT-OSS represents OpenAI acknowledging that not every use case requires flagship models, potentially signalling a more nuanced approach where different model sizes serve different needs.</p><p>Beyond strategic considerations, the environmental implications are significant. As Hugging Face researcher Sasha Luccioni demonstrates in <a href="https://huggingface.co/blog/sasha/gpt-oss-energy">her analysis</a>, GPT-OSS models consume substantially less energy while maintaining competitive performance. The cumulative impact of millions of users running models locally rather than through energy-intensive cloud infrastructure could be considerable.</p><p>If OpenAI is comfortable releasing these models openly, it raises questions about whether their previous arguments for closed development were primarily about safety or about maintaining competitive advantage. 
As the industry faces mounting pressures around <a href="https://fortune.com/2025/08/14/data-centers-china-grid-us-infrastructure/">energy consumption</a>, <a href="https://www.lawfaremedia.org/article/liberal-democracies-are-retreating-from-ai-safety">shifting policy priorities, </a>and <a href="https://www.techpolicy.press/one-year-on-eu-ai-act-collides-with-new-political-reality/">evolving regulatory frameworks, </a>GPT-OSS suggests that openness may become less of an ideological choice and more of a business necessity.   </p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>YouTube&#8217;s AI-Powered Age Verification: An Invasion of Privacy or a Mechanism to Protect Children?</strong></h4><p>YouTube began testing <a href="https://apnews.com/article/youtube-video-age-verification-system-1ce99a7089b33e88dc76e49945ded186">machine learning age-verification technology</a> on a subset of American users this past week. The technology estimates a user&#8217;s age based on several factors, including <a href="https://blog.youtube/news-and-events/extending-our-built-in-protections-to-more-teens-on-youtube/">video preferences and the account creation date</a>. If a user is inferred to be under the age of 18, they will be blocked from accessing content deemed inappropriate, shown well-being reminders, and excluded from targeted advertising. If the system misjudges their age, users can prove they are adults by providing a credit card or a government ID. If the trial is successful, this technology, which has already been implemented in other markets, will expand throughout the United States.</p><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>YouTube&#8217;s age verification measures are only one example of a global trend. For instance, earlier this summer, the United States Supreme Court <a href="https://apnews.com/article/supreme-court-porn-age-verification-texas-12a73197796fe8c4bef0d888259543cf">ruled in favour</a> of a Texas law requiring identification to view sexually explicit material, with the aim of shielding children from such content.
Moreover, the European Commission is entering a pilot phase of testing <a href="https://digital-strategy.ec.europa.eu/en/policies/eu-age-verification">online age verification measures</a> that will be compatible with new <a href="https://digital-strategy.ec.europa.eu/en/policies/eudi-regulation">European Union Digital Identity Wallets</a>.</p><p>These measures are highly controversial. They raise digital privacy concerns in an era of increased government surveillance and potentially <a href="https://www.aclu.org/press-releases/fsc-paxton-age-verification">infringe on free speech online</a>. Data breaches are another serious concern: in the UK, many sites rely on third-party platforms to meet the age verification requirements of the <a href="https://www.eff.org/deeplinks/2025/08/no-uks-online-safety-act-doesnt-make-children-safer-online">Online Safety Act</a>. YouTube and many third-party verification platforms pledge that <a href="https://www.clrn.org/how-do-you-confirm-your-age-on-youtube/?utm_source=chatgpt.com">data is tightly secured</a> to protect against leaks. However, such safeguards often fail in practice. For instance, a breach exposed <a href="https://www.nytimes.com/2025/07/26/us/tea-safety-dating-app-hack.html">72,000 images on Tea</a> (a controversial women-only app through which users share safety information and perspectives on potential male partners), many of them verification images that were supposed to be deleted shortly after verification.</p><p>On the other hand, such regulations help protect the well-being of children, preventing them from viewing sexually explicit, <a href="https://services.google.com/fh/files/misc/vibe-expansion-gtm-casestudy.pdf">violent, and otherwise harmful content</a>, and guarding them against predatory advertising. Platforms such as YouTube have a history of hosting material aimed at exploiting minors.
A notorious example is the <a href="https://en.wikipedia.org/wiki/Elsagate">Elsagate scandal</a>, in which channels used &#8220;educational&#8221; themes and popular children&#8217;s characters from Disney and Nick Jr. to slip lewd and explicit content past the filtering mechanisms of YouTube Kids. For instance, a video with the <a href="https://www.nytimes.com/2017/11/04/business/media/youtube-kids-paw-patrol.html">stated purpose of facilitating the &#8220;learning and development of children!&#8221;</a> depicted Paw Patrol characters in a strip club.</p><p>Age verification is a difficult issue because it pits two important values against each other: digital privacy and protecting children from harm. Promising advancements such as <a href="https://www.theblock.co/post/352865/google-wallet-integrates-zk-proofs-a-tech-incubated-by-crypto-industry">Zero Knowledge Proof (ZKP) technology adopted by Google</a> and potentially <a href="https://eu-digital-identity-wallet.github.io/eudi-doc-architecture-and-reference-framework/2.4.0/discussion-topics/g-zero-knowledge-proof/">EU Digital Identity Wallets</a> may serve as future solutions, but fully protecting one&#8217;s privacy online remains very difficult. Moving forward, effective solutions must balance user privacy with safeguarding children from harm.</p><div><hr></div><p><strong>Did we miss anything?
Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-171-the-contradictions/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-171-the-contradictions/comments"><span>Leave a comment</span></a></p><div><hr></div><h3>&#128173; Insights &amp; Perspectives:</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: U.S. Copyright Guidance on Works Created with AI</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, explores how the U.S. Copyright Office&#8217;s 2023 guidance on works containing AI-generated material is shaping the legal landscape. The piece focuses on the human authorship requirement, clarifying that only work reflecting a human&#8217;s original creative input is eligible for protection. It also outlines the threshold for &#8220;sufficient human authorship,&#8221; noting that prompt-based generation typically does not qualify. A follow-up report issued in January 2025 reaffirms that existing copyright law remains flexible enough to apply to generative AI, while emphasizing the need for discernible human contribution. 
Since the guidance was issued, the Office has registered over 1,000 works containing some level of AI-generated material, reflecting how its evolving interpretation is being tested in practice and shaping ongoing debates around authorship, ownership, and creative accountability.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-u-s-copyright-guidance-on-works-created-with-ai/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-u-s-copyright-guidance-on-works-created-with-ai/">.</a></em></p></blockquote><h4><strong>Can Chatbots Replace Human Mental Health Support?</strong></h4><p>In this op-ed, Sabia Irfan examines the growing use of AI chatbots for mental health support, highlighting a 2025 survey in which nearly half of Americans reported using large language models for psychological support, with 75 percent seeking help for anxiety and nearly 60 percent for depression. This marks a dramatic shift from a 2021 survey by Woebot, which found that only 22 percent of adults had used a mental health chatbot, though 47 percent expressed willingness to try one. While these tools offer accessible, non-judgmental support, Irfan raises concerns about emotional over-reliance, lack of clinical oversight, and the risks of unsafe responses, including a recent case involving Character.ai. 
The piece also points to recent legislation in Illinois that restricts the use of AI in mental health services, underscoring the need for clearer boundaries, stronger safeguards, and professional involvement in the development of AI therapeutic tools.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/can-chatbots-replace-human-mental-health-support/">read the full article here</a></strong><a href="https://montrealethics.ai/can-chatbots-replace-human-mental-health-support/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Please help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? 
<strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item><item><title><![CDATA[The AI Ethics Brief #170: How the US and China Are Reshaping AI Geopolitics ]]></title><description><![CDATA[When deregulation meets multilateralism, privacy fails, and 'AI for Good' censors dissent.]]></description><link>https://brief.montrealethics.ai/p/the-ai-ethics-brief-170-how-the-us</link><guid isPermaLink="false">https://brief.montrealethics.ai/p/the-ai-ethics-brief-170-how-the-us</guid><dc:creator><![CDATA[Montreal AI Ethics Institute]]></dc:creator><pubDate>Tue, 05 Aug 2025 14:03:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fbc1963a-b197-4558-9f41-7a2fba7114ca_1080x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Welcome to <strong>The AI Ethics Brief</strong>, a bi-weekly publication by the Montreal AI Ethics Institute. We publish <strong>every other Tuesday at 10 AM ET.</strong> Follow MAIEI on <a href="https://bsky.app/profile/mtlaiethics.bsky.social">Bluesky</a> and <a href="https://www.linkedin.com/company/mtlaiethics">LinkedIn.</a> </em></p><p>&#10084;&#65039; <a href="https://montrealethics.ai/donate/">Support Our Work.</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><h3><strong>&#128204; Editor&#8217;s Note</strong></h3><div><hr></div><h4><strong>In this Edition (TL;DR)</strong></h4><ul><li><p>We examine the <strong>near-simultaneous release of competing AI governance visions</strong>, the US AI Action Plan and China's Global AI Governance Action Plan, and explore what this superpower bifurcation means for the rest of the 
world.</p></li><li><p>From this 30,000-foot view, we take a <strong>concrete look at what the US AI Action Plan means in practice</strong> through environmental concerns and state rights, showing how it treats local state autonomy as a barrier rather than a cornerstone of American federalism.</p></li><li><p>We explore <strong>privacy failures through the lens of thousands of ChatGPT conversations</strong> that became publicly searchable on Google, exposing deeply personal mental health discussions before the feature was removed, revealing the gap between user expectations and platform design.</p></li><li><p>We share <strong>two contrasting perspectives on the UN AI for Good Summit 2025, </strong>the troubling censorship of keynote speaker Dr. Abeba Birhane, alongside the concrete policy developments highlighted in our AI Policy Corner with GRAIL at Purdue University.</p></li><li><p>Finally, we <strong>conclude our four-part AVID blog series on red teaming as critical thinking</strong>, introducing military-derived techniques like premortems and the Five Whys to help organizations identify AI system vulnerabilities through systematic analysis.</p></li></ul><p><strong>What connects these stories:</strong> The persistent gap between stated intentions and actual practice in AI governance, whether in international cooperation, environmental protection, user privacy, or ethical discourse, revealing how institutional priorities consistently override genuine accountability.</p><p><strong>Brief #170 Banner Image Credit: </strong><em>Pink Office</em> by Jamillah Knowles &amp; Digit, featured in <a href="https://betterimagesofai.org/images?artist=JamillahKnowles&amp;title=PinkOffice">Better Images of AI,</a> licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a>.</p><div><hr></div><h3>&#128270; One Question We&#8217;re Pondering:</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>When the world's two AI superpowers unveil competing visions within days of each other, are we witnessing strategic coordination or dangerous 
fragmentation in global AI governance? </strong></h4><p>On July 23, 2025, the Trump administration released <em><a href="https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/">Winning the Race: America&#8217;s AI Action Plan,</a></em> the culmination of <a href="https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/">Executive Order 14179</a> signed on January 23, 2025. The Action Plan outlines over <a href="https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/?utm_source=chatgpt.com">90 Federal policy actions</a> across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International AI Diplomacy and Security. It was accompanied by three Executive Orders addressing <a href="https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/">&#8220;Woke AI&#8221;</a> prevention in the federal government, <a href="https://www.whitehouse.gov/presidential-actions/2025/07/accelerating-federal-permitting-of-data-center-infrastructure/">accelerating data center permitting,</a> and <a href="https://www.whitehouse.gov/presidential-actions/2025/07/promoting-the-export-of-the-american-ai-technology-stack/">promoting AI technology exports.</a> </p><p>Three days later, on July 26, 2025, China unveiled its <a href="https://english.www.gov.cn/news/202507/26/content_WS6884bea8c6d0868f4e8f4732.html">Global AI Governance Action Plan</a> at the World Artificial Intelligence Conference in Shanghai. 
The Governance Plan proposes a <a href="https://www.ansi.org/standards-news/all-news/2025/08/8-1-25-china-announces-action-plan-for-global-ai-governance">13-point roadmap</a> for multilateral cooperation and UN-anchored standards, part of Premier Li Qiang&#8217;s call for a &#8220;<a href="https://english.www.gov.cn/news/202507/26/content_WS6884bea8c6d0868f4e8f4732.html">global AI cooperation organization</a>&#8221; headquartered in <a href="https://english.shanghai.gov.cn/en-WAICLatest/20250727/962fc5e93b78432b99cdb883ae614ed5.html">Shanghai.</a> Building on President Xi Jinping's <em><a href="https://www.mfa.gov.cn/eng/zy/gb/202405/t20240531_11367503.html">2023 Global AI Governance Initiative,</a></em> the Governance Plan reflects China's ambition to shape global AI governance through infrastructure development, open-source collaboration, and international cooperation.</p><p>Taken together, these near-simultaneous releases signal a clear bifurcation:</p><ol><li><p><strong>U.S. Deregulation-First:</strong> The Action Plan explicitly dismantles Biden-era AI safeguards, removing references to bias, climate impact, and diversity, and insists federal agencies procure only &#8220;truth-seeking&#8221; and &#8220;ideologically neutral&#8221; systems. As <strong>Tech Policy Press</strong> warns us in two must-read articles on this topic, this is <a href="https://www.techpolicy.press/the-trump-ai-action-plan-is-deregulation-framed-as-innovation/">deregulation framed as innovation,</a> privileging speed and scale over democratic safeguards, while also having the potential to <a href="https://www.techpolicy.press/trumps-order-against-woke-ai-will-create-real-harm/">create real harms from politicized AI design</a>. Potential pitfalls include biased hiring algorithms, opaque public-sector AI tools for policing, inequitable health-care and education models, and the sidelining of climate-informed AI research. 
</p></li></ol><blockquote><p><strong>Related:</strong> In <a href="https://brief.montrealethics.ai/i/166771550/are-civil-liberties-at-risk-when-algorithmic-surveillance-becomes-the-new-playbook-for-immigration-enforcement">Brief #168</a>, we examined how AI-powered immigration enforcement has expanded predictive policing and surveillance capabilities, eroding civil liberties protections in the process. </p></blockquote><ol start="2"><li><p><strong>China&#8217;s Multilateralism:</strong> Beijing&#8217;s Governance Plan positions China as the champion of trustworthy AI. It leverages development aid, standards harmonization, and UN forums to build soft power and set global norms, while simultaneously cultivating &#8220;homegrown alternatives&#8221; to Western AI stacks. Most notable is the <a href="https://www.businesstimes.com.sg/companies-markets/telcos-media-tech/chinese-ai-firms-form-alliances-build-domestic-ecosystem-amid-us-curbs">Model-Chip Ecosystem Innovation Alliance</a>, announced July 28 at the conclusion of the World Artificial Intelligence Conference in Shanghai, which unites Chinese LLM developers and domestic AI chip manufacturers, reducing reliance on foreign technology and bolstering China&#8217;s standard-setting influence.</p></li></ol><h5>Global Implications: The View from Canada and Beyond</h5><p>For Canada and other middle powers, this bifurcation presents both opportunities and risks. The US Action Plan&#8217;s emphasis on allied cooperation through <a href="https://www.whitehouse.gov/presidential-actions/2025/07/promoting-the-export-of-the-american-ai-technology-stack/">full-stack American AI export packages </a>suggests potential benefits for countries that align with American AI infrastructure. 
However, the administration's <a href="https://montrealethics.ai/the-paris-ai-summit-deregulation-fear-and-surveillance/">rejection of international AI safety frameworks</a> could leave allies exposed to regulatory arbitrage and a race to the bottom in AI standards. </p><p>Canada's own <a href="https://ised-isde.canada.ca/site/ai-strategy/en">AI governance approach,</a> emphasizing human rights, transparency, and multilateral cooperation, sits between these two poles. The question becomes: can smaller nations maintain sovereign AI governance standards when superpowers are sacrificing safety while competing for technological dominance?</p><h5>The Missing Middle Ground</h5><p>What's most concerning is the risk of <strong>regulatory capture </strong>in AI governance. As discussed in the April 2025 edition of the <a href="https://harvardlawreview.org/print/vol-138/co-governance-and-the-future-of-ai-regulation/">Harvard Law Review</a>, regulatory capture occurs when agencies created to act in the public interest instead advance the commercial or political concerns of special interest groups. Both plans appear designed to serve corporate interests through different mechanisms, while genuine public interest considerations are relegated to secondary status.</p><p>The rapid succession of these announcements suggests we're entering a new phase of AI geopolitics where governance frameworks become tools of strategic competition rather than collaborative efforts to manage shared risks. For the international community, this raises fundamental questions about whether effective AI governance is possible in an increasingly multipolar world where technological sovereignty overshadows cooperative safety measures.</p><h5>Looking Forward</h5><p>As AI systems become more powerful and pervasive, the stakes of this governance competition extend far beyond national competitiveness. 
Climate models, medical diagnoses, financial systems, and democratic processes all depend on AI systems whose development is now explicitly caught between competing ideological and geopolitical frameworks.</p><p>The next few months will reveal whether this represents a temporary divergence or a permanent fracture in global AI governance. For Canada and other nations seeking to chart an independent course, the challenge is maintaining a commitment to evidence-based, rights-respecting AI governance while navigating pressure to choose sides in the technological competition between Washington DC and Beijing.</p><blockquote><p><strong>Recommended Reading:</strong> <a href="https://www.brookings.edu/articles/what-to-make-of-the-trump-administrations-ai-action-plan/">What to make of the Trump administration&#8217;s AI Action Plan (Brookings)</a></p></blockquote><div><hr></div><p>Please share your thoughts with the MAIEI community: </p><blockquote><p><strong>Are we witnessing the beginning of parallel AI ecosystems, or can international cooperation on AI governance survive this moment of strategic competition?</strong></p></blockquote><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-170-how-the-us/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-170-how-the-us/comments"><span>Leave a comment</span></a></p><div><hr></div><h3>&#128680; Here&#8217;s Our Take on What Happened Recently</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Environmental Concerns and State Rights Under Trump&#8217;s AI Action Plan</strong></h4><p>President Trump&#8217;s aim to <a 
href="https://www.politico.com/news/2025/07/23/trump-derides-copyright-and-state-regs-in-ai-action-plan-launch-00472443">&#8220;do everything possible to expedite construction of all major AI infrastructure projects</a>&#8221; includes sacrificing national land and environmental protections, as revealed in the <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">AI Action Plan</a>. Notably, the Action Plan provides policy recommendations such as exempting data centers from <a href="https://www.epa.gov/nepa/what-national-environmental-policy-act">National Environmental Policy Act (NEPA)</a> provisions, constructing data centers on federal lands, accelerating environmental permitting, and considering <a href="https://www.epa.gov/cwa-404/permit-program-under-cwa-section-404">Clean Water Act Section 404 permits</a> to allow data centers to discharge <a href="https://www.epa.gov/cwa-404/further-revisions-cwa-regulatory-definition-discharge-dredged-material">dredged</a> or <a href="https://www.epa.gov/cwa-404/final-revisions-clean-water-act-regulatory-definitions-fill-material-and-discharge-fill-0">fill</a> materials into water sources. Moreover, the plan seeks to &#8220;consider a state&#8217;s AI regulatory climate when making funding decisions,&#8221; and decrease funding for states with heightened regulations.</p><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>Such actions depict a growing &#8220;win at all costs&#8221; mentality in the US-China tech arms race, rolling back Biden-era climate policies and exposing federal lands to environmental degradation, as the Trump administration works to bypass NEPA protections. 
Data centers, which house the physical infrastructure for AI tools, are known to have a significant environmental impact, especially on <a href="https://www.innovatingcanada.ca/technology/future-of-ai-2024/artificial-intelligence-not-so-artificial/">water availability and the electricity grid</a>. While Texas faces severe droughts, data centers in the state are drawing millions of gallons of water, with <a href="https://economictimes.indiatimes.com/news/international/us/texas-ai-data-centers-water-usage-texas-ai-centers-guzzle-463-million-gallons-now-residents-are-asked-to-cut-back-on-showers-ai-news/articleshow/122983253.cms?from=mdr">463 million gallons</a> used between 2023 and 2024 alone. </p><p>Furthermore, as MAIEI reported in <a href="https://brief.montrealethics.ai/i/161541651/when-ai-overloads-the-grid-from-new-zealand-to-spain">Brief #164</a>, data centers are undermining energy stability in New Zealand and Spain. Although the AI Action Plan seeks to <a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">&#8220;Develop a grid to match the pace of AI innovation</a>,&#8221; new projects to expand the grid often first require <a href="https://www.reuters.com/legal/government/trump-plans-executive-orders-power-ai-growth-race-with-china-2025-06-27/">comprehensive multi-year impact studies</a>. The electricity grid is already strained; it cannot absorb new demand while existing infrastructure remains insufficient.</p><p>In <a href="https://brief.montrealethics.ai/i/165534371/the-real-cost-of-data-centres-economic-promise-and-environmental-trade-offs">Brief #167</a>, we argued that communities should exercise their autonomy to push back against the many problematic aspects of data center development projects. 
States like South Carolina have done precisely this, <a href="https://www.thecentersquare.com/national/article_c6cca580-45c5-4c7f-95c4-1dc5e9524f8a.html">increasing electricity costs</a> for data centers and other significant energy consumers. The state is also considering <a href="https://www.thecentersquare.com/national/article_c6cca580-45c5-4c7f-95c4-1dc5e9524f8a.html">capping data center tax incentives</a> to protect residents from spiking electricity costs. Despite these developments, the second-largest investment in South Carolina&#8217;s history, a <a href="https://scdailygazette.com/2025/04/28/good-and-instructive-news-on-data-centers/">$2.8 billion data center in Spartanburg County</a>, was announced this spring. South Carolina&#8217;s example shows that community-driven pushback can safeguard resident interests without deterring headline-making investments.</p><p>Yet, Trump&#8217;s AI Action Plan will decrease funding to states with environmental and data center regulations deemed incompatible with rapid development, a chilling declaration in the wake of the One Big Beautiful Bill Act provision, which sought a <a href="https://www.quarles.com/newsroom/publications/no-ai-moratorium-for-now-but-what-comes-next">10-year moratorium</a> on state AI regulation (the Senate ultimately removed this provision). 
As demand grows for data centers that degrade local environments and drain state resources, federal respect for state autonomy is dwindling.</p><p>By penalizing states for exercising stronger environmental or zoning rules, the AI Action Plan flips the usual respect for states&#8217; rights on its head, treating local autonomy as a barrier rather than a cornerstone of American federalism.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" 
height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>ChatGPT Privacy Breach: When Shared Conversations Become Google Search Results</strong></h4><p>In late July 2025, <a 
href="https://techcrunch.com/2025/07/31/your-public-chatgpt-queries-are-getting-indexed-by-google-and-other-search-engines/">thousands of ChatGPT conversations</a> began appearing in Google search results after users clicked the platform's <a href="https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations">"Share" feature.</a> Nearly 4,500 conversations became publicly indexed, including discussions about deeply personal details like addiction struggles, physical abuse experiences, and severe mental health issues.</p><p>The mechanism was straightforward: when users clicked "Share" to send conversations to friends or save URLs, ChatGPT created publicly accessible links that weren't protected from Google's indexing. Many users appeared unaware that their conversations would become publicly searchable, presumably thinking they were sharing with a small audience.</p><p>OpenAI's Chief Information Security Officer, <a href="https://x.com/cryps1s/status/1951041845938499669">Dane Stuckey,</a> quickly announced the removal of the feature after widespread criticism, describing it as "a short-lived experiment to help people discover useful conversations." 
However, OpenAI maintains that users had to explicitly select an option to make chats visible in web searches.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OTUd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OTUd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 424w, https://substackcdn.com/image/fetch/$s_!OTUd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 848w, https://substackcdn.com/image/fetch/$s_!OTUd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 1272w, https://substackcdn.com/image/fetch/$s_!OTUd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OTUd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png" width="532" height="800" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:532,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:142376,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/169449213?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OTUd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 424w, https://substackcdn.com/image/fetch/$s_!OTUd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 848w, https://substackcdn.com/image/fetch/$s_!OTUd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 1272w, https://substackcdn.com/image/fetch/$s_!OTUd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb8f9fc-a6c5-4d30-aeb3-baf0e1a355d2_532x800.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption"><a href="https://x.com/cryps1s/status/1951041845938499669">OpenAI CISO Dane Stuckey on X (July 31, 2025)</a></figcaption></figure></div><p>The timing is particularly concerning given that <a href="https://sentio.org/ai-blog/ai-survey">nearly half of Americans in a survey</a> from earlier this year say they&#8217;ve used large language models for psychological support in the past year, with 75% seeking help with anxiety and nearly 60% for depression.</p><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>This incident exposes a fundamental misalignment between how users understand AI privacy and how these systems actually work. 
While OpenAI claims users had to explicitly select search visibility, the real issue is the <strong>cognitive gap</strong> between user intent and platform design.</p><p>When someone shares a ChatGPT conversation, their mental model involves targeted sharing, like forwarding a text message. The leap from &#8220;sharing with a friend&#8221; to &#8220;making globally searchable&#8221; represents a design failure that prioritizes data exposure over user expectations. As <a href="https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations">privacy scholar Carissa Veliz</a> noted about the incident: </p><blockquote><p><em>&#8220;As a privacy scholar, I&#8217;m very aware that that data is not private, but of course, &#8216;not private&#8217; can mean many things, and that Google is logging in these extremely sensitive conversations is just astonishing.&#8221;</em></p></blockquote><p>This follows a troubling pattern where convenience features mask significant privacy trade-offs. Similar issues emerged with <a href="https://www.bbc.com/news/articles/c0573lj172jo">Meta's AI systems,</a> which began sharing user queries in public feeds. As <a href="https://www.bbc.com/news/articles/c0573lj172jo">cybersecurity analyst Rachel Tobac</a> observed: </p><blockquote><p><em>&#8220;Many users aren't fully grasping that these platforms have features that could unintentionally leak their most private questions, stories, and fears.&#8221;</em></p></blockquote><p>This incident represents more than individual privacy violations. 
It signals the normalization of what <a href="https://utopiainbeta.substack.com/p/online-households-and-digital-gossip">journalist and writer Isabelle Castro</a> describes as our data becoming <strong>&#8220;the digital gossip of our time,&#8221;</strong> where corporations and governments collect personal information and are &#8220;poised to use something that is ours for their gain,&#8221; just as the gossip industry did in Warren and Brandeis's era.</p><p>When AI conversations about mental health, trauma, and personal struggles become searchable public records, it exposes the most intimate aspects of users' lives without their informed consent. Both the <a href="https://venturebeat.com/ai/openai-removes-chatgpt-feature-after-private-conversations-leak-to-google-search/">OpenAI</a> and <a href="https://www.bbc.com/news/articles/c0573lj172jo">Meta AI</a> incidents reveal a fundamental breakdown in the expectation that personal conversations, even with AI systems, maintain some boundary between private reflection and public exposure.</p><p>OpenAI CEO Sam Altman <a href="https://www.cnbctv18.com/technology/chatgpt-therapy-conversations-may-not-be-private-warns-openai-ceo-sam-altman-19644082.htm">recently warned</a> that user conversations with ChatGPT could be subpoenaed and used in court, noting, &#8220;If someone confides their most personal issues to ChatGPT, and that ends up in legal proceedings, we could be compelled to hand that over.&#8221; These incidents demonstrate why privacy must be designed into AI systems from the ground up, rather than addressed after public outcry.</p><p>Building trustworthy AI requires fundamentally rethinking how we architect these platforms to respect user mental models of privacy and provide meaningful control over personal data. 
As we integrate AI into intimate aspects of our lives, robust privacy protections become essential for maintaining the trust these systems need to function effectively.</p><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>Behind the 'AI for Good' Brand: When Industry Summits Censor Dissent</strong></h4><p>While the <a 
href="https://aiforgood.itu.int/summit25/">UN's AI for Good Global Summit 2025,</a> held from July 8 to 11 in Geneva, brought together over 11,000 participants to discuss AI governance and launched several new initiatives, a troubling incident of censorship revealed the gap between the summit's stated mission and its actual treatment of critical voices. <a href="https://aial.ie/blog/2025-ai-for-good-summit/">Dr. Abeba Birhane,</a> a leading AI ethics researcher, was reportedly pressured by conference organizers at the International Telecommunication Union (ITU) to remove critical content from her keynote speech. </p><p>According to a <a href="https://aial.ie/blog/2025-ai-for-good-summit/">blog post</a> published by the <a href="https://aial.ie/">Artificial Intelligence Accountability Lab (AIAL)</a> at Trinity College Dublin, Dr. Birhane was forced to remove slides, delete any mention of &#8220;Gaza,&#8221; &#8220;Palestine,&#8221; or &#8220;Israel,&#8221; and change the word &#8220;genocide&#8221; to &#8220;war crimes,&#8221; until only a single slide calling for &#8220;No AI for War Crimes&#8221; remained.</p><p>The controversy intensified when organizers allegedly gave her an ultimatum just two hours before her scheduled talk. Despite the pressure, Dr. Birhane proceeded with a heavily modified version of her presentation, later publishing both the <a href="https://aial.ie/blog/2025-ai-for-good-summit/slides_original.pdf">original</a> and <a href="https://aial.ie/blog/2025-ai-for-good-summit/slides_censored.pdf">censored</a> versions of her slides to highlight the extent of the censorship at what's supposed to be a &#8220;social good&#8221; event.</p><p>As <a href="https://aial.ie/blog/2025-ai-for-good-summit/">Dr. 
Birhane wrote:</a></p><blockquote><p>The general global trend towards authoritarianism, censorship and silencing of academics, journalists alike that stand up for fundamental rights, the rule of law, and justice is difficult to deal with in the current climate. But for a Summit that supposedly <a href="https://aiforgood.itu.int/about-us/">claims</a> that &#8220;<em>AI for Good remains firmly aligned with the collective priorities of the international community</em>" and &#8220;[&#8230;] <em>it is our responsibility to ensure that no one is left behind</em>&#8221;, to then censor an invited keynote speaker that advocates for confronting difficult issues and engaging in self-reflection, is doubly disheartening.</p></blockquote><h5><strong>&#128204; MAIEI&#8217;s Take and Why It Matters:</strong></h5><p>This incident exemplifies classic <strong>ethics washing</strong>, where organizations use &#8220;social good&#8221; branding like &#8220;AI for Good&#8221; to appear ethically motivated while simultaneously suppressing actual ethical critique. The irony is stark: a summit called &#8220;AI for Good&#8221; censoring a speaker who wanted to address how AI technologies are being used to cause harm.</p><p>As we covered in <a href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-159-misuse-of">Brief #159</a>, where we highlighted <strong>Seher Shafiq&#8217;s</strong> recap of <a href="https://montrealethics.ai/from-funding-crisis-to-ai-misuse-critical-digital-rights-challenges-from-rightscon-2025/">RightsCon 2025</a>, AI systems <em>are</em> being weaponized in conflicts worldwide. 
In Gaza, AI tools are being used with <a href="https://www.972mag.com/lavender-ai-israeli-army-gaza/">dubious targeting criteria,</a> permitting up to 15-20 civilian deaths per "target," enabled by tech giants like Google and Amazon through <a href="https://www.aljazeera.com/news/2024/4/23/what-is-project-nimbus-and-why-are-google-workers-protesting-israel-deal">Project Nimbus.</a> Similar patterns have emerged with <a href="https://www.washingtonpost.com/opinions/2021/04/12/china-is-using-ai-repress-uyghurs-it-must-stop/">AI-driven persecution of Uyghurs</a> and <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/?utm_source=chatgpt.com">Facebook's AI-enabled role in actively promoting violence against Rohingyas.</a></p><p>This follows a familiar playbook across corporate social responsibility initiatives:</p><p><strong>Tech Industry Examples:</strong></p><ul><li><p><a href="https://www.theverge.com/2019/4/4/18296113/google-ai-ethics-board-ends-controversy-kay-coles-james-heritage-foundation">Google's AI Ethics Board (2019)</a> disbanded after only one week due to controversies surrounding the company&#8217;s selection of members.</p></li><li><p><a href="https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation">Google's firing of AI ethics researchers</a> Dr. Timnit Gebru (in December 2020) and Dr. 
Margaret Mitchell (in February 2021).</p></li><li><p><a href="https://www.brookings.edu/articles/the-facebook-oversight-boards-failed-decision-distracts-from-lasting-social-media-regulation/">Facebook's Oversight Board</a> was criticized for providing legitimacy while avoiding real accountability.</p></li></ul><p><strong>Broader Corporate Ethics Washing:</strong></p><ul><li><p><strong>Greenwashing</strong> at climate summits while continuing harmful practices</p></li><li><p><strong>Diversity initiatives</strong> focused on optics rather than systemic change</p></li><li><p><strong>Corporate social responsibility</strong> programs that deflect from core business harm</p></li></ul><p>The pattern reveals how &#8220;social good&#8221; or &#8220;ethics&#8221; becomes branding rather than genuine accountability. Organizations create ethics initiatives that provide legitimacy, control narratives, deflect criticism, and suppress dissent when real concerns threaten business interests.</p><p>As <strong>Asma Derja</strong> of the <a href="https://www.linkedin.com/posts/asma-derja_they-cut-the-livestream-mid-sentence-a-activity-7349799148755238912-blZ5">Ethical AI Alliance</a> put it bluntly: &#8220;You cannot claim to lead conversations on AI for Good while censoring ethical critique.&#8221;</p><div><hr></div><p><strong>Did we miss anything? 
Let us know in the comments below.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/p/the-ai-ethics-brief-188-the-names/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/p/the-ai-ethics-brief-188-the-names/comments"><span>Leave a comment</span></a></p><div><hr></div><h3>&#128173; Insights &amp; Perspectives:</h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bBvf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp" width="1456" height="137" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:137,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1704,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/159128290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bBvf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 424w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 848w, https://substackcdn.com/image/fetch/$s_!bBvf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1272w, 
https://substackcdn.com/image/fetch/$s_!bBvf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58100851-1e64-428a-8ba3-6aaa803c6ea9_1456x137.webp 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4><strong>AI Policy Corner: AI for Good Summit 2025</strong></h4><p>This edition of our <a href="https://montrealethics.ai/category/insights/ai-policy-corner/">AI Policy Corner, </a>produced in partnership with the <a href="https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html">Governance and Responsible AI Lab (GRAIL)</a> at Purdue University, examines conference proceedings from the <a href="https://aiforgood.itu.int/summit25/">UN AI for Good Global Summit 2025.</a> While our story on <em><strong>Behind the 'AI for Good' Brand: When Industry Summits Censor Dissent </strong></em>highlights concerning censorship issues at the summit, our GRAIL partner Alexander Wilhelm attended the event and provides insight into the concrete policy developments that emerged, including the launch of an <a href="https://aiforgood.itu.int/ai-standards-exchange/ai-standards-exchange-database/">AI Standards Exchange Database</a> covering over 700 standards, publication of the <a href="https://www.itu.int/net/epub/SG/SG/2025-UN-Activities-on-AI-Report-2024/index.html#p=1">UN Activities on AI Report</a> identifying 729 AI projects across 53 UN entities, and UNICC's new <a href="https://www.unicc.org/artificial-intelligence/">AI Hub</a> for training UN employees. 
These initiatives reflect the summit's stated multistakeholder approach, even as questions remain about whose voices are truly welcomed in these &#8220;AI for good&#8221; conversations.</p><blockquote><p><em>To dive deeper, <strong><a href="https://montrealethics.ai/ai-policy-corner-ai-for-good-summit-2025/">read the full article here</a></strong><a href="https://montrealethics.ai/ai-policy-corner-ai-for-good-summit-2025/">.</a></em></p></blockquote><h4><strong>Red Teaming is a Critical Thinking Exercise: Part 4</strong></h4><p>In the <a href="https://avidml.org/blog/red-teaming-4/">final installment of the AVID blog series,</a> the authors bridge strategic and tactical red teaming by introducing critical thinking tools from military doctrine, specifically <strong>premortems and the Five Whys technique</strong>, to identify organizational vulnerabilities before they manifest.</p><p>Drawing from the U.S. Army's Red Team Handbook, they demonstrate how premortems can help teams anticipate AI system failures by assuming failure from the outset and working backward to identify potential causes. The Five Whys, in turn, helps uncover the root purposes and risks of AI initiatives through systematic questioning across different organizational perspectives. The piece emphasizes that tactical-level red teaming should be driven by concerns uncovered through strategic analysis, using technical tools like <a href="https://garak.ai/">Garak,</a> <a href="https://github.com/Azure/PyRIT">PyRIT,</a> and <a href="https://github.com/Azure/counterfit">Counterfit</a> to investigate specific risks rather than conducting unfocused adversarial testing. 
</p><p>Ultimately, the authors conclude that effective red teaming requires cross-organizational participation and should function as a comprehensive critical thinking exercise that strengthens AI systems through systematic analysis rather than mere technical vulnerability hunting.</p><p>Parts <a href="https://avidml.org/blog/red-teaming-1/">1</a>, <a href="https://avidml.org/blog/red-teaming-2/">2</a> and <a href="https://avidml.org/blog/red-teaming-3/">3</a> can be found here.</p><blockquote><p><em>To dive deeper, <strong><a href="https://avidml.org/blog/red-teaming-4/">read the full article here</a></strong><a href="https://avidml.org/blog/red-teaming-4/">.</a></em></p></blockquote><div><hr></div><h3><strong>&#10084;&#65039; Support Our Work</strong></h3><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bzIU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png" width="1456" height="213" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:213,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://brief.montrealethics.ai/i/156386126?img=https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2Ff_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!bzIU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 424w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 848w, 
https://substackcdn.com/image/fetch/$s_!bzIU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1272w, https://substackcdn.com/image/fetch/$s_!bzIU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F093f61c1-be66-46ac-b3d2-b5477e121c7f_2000x293.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Please help us keep <em><strong>The AI Ethics Brief</strong></em> free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at <strong><a href="https://montrealethics.ai/donate">montrealethics.ai/donate</a>. </strong>Your support sustains our mission of democratizing AI ethics literacy and honours <strong><a href="https://montrealethics.ai/open-letter-moving-forward-together/">Abhishek Gupta&#8217;s</a></strong> legacy.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://brief.montrealethics.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://brief.montrealethics.ai/subscribe?"><span>Subscribe now</span></a></p><p>For corporate partnerships or larger contributions, please contact us at <strong><a href="mailto:support@montrealethics.ai">support@montrealethics.ai</a></strong></p><div><hr></div><h3>&#9989; Take Action:</h3><div><hr></div><p>Have an article, research paper, or news item we should feature? <strong>Leave us a comment below</strong> &#8212; we&#8217;d love to hear from you!</p>]]></content:encoded></item></channel></rss>