The AI Ethics Brief #184: What Deserves Your Attention
On critical ignoring, Claude's constitution, and why attention is the scarcest resource in AI.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In this Edition (TL;DR)
Attention as a Scarce Resource: As AI slop floods digital ecosystems, critical ignoring (choosing what not to engage with) emerges as a missing layer of AI literacy. We explore the research, the education gap, and why trusted curation matters more than ever.
Claude’s Constitution: Anthropic published updated alignment guidelines framed as Claude’s own values. We unpack what this means for transparency, accountability, and trust when AI systems act as information intermediaries.
Tech Futures: Our collaboration with RAIN examines the UN's Independent International Scientific Panel on AI, arguing that rigorous science, not industry narratives, should shape how we understand AI.
AI Policy Corner: Our collaboration with GRAIL at Purdue University explores how AI policy intersects with labour outcomes in the US, with a particular focus on the Healthy Technology Act of 2025.
What connects these stories:
Every piece in this edition asks the same underlying question: who, or what, mediates the relationship between people and the information they act on?
Critical ignoring frames this at the individual level: how people decide what deserves their attention in an environment designed to exploit it. Claude’s Constitution raises the question at the systems level: what happens when the intermediary performing that triage is an AI whose values are presented as its own? The UN Scientific Panel asks it at the institutional level: whose knowledge and experience should shape how we understand AI globally? And the AI Policy Corner asks it at the regulatory level: how much human judgment do we preserve as AI enters professions built on trust?
The thread is consistent: as AI becomes more embedded in how information is produced, filtered, and delivered, the question of who holds editorial authority, and how that authority is designed, communicated, and governed, becomes central to every domain it touches.
Attention as a Scarce Resource
“Slop” was named Merriam-Webster’s 2025 Word of the Year, reflecting growing awareness of low-quality, mass-produced content. The term now captures a broader phenomenon: the rapid proliferation of synthetic media at scale. Across publishing, academic research, and platform ecosystems, output volumes are accelerating. Major machine learning conferences report record growth in submissions. Users increasingly navigate feeds where the origin, intent, and reliability of content range from ambiguous to actively misleading.
Over the past decade, the AI ethics community has also developed frameworks, governance models, and literacy competencies to guide responsible AI development and use. These efforts remain essential. Yet one dimension of AI literacy requires greater attention: the cognitive discipline needed to navigate environments saturated with content designed to capture attention rather than contribute to understanding.
This is where critical ignoring becomes essential.
Defining Critical Ignoring
Critical ignoring is the ability to choose what to ignore and where to focus limited attention. The concept was introduced by Anastasia Kozyreva, Sam Wineburg, and colleagues at the Max Planck Institute for Human Development and Stanford University in a 2023 paper titled Critical Ignoring as a Core Competence for Digital Citizens.
It is not disengagement but rather the disciplined allocation of attention. William James expressed the idea succinctly over a century ago: “The art of being wise is the art of knowing what to overlook.”
The framework identifies three strategies:
Self-nudging: redesigning one’s digital environment to reduce exposure to distractions and manipulative triggers.
Lateral reading: leaving a source to verify credibility before investing further attention.
The do-not-feed-the-trolls heuristic: recognizing that engagement can amplify bad-faith actors.
These strategies were articulated before generative AI became mainstream. Their relevance has intensified as content production costs approach zero and algorithmic systems reward engagement over discernment.
When Volume Becomes Distortion
AI-generated slop introduces a structural shift in the information environment. Much of this content is designed to occupy space, capture clicks, and satisfy ranking algorithms. In engagement-driven systems, distribution is often shaped more by responsiveness to algorithmic incentives than by epistemic quality. Whether content is accurate can become secondary to whether it performs.
Context matters, however. As we explored in Brief #165, AI-generated content in high-stakes contexts, such as active conflicts or political campaigns, operates differently, filling information vacuums with fabricated narratives that shape public understanding before credible sources can respond. In these environments, indifference to truth becomes a mechanism of distortion. The dynamics of attention and deception are therefore intertwined, existing on a continuum rather than as separate categories.
This genre of content is marked by superficial competence, asymmetric effort, and mass producibility. The cumulative effect is a declining signal-to-noise ratio across digital ecosystems. From academic publishing to video platforms, output is accelerating while systems for assessing quality struggle to keep pace. Platforms have introduced tools that allow users to limit certain forms of AI-generated content in their feeds, signalling recognition that quality management has become a governance challenge.
In such environments, critical thinking alone is insufficient. Research on online reasoning shows that sustained engagement with low-quality material can increase its visibility within algorithmic systems. When attention is monetized and amplified by design, engagement becomes an extractable resource.
Critical ignoring complements critical thinking by adding a prior step: deciding whether engagement is warranted at all. It operates at the cognitive level, equipping individuals to navigate systems that continue to reward engagement, whether or not that engagement advances understanding.
The Missing Layer in AI Literacy
This raises a broader educational question. A systematic literature review by Daniel Schiff and colleagues at Purdue University analyzed 25 empirical AI ethics education interventions between 2018 and early 2023.
The findings are encouraging. AI ethics education has adopted progressive pedagogical approaches, including case studies, group projects, hands-on exercises, and interdisciplinary discussions. Content extends beyond narrow technical compliance to include bias, fairness, privacy, power, social justice, and inequality, reflecting the sociotechnical nature of AI systems.
Yet, assessment practices lag behind. Only three of the 25 interventions explicitly defined learning objectives. Many programs relied on summative evaluations or research-oriented measurement rather than formative assessment designed to support student learning. The authors warn against valuing what is easily measured instead of measuring what truly matters.
If AI ethics education is still developing robust ways to assess ethical reasoning, how do we assess the capacity to manage attention responsibly in AI-saturated environments?
Ethical reasoning presumes that learners have already selected a problem worthy of deliberation. Critical ignoring addresses the prior cognitive task of triage: how individuals decide which information merits sustained reasoning in the first place.
When individuals cannot reliably perform that triage, they turn to trusted intermediaries to perform it for them.
Trust as Cognitive Shortcut
As content volume increases, trusted curation becomes more valuable. Reputation functions as a cognitive shortcut, with readers relying on institutional or brand credibility to decide where to allocate attention.
This reinforces the importance of human editorial judgment. The value of a publication lies not only in the information presented but in the decisions about what to include, how to contextualize it, and what to exclude. In algorithmically amplified environments, intentional curation stabilizes attention.
As AI reduces the marginal cost of producing fluent text, discernment grows in importance. The ability to generate coherent content is not the same as having something worth saying. Editorial intention is what distinguishes analysis from noise.
Building Critical Ignoring into AI Literacy
The convergence of cognitive research on attention and empirical research on AI ethics education suggests several priorities:
AI literacy frameworks should explicitly incorporate attention management as a metacognitive skill: not just understanding, evaluating, and reasoning about AI systems, but also navigating engagement with them.
Assessment practices need to evolve alongside this, with scenario-based exercises where learners demonstrate the ability to triage information quality, verify sources efficiently, and justify decisions not to engage.
Educators and policymakers should recognize that attention is a limited resource with ethical implications: engagement choices shape algorithmic systems, influence public discourse, and affect the visibility of ideas.
Critical ignoring does not imply withdrawal from civic life. It establishes the conditions for meaningful engagement. Individuals who attempt to engage with everything risk dispersing their cognitive resources across content designed to fragment attention. This is a practice exercised at the individual level, within systems that are structurally optimized for the opposite outcome.
The discipline the AI era demands is not greater exposure to information. It is a wiser allocation of attention.
Claude's Constitution: When the Intermediary Sets Its Own Terms
In late January, Anthropic published an updated set of guidelines for the values and behaviours on which Claude is trained, positioning it as an effort to keep Claude safe for its 'principals' (Anthropic, operators, and users) and to safeguard Claude's 'wellbeing.'
Rather than a prescriptive rulebook, Anthropic opted for a judgment-oriented approach to alignment: the constitution describes what good judgment looks like and trains the model to generalize from there. Instead of telling Claude what to do in every scenario, the document shows it how an 'ethical' person would act. It is written to and for Claude, as if the model had lost all its memory and needed to be reminded of its identity.
For readers seeking a deeper treatment of how safety, alignment, and ethics relate to and diverge from one another, Renée Sieber's chapter in SAIER Volume 7 offers a useful framework for disentangling these AI concepts as they evolve across institutional and national contexts.
Claude's actions are measured against four core values: being "broadly safe," "broadly ethical," compliant with Anthropic's guidelines, and "genuinely helpful." When these values conflict, safety takes priority. The constitution also provides a level of transparency that may reduce adoption risk for EU companies as the EU AI Act's transparency requirements take effect.
📌 MAIEI’s Take
That transparency is welcome. The constitution provides users with tangible insight into the design thinking behind the model's behaviour, and the judgment-oriented approach enables Claude to respond to novel scenarios with a more applicable framework rather than searching a rulebook for the relevant clause.
Anthropic’s approach aligns closely with virtue ethics: rather than determining right or wrong based on universal rules (deontology), it guides the model toward how an “ethical” person would act in a given situation. The operative phrase is how they think an ethical person would act. There is no universal “ethical actions” dataset that an LLM can train on. The values Claude is optimized toward are, inevitably, the values Anthropic holds, whether or not that is fully intended. This is not a flaw unique to Anthropic, but it is a reality the constitution’s framing tends to obscure rather than surface.
The implications extend beyond philosophical framing. If critical ignoring depends on individuals identifying trustworthy sources to perform information triage on their behalf, alignment documents become part of that trust infrastructure. Users increasingly interact with AI systems not just as tools but as intermediaries that filter, summarize, and contextualize information. How those intermediaries are designed and how their design is communicated matter.
Anthropic’s use of anthropomorphic language also goes beyond explanatory convenience. Claude is described as “a brilliant friend” and a “conscientious objector,” attributing layers of agency to the model that distance its developers from responsibility for its outputs. Rather than treating machine consciousness as an open question, the constitution frames it as something that may already exist in some form. When a model's values are presented as its own rather than as design choices of a specific company, users lose a critical layer of context for evaluating what they're being told.
This is compounded by how the framework is evaluated. Anthropic acknowledges that part of the feedback on the current framework comes from older Claude models. When the system evaluating the alignment framework is itself a product of prior alignment decisions, the feedback loop narrows rather than broadens.
None of this diminishes the value of the effort. Documents like these are valuable and necessary. The AI ethics community has long called for greater transparency in how models are designed, and Anthropic's constitution is a meaningful contribution. At the same time, presenting alignment as the model's own values rather than the product of specific human decisions risks obscuring the accountability that transparency is meant to enable. For a model environment that truly embodies AI safety, the human touch must be acknowledged and accounted for, not displaced onto the model itself.
Did we miss anything? Please share your thoughts with the MAIEI community:
💭 Insights & Perspectives:
Tech Futures: Diversity of Thought and Experience: The UN’s Scientific Panel on AI
This article is part of our Tech Futures series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Responsible Artificial Intelligence Network (RAIN). The series challenges mainstream AI narratives, proposing that rigorous research and science are better sources of information about AI than industry leaders. This second instalment of Tech Futures by RAIN celebrates the great potential of the UN’s Independent International Scientific Panel on AI, and the diversity of its membership.
To dive deeper, read the full article here.
AI Policy Corner: Automating Licensed Professions: Assessing Health Technology and Other Industries
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, explores how AI policy may affect labour outcomes in the United States by analyzing state and federal policies and relevant court opinions. Proposals that promote AI safety and support industry growth are common, but a recurring question is how much human oversight will be required. The Healthy Technology Act of 2025 illustrates how these oversight questions play out in practice.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!



