The AI Ethics Brief #171: The Contradictions Defining AI's Future
OpenAI's strategic reversals, why some philosophers say "ChatGPT is bullshit," and the battle between protection and privacy.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note
In this Edition (TL;DR)
One Question We're Pondering: Are we witnessing the beginning of the end for large language models as we know them? We examine GPT-5's mixed reception, NVIDIA and Georgia Tech researchers arguing for smaller specialized models, and University of Glasgow philosophers' peer-reviewed argument that "ChatGPT is bullshit," revealing fundamental questions about AI's relationship with truth.
OpenAI Returns to Open Source: We analyze OpenAI's first open-source models since GPT-2, examining the strategic contradiction of releasing capable models anyone can modify after years of justifying closed development for safety, with energy efficiency insights from Hugging Face researcher Sasha Luccioni.
YouTube's AI Age Verification: We explore the platform's new machine learning technology that predicts user age to block inappropriate content, examining the tension between child protection and privacy rights.
AI Copyright Battles Continue: Our AI Policy Corner with GRAIL at Purdue University examines the U.S. Copyright Office’s evolving guidance on works containing AI-generated material, including its January 2025 report and the registration of over 1,000 works containing some level of AI-generated material. Building on our coverage in Brief #168 on court rulings on fair use and Brief #169 on Cloudflare’s data governance shift, this development highlights how copyright has become the central arena where courts, infrastructure providers, and regulators are negotiating the future of cultural production in the age of generative AI.
Can Chatbots Replace Human Mental Health Support? Sabia Irfan examines the growing use of AI chatbots for mental health support, highlighting both accessibility benefits and concerns about emotional over-reliance and clinical oversight.
What connects these stories: The contradictions at the heart of AI development—between closed and open systems, revolutionary promises and incremental progress, protecting users while respecting privacy—revealing an industry grappling with its own stated values.
Brief #171 Banner Image Credit: Behaviour Power by Bart Fish & Power Tools of AI, featured in Better Images of AI, licensed under CC-BY 4.0.
🔎 One Question We’re Pondering:
Are we witnessing the beginning of the end for large language models as we know them?
The release of OpenAI's GPT-5 on August 7 was supposed to mark another triumphant milestone in our era of perpetual AI improvement. CEO Sam Altman hyped it as offering "PhD-level intelligence" and positioned it as a significant leap forward. Yet the reception has been notably mixed, with thousands of users on Reddit calling it "horrible" and "underwhelming."
Critics have noted that it provides shorter, less nuanced responses while limiting user access to previously available models. Ironically, some of these complaints may stem from OpenAI's efforts to reduce what they call "sycophancy," the tendency for AI systems to be excessively agreeable, flattering, or validating rather than providing honest, balanced responses. While OpenAI claims this makes GPT-5 more truthful, many users are experiencing it as a loss of the warmth and personality they valued in earlier versions.
This lukewarm reception comes at a fascinating inflection point. While we chase ever-larger models, researchers at NVIDIA and Georgia Tech are making the case that we're heading in the wrong direction entirely. Their preprint paper "Small Language Models are the Future of Agentic AI" argues that for most practical applications—especially the repetitive, specialized tasks that AI agents actually perform—smaller, more efficient models are not just adequate but superior.
Which brings us to a more fundamental question about the nature of these systems altogether. In their peer-reviewed paper "ChatGPT is bullshit," published in Ethics and Information Technology, philosophers Michael Townsen Hicks, James Humphries, and Joe Slater at the University of Glasgow build on Harry Frankfurt's influential work On Bullshit to argue that large language models aren't trying to convey truth at all; they're designed to produce convincing-sounding text regardless of accuracy.
Unlike lying, which requires intent to deceive, or what the industry euphemistically calls "AI hallucinations," this represents something more concerning: systematic indifference to truth itself. As the authors put it, these systems "cannot themselves be concerned with truth" and are fundamentally "indifferent to the truth of their outputs." This matters because framing AI errors as mere "hallucinations" suggests the systems are trying but failing to perceive reality correctly, when in fact, they're not designed to care about accuracy at all.
Perhaps the mixed reviews of GPT-5 aren't a bug but a feature, a sign that we're finally recognizing these tools for what they actually are: sophisticated text generators optimized for plausibility rather than truth, whose utility may be better served by smaller, specialized models rather than ever-larger general-purpose ones.
Please share your thoughts with the MAIEI community:
🚨 Here’s Our Take on What Happened Recently
OpenAI Returns to Open Source with GPT-OSS Release
OpenAI released GPT-OSS, their first open-source models since GPT-2 in 2019. The release includes multiple model sizes designed to be energy-efficient and capable of running on consumer hardware, available for download on Hugging Face. This marks a significant shift for a company that has moved increasingly toward closed, proprietary systems since GPT-3. The announcement of GPT-OSS on August 5 came in the same week as GPT-5's launch on August 7.
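For readers curious about what "running on consumer hardware" means in practice, here is a minimal sketch of loading an open-weight model locally with the Hugging Face transformers library. The model identifier and generation settings below are illustrative assumptions rather than official instructions; consult the GPT-OSS model cards on Hugging Face for exact names and hardware requirements.

```python
# Minimal sketch: running an open-weight model locally via Hugging Face transformers.
# The model ID "openai/gpt-oss-20b" is assumed from the GPT-OSS announcement; verify it
# against the actual model card, along with RAM/VRAM requirements, before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed identifier; any smaller open model also works for testing
    device_map="auto",           # place weights on a GPU if available, otherwise fall back to CPU
)

prompt = "Summarize the trade-offs between large and small language models."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```

Running inference locally in this way is exactly what shifts energy use away from cloud data centres, which is why the efficiency analysis discussed below matters at scale.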
📌 MAIEI’s Take and Why It Matters:
This release represents a notable contradiction in OpenAI's strategy. For years, the company justified keeping powerful models closed by citing safety concerns, yet here they're releasing capable models that anyone can download and modify.
The timing alongside GPT-5's mixed reception suggests this reflects competitive pressures rather than strategic planning. As companies like Meta and Mistral push open-source alternatives, OpenAI appears to be recognizing that maintaining relevance in an increasingly open ecosystem may matter more than controlling access to powerful models.
More significantly, this aligns with growing arguments that smaller, specialized models may be the future. GPT-OSS represents OpenAI acknowledging that not every use case requires flagship models, potentially signalling a more nuanced approach where different model sizes serve different needs.
Beyond strategic considerations, the environmental implications are significant. As Hugging Face researcher Sasha Luccioni demonstrates in her analysis, GPT-OSS models consume substantially less energy while maintaining competitive performance. The cumulative impact of millions of users running models locally rather than through energy-intensive cloud infrastructure could be considerable.
If OpenAI is comfortable releasing these models openly, it raises questions about whether their previous arguments for closed development were primarily about safety or about maintaining competitive advantage. As the industry faces mounting pressures around energy consumption, shifting policy priorities, and evolving regulatory frameworks, GPT-OSS suggests that openness may become less of an ideological choice and more of a business necessity.
YouTube’s AI-Powered Age Verification: An Invasion of Privacy or a Mechanism to Protect Children?
YouTube began testing machine learning age-verification technology on a subset of American users this past week. The technology predicts a user's age from several factors, including video preferences and the account creation date. If a user is inferred to be under the age of 18, they will be blocked from accessing content deemed inappropriate, will receive well-being reminders, and will not be shown targeted ads. In the event of an incorrect age estimate, users may prove they are adults by providing a credit card or a government ID. If the trial is successful, this technology, which has already been implemented in other markets, will expand throughout the United States.
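To make concrete what "predicting a user's age" from behavioural signals might involve, here is a purely illustrative sketch of an age-inference classifier. This is not YouTube's system: the features, training data, and decision threshold are hypothetical assumptions, chosen only to show how probabilistic such inferences are, which is precisely why misclassification and appeals become an issue.

```python
# Purely illustrative sketch of behavioural age inference -- NOT YouTube's actual system.
# Features, training data, and the decision threshold are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-account features:
# [account_age_days, share_of_gaming_views, share_of_news_views, avg_session_minutes]
X_train = np.array([
    [30,   0.70, 0.00, 95],   # behaviour pattern labelled "likely under 18"
    [2400, 0.10, 0.40, 40],   # behaviour pattern labelled "likely adult"
    [90,   0.60, 0.05, 120],
    [3600, 0.20, 0.50, 25],
])
y_train = np.array([1, 0, 1, 0])  # 1 = inferred minor, 0 = inferred adult

model = GradientBoostingClassifier().fit(X_train, y_train)

new_account = np.array([[45, 0.65, 0.02, 110]])
p_minor = model.predict_proba(new_account)[0, 1]

if p_minor > 0.5:  # hypothetical decision threshold
    print("Apply teen protections: restrict content, show well-being reminders, disable targeted ads")
else:
    print("Treat as adult account")
```

Because such a classifier only estimates a probability, some adults will inevitably be flagged as minors, which is what triggers the credit card or government ID verification step described above.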
📌 MAIEI’s Take and Why It Matters:
YouTube’s age verification measures are only one example of a global trend. Earlier this summer, for instance, the United States Supreme Court upheld a Texas law requiring identification to view sexually explicit material, with the aim of protecting children from such content. Moreover, the European Commission is entering a pilot phase of testing online age verification measures that will be compatible with the new European Union Digital Identity Wallets.
These measures are highly controversial. They raise digital privacy concerns in an era of increased government surveillance and potentially infringe on free speech online. Data breaches are also a serious concern, as in the UK, where many sites rely on third-party platforms to meet the age verification requirements of the Online Safety Act. YouTube and many third-party verification platforms pledge that data is tightly secured against leaks, but such safeguards often fall short. For instance, a breach of Tea (a controversial women-only app through which users share safety information and perspectives on potential male partners) exposed 72,000 images, many of them gender verification images that were supposed to be deleted shortly after verification.
On the other hand, such regulations are important for protecting the well-being of children, preventing them from viewing sexually explicit, violent, and otherwise harmful content, in addition to guarding against predatory advertising practices. Platforms such as YouTube have a history of hosting material aimed at exploiting minors. A notorious example is the Elsagate scandal, in which channels used “educational” themes and popular children’s characters from Disney and Nick Jr. to slip lewd and explicit content past the filtering mechanisms of YouTube Kids. One video, with the stated purpose of facilitating the “learning and development of children!”, depicted Paw Patrol characters in a strip club.
Age verification is a difficult issue because it pits two important values against each other: digital privacy and protecting children from harm. Promising advances such as zero-knowledge proof (ZKP) technology, which Google has adopted, and potentially the EU Digital Identity Wallets may serve as future solutions, but fully protecting one’s privacy online remains very difficult. Moving forward, effective solutions must balance user privacy with safeguarding children from harm.
Did we miss anything? Let us know in the comments below.
💭 Insights & Perspectives:
AI Policy Corner: U.S. Copyright Guidance on Works Created with AI
This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, explores how the U.S. Copyright Office’s 2023 guidance on works containing AI-generated material is shaping the legal landscape. The piece focuses on the human authorship requirement, clarifying that only work reflecting a human’s original creative input is eligible for protection. It also outlines the threshold for “sufficient human authorship,” noting that prompt-based generation typically does not qualify. A follow-up report issued in January 2025 reaffirms that existing copyright law remains flexible enough to apply to generative AI, while emphasizing the need for discernible human contribution. Since the guidance was issued, the Office has registered over 1,000 works containing some level of AI-generated material, reflecting how its evolving interpretation is being tested in practice and shaping ongoing debates around authorship, ownership, and creative accountability.
To dive deeper, read the full article here.
Can Chatbots Replace Human Mental Health Support?
In this op-ed, Sabia Irfan examines the growing use of AI chatbots for mental health support, highlighting a 2025 survey in which nearly half of Americans reported using large language models for psychological support, with 75 percent seeking help for anxiety and nearly 60 percent for depression. This marks a dramatic shift from a 2021 survey by Woebot, which found that only 22 percent of adults had used a mental health chatbot, though 47 percent expressed willingness to try one. While these tools offer accessible, non-judgmental support, Irfan raises concerns about emotional over-reliance, lack of clinical oversight, and the risks of unsafe responses, including a recent case involving Character.ai. The piece also examines recent legislation in Illinois that restricts the use of AI in mental health services, underscoring the need for clearer boundaries, stronger safeguards, and professional involvement in the development of AI therapeutic tools.
To dive deeper, read the full article here.
❤️ Support Our Work
Please help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!



