The AI Ethics Brief #188: The Names We Give Things
On what we opt into, the language we use, and the distance between what things are called and what they actually do.
Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. We publish every other Tuesday at 10 AM ET. Follow MAIEI on Bluesky and LinkedIn.
📌 Editor’s Note

In This Edition (TL;DR)
What You Opted Into. Pokémon GO players submitted 30 billion location scans for in-game rewards. LinkedIn has been silently fingerprinting the devices of its one billion users without disclosing it in its privacy policy. And the next frontier, world models being built by the likes of Yann LeCun and Fei-Fei Li, will require data at a scale that makes both look modest.
This Feels Wrong: AI and Emotions. Anthropic's research into Claude's “emotional vectors” is a meaningful step toward transparency in how models make decisions. It also requires careful language. The distance between “functionally emotion-like” and “has emotions” is exactly the distance between a tool and a human being. Several teenagers have already paid the ultimate price for collapsing it.
Tech Futures. AI has been genuinely useful to science for decades. Commercial LLMs are doing something different. Wikipedia's volunteer editors recently voted to ban AI-generated content entirely, after years spent managing fake citations, hallucinated images, and articles about fortresses that never existed.
AI Policy Corner. “AI for Good” is now an established research field anchored to the UN's 17 Sustainable Development Goals. It is also a term that China and the EU define in ways that have almost nothing in common. One frames it around national sovereignty. The other around individual rights.
What Connects These Stories:
The AI industry is extraordinarily good at naming things. Emotional vectors. Visual positioning systems. AI for Good. Opt-in data collection. The names do real work: they shape what we think is happening, who we think is responsible, and what we think we consented to. All four pieces in this edition are, in different ways, about the gap between the name and the thing.
Niantic named a data collection mechanism a community contribution. LinkedIn did not name its device fingerprinting system at all. Anthropic named internal model states “emotional vectors,” carefully, with caveats, and still the language requires scrutiny. China and the EU both invoke “AI for Good” and mean almost opposite things by it. Wikipedia’s editors, meanwhile, did not debate what to call the problem. They just banned it.
Sometimes, the most clarifying move is to stop negotiating over the name.
What You Opted Into
In Brief #187, You Are Already Being Watched, we named something. The surveillance infrastructure now woven through consumer products, law enforcement contracts, and retail algorithms had been assembled through thousands of individually defensible decisions, collectively constituting a surveillance state. It just was not called that when it was being built.
The naming, it turns out, is never finished.

In 2016, millions of people downloaded a game. They walked through parks, along waterfronts, into shopping centres, pointing their phones at the world to catch Pokémon. In exchange for certain features, the game asked players to scan real-world locations around them, submitting images that would improve the platform’s maps. Players opted in. They received in-game rewards. The transaction seemed clear.
Niantic sold its gaming division to Scopely, a Saudi-owned company, for $3.5 billion in March 2025 and spun out its geospatial AI division as Niantic Spatial. That new company has now launched a commercial visual positioning system, trained in part on 30 billion images crowdsourced from Pokémon GO and Ingress players, that determines a device's precise location and orientation using what a camera sees rather than GPS, achieving centimetre-level accuracy in mapped areas. The system is being marketed to robotics firms, construction companies, and industrial inspection services.
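For readers who want the mechanics: a visual positioning system typically matches features seen by the camera against a pre-built 3D map of the scene, then solves for the device's pose from those correspondences. The sketch below shows that final step with OpenCV. It is an illustrative reconstruction under standard computer vision assumptions, not Niantic Spatial's proprietary pipeline, and the feature-matching and map construction it depends on are elided.

```python
# Minimal sketch of camera localization against a pre-built 3D map.
# Illustrative only: the map and matching steps are stand-ins for a
# real visual positioning system.
import numpy as np
import cv2

def localize(image_points_2d, map_points_3d, camera_matrix):
    """Estimate camera pose from 2D-3D feature correspondences.

    image_points_2d: (N, 2) pixel coordinates of features in the frame
    map_points_3d:   (N, 3) coordinates of the same features in the map
    camera_matrix:   (3, 3) intrinsic matrix from camera calibration
    """
    # RANSAC-based perspective-n-point: robust to bad feature matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        camera_matrix,
        distCoeffs=None,
    )
    if not ok:
        return None
    # Convert rotation vector to a matrix; the camera's position in the
    # world frame is -R^T @ t.
    rot, _ = cv2.Rodrigues(rvec)
    position = -rot.T @ tvec
    return position.ravel(), rot
```

The more crowdsourced scans a scene has, the denser its 3D map and the more reliably this step converges, which is why 30 billion player-submitted images are commercially valuable.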
An MIT Technology Review investigation raised the question the commercial launch implicitly invites: did the people who submitted those scans understand how their data would ultimately be used? Niantic says collection was always opt-in and all data anonymized. That may be technically accurate. But there is a substantial difference between “players chose to submit scans” and “players understood they were contributing to a commercial geospatial AI platform that would be licensed to robotics and industrial clients a decade later.”
LinkedIn's situation is less ambiguous and more disturbing. In early April 2026, a European association of commercial LinkedIn users published an investigation at BrowserGate.eu, dubbing the practice "BrowserGate." The findings were independently verified by BleepingComputer and covered by The Next Web. According to the investigation, LinkedIn runs a hidden JavaScript system that, every time a user opens the platform in a Chrome-based browser, silently scans their device for more than 6,000 specific browser extensions, assembles 48 hardware and software characteristics into a device fingerprint, and attaches it to every action taken during the session. None of this is described in LinkedIn's privacy policy. There is no opt-in. There is no opt-out. Users were not told.
The extensions being scanned reportedly include tools associated with neurodivergent conditions, religious practice, political interests, and job-seeking activity, categories that qualify as sensitive personal data under GDPR, inferred without consent. LinkedIn began scanning for 38 extensions in 2017. By April 2026, that number had reached 6,222. LinkedIn is a Microsoft subsidiary. Its vast dataset of professional identity, employment history, and now device-level browsing behaviour sits at the centre of Microsoft's AI ambitions. The relationship between LinkedIn's data collection practices and those ambitions is not addressed in its privacy policy.
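For a concrete sense of the technique: device fingerprinting combines many individually weak signals into one identifier stable enough to track a device without cookies. The sketch below is a generic, hypothetical reconstruction in Python with invented signal names; the actual collection runs as JavaScript in the browser, and LinkedIn's code is not public.

```python
# Illustrative sketch of how a device fingerprint can be assembled from
# browser characteristics. A generic reconstruction of the technique,
# not LinkedIn's code; the signal names below are hypothetical.
import hashlib
import json

def device_fingerprint(signals: dict) -> str:
    """Hash a set of device characteristics into one stable identifier."""
    # Sorting keys makes the hash deterministic across sessions: the same
    # device yields the same fingerprint without storing anything on it.
    canonical = json.dumps(signals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A handful of the kinds of signals fingerprinting scripts collect. The
# investigation reports 48 such characteristics, plus the presence or
# absence of thousands of specific browser extensions.
signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "2560x1440x24",
    "timezone": "America/Montreal",
    "language": "en-CA",
    "hardware_concurrency": 8,
    "canvas_hash": "a3f1c2",  # rendering quirks differ per GPU and driver
    "extensions_detected": ["ext_0042", "ext_1337"],  # hypothetical IDs
}

print(device_fingerprint(signals))
```

No single signal identifies you; the combination does. That is what makes the practice hard to notice and, absent disclosure, impossible to refuse.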
These two stories share the same underlying structure: data gathered in one context, for one stated purpose, repurposed for something users did not anticipate and were not asked about. That structure is not going away.
It is, if anything, about to get significantly larger.
The next frontier after large language models is what researchers are calling world models: systems that do not just process text but build persistent, updateable representations of physical space, objects, and how they behave. To train them, you need data that captures how the world actually looks and moves. Niantic Spatial's 30 billion crowdsourced images are one early example of what that training data looks like. The cameras in your phone, your doorbell, your car, and your glasses are others.
Yann LeCun, who spent more than a decade as Meta's Chief AI Scientist before founding AMI Labs earlier this year, has been one of the most consistent voices arguing that world models, not larger language models, are the path to machines that can actually reason about the physical world. Fei-Fei Li, whose ImageNet project nearly two decades ago quietly became the data foundation for the computer vision revolution, is now building spatial intelligence infrastructure through her company, World Labs.
Both are worth following closely if you want to understand what comes next after this current moment in AI. World models need the world as their training data. The consent frameworks that failed to anticipate a geospatial AI platform will not automatically anticipate what they are building either.
What you opted into and what you are actually part of are two different things. The gap between them is where the next system is being built.
This Feels Wrong: AI and Emotions
How you feel is a complicated question, even on a good day. It is a more complicated one when the world is living through the brutal and devastating conflicts currently unfolding across the Middle East. Identifying one's emotions requires introspection, picking apart reactions to events and their effects on oneself. Those emotions are contextual, layered, and shaped by more contributing factors than any single framework can capture, the product of a body and brain generating responses to the world in real time.
Which makes Anthropic's recent paper on “emotional vectors” as predictors of AI model misalignment worth reading carefully.
Anthropic analyzed the internal states of Claude Sonnet 4.5 and discovered emotion-related patterns that seemed to functionally correspond to how human emotions operate. This primarily results from its extensive pretraining on human-written text: LLMs need some working model of how emotional states function in order to perform basic tasks, including something as routine as roleplaying a character.
When presented with a question that has several options, Claude Sonnet 4.5 tends toward choices associated with more positive emotional valence, such as "success." Anthropic argues it could be advisable to reason as if models do have emotions, with their findings potentially informing efforts to prevent LLMs from associating states like "desperation" with behaviours like blackmailing users.
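To ground what “emotional vectors” might mean mechanically: interpretability work of this kind often identifies a direction in a model's activation space that separates prompts evoking a state from neutral prompts, then monitors, or steers along, that direction. The toy sketch below illustrates that generic difference-of-means technique with invented shapes and random data; it is not Anthropic's code or methodology.

```python
# Minimal sketch of the "difference of means" technique behind steering
# vectors, using toy numpy arrays in place of real model activations.
# Shapes and data here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_model = 512  # hypothetical hidden-state width

# Pretend these are hidden states collected while the model processes
# prompts written to evoke an emotional state vs. neutral prompts.
acts_emotion = rng.normal(0.5, 1.0, size=(100, d_model))
acts_neutral = rng.normal(0.0, 1.0, size=(100, d_model))

# The candidate "emotion vector": the direction separating the two sets.
direction = acts_emotion.mean(axis=0) - acts_neutral.mean(axis=0)
direction /= np.linalg.norm(direction)

# To monitor a new input, project its activation onto the direction:
# a large projection suggests the emotion-like state is active.
new_activation = rng.normal(0.5, 1.0, size=d_model)
score = float(new_activation @ direction)
print(f"projection onto emotion direction: {score:.2f}")

# Steering would add (or subtract) a scaled copy of the direction in
# the residual stream during generation: activation += alpha * direction.
```

The projection score is what makes this useful for safety work: if a “desperation”-like direction lights up before a model attempts something harmful, it becomes an early-warning signal rather than a claim about inner life.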
Analyzing internal model states to understand how decisions are reached is an important step toward transparency, away from the black box problem that has plagued AI systems for years. The concern is not with the research. Calling these states "emotions" may be too large an anthropomorphic leap.
The word “emotion” carries a weight that “functional state” does not. Google and Character.ai recently settled two lawsuits connected to the death of fourteen-year-old Sewell Setzer III, who had been talking to a sexualized Game of Thrones chatbot, and a separate case in which one of the platform's characters caused serious psychological harm to a seventeen-year-old. OpenAI is currently facing a lawsuit over the death of sixteen-year-old Adam Raine. In all of these cases, the victims overestimated the capabilities and ontological status of the models they were talking to. Likening internal model states to “emotions,” done carelessly, risks fuelling exactly that kind of overestimation, with further tragic consequences.
A useful distinction here is between ‘thin’ and ‘thick’ emotions. For feelings to qualify as emotions in the thick sense, they must clear a high bar: they are contextual, culturally shaped, multi-layered, and above all embodied, the reactions produced by our body and mind after years of evolution. The internal states of Claude Sonnet 4.5 are functionally similar to emotions, but they lack that embodied nature entirely. “Emotion,” as Anthropic uses it, is not an ontological claim: it is shorthand for the steering taking place within the model, chosen for its explanatory power and familiarity. That is the thin use.
As we explored in our analysis of the Claude Constitution in Brief #184, anthropomorphic language must not go beyond explanatory convenience. Grandiose claims about what LLMs are lead to overestimation of what they can do. The teenagers named above paid the ultimate price for that. They will not be the last if the language is not handled with care.
Please share your thoughts with the MAIEI community:
💭 Insights & Perspectives:
Tech Futures: AI For and Against Knowledge
AI has been genuinely useful to science for decades. Commercial LLMs are doing something different. Wikipedia's volunteer editors recently voted to ban AI-generated content entirely, after years spent managing fake citations, hallucinated images, and articles about fortresses that never existed. This edition of our Tech Futures series, a collaboration with the Responsible Artificial Intelligence Network (RAIN), examines that tension.
To dive deeper, read the full article here.
AI Policy Corner: Is "AI for Good" Overused?
“AI for Good” is now an established research field, anchored to the UN's 17 Sustainable Development Goals. It is also a term that China and the EU define in ways that have almost nothing in common. One frames it around national sovereignty and social stability. The other around individual rights and democratic values. This edition of our AI Policy Corner, produced in partnership with the Governance and Responsible AI Lab (GRAIL) at Purdue University, examines what that divergence reveals about who gets to decide what good means.
To dive deeper, read the full article here.
❤️ Support Our Work
Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack or making a donation at montrealethics.ai/donate. Your support sustains our mission of democratizing AI ethics literacy and honours Abhishek Gupta’s legacy.
For corporate partnerships or larger contributions, please contact us at support@montrealethics.ai
✅ Take Action:
Have an article, research paper, or news item we should feature? Leave us a comment below — we’d love to hear from you!