5 Comments
Bryan Caballero @ The Shield

The Brief raises a critical question about who mediates attention.

But mediation isn’t where risk ends.

It’s where it begins.

Because once mediated information becomes action, the system has already crossed the boundary into consequence.

And that’s where accountability frameworks are still incomplete.

Pino De Francesco

I disagree with the way you comment on Anthropic’s use of anthropomorphic language. The AI is indeed an autonomous machine, so developers cannot be held responsible for its output. As long as the LLM interface has transparent agentic filters that we all agree are safe, the output cannot possibly be the LLM developer’s responsibility. This goes for any software, not just AI systems.

noemie

At this point, doesn't it become a political conversation? Would you say it's the city's responsibility to ensure pedestrians' safety, and that if it designs a particularly low-visibility street, it bears some weight if anyone gets struck by a car?

Similarly, if I work at an automotive repair shop and I'm asked to carry heavy loads and I hurt myself, wouldn't it also be my employer's responsibility to ensure my safety?

I feel very concerned by this tendency to remove the responsibility of institutions towards those they serve, so to me it's evident that LLM developers have a huge share of responsibility, beyond transparency. As long as they are not held responsible, I fear our digital environment will get worse and worse. We cannot expect everyone to have the capacity to comprehend and manage the risks posed by AI, just as we cannot expect everyone to be an expert in any one field (I'm notably thinking of behavioral economics here).

Pino De Francesco

I'm afraid not. Your examples rest on agreed guardrails, exactly as the law defines the employer's responsibilities or the scope of the local authority's competency. Anything that goes beyond the set legal boundary cannot be enforced. Similarly, the developer cannot be held responsible beyond the agreed guardrails. Otherwise you should be able to sue the company that made a tyre that went flat: they cannot possibly be responsible beyond the agreed tyre characteristics you accepted when you decided to have it installed.

Comment deleted
Feb 17
Lyn Brooks

I am currently participating in the AISPP (AI Stewardship Practice Program), focused on advancing “Building Together” Trust & Data Sovereignty. Attached are:

a National Social Sector application

https://docs.google.com/document/d/1AbiQkwmlZF7hMXS2g9GX6yphYnfTm_ekQvFNtLZu5y4/edit?usp=drivesdk

and an International Healthcare approach

https://emhicglobal.com/case-studies/building-trust-the-apec-blueprint-for-a-sovereign-ai-powered-mental-health-ecosystem/

Pleased to expand.