I disagree with the way you comment on Anthropic's use of anthropomorphic language. The AI is indeed an autonomous machine, so developers cannot be held responsible for its output. As long as the LLM interface has transparent agentic filters that we all agree are safe, the output cannot be the LLM developer's responsibility. This goes for any software, not just AI systems.
I am currently participating in the AISPP (AI Stewardship Practice Program), focused on advancing "Building Together" Trust & Data Sovereignty. Attached are two related efforts:
- a National Social Sector application:
  https://docs.google.com/document/d/1AbiQkwmlZF7hMXS2g9GX6yphYnfTm_ekQvFNtLZu5y4/edit?usp=drivesdk
- an International Healthcare approach:
  https://emhicglobal.com/case-studies/building-trust-the-apec-blueprint-for-a-sovereign-ai-powered-mental-health-ecosystem/
I'd be pleased to expand on either.