We further explore our State of AI Ethics Report with Part II: Social Justice & Equity, and highlight the latest addition to our AI Policy Corner: Ukraine's AI regulation whitepaper.
The focus on grassroots movements forming in response to top-down governance is particularly timely. Seeing initiatives like Africa Check and Civic Searchlight emerge suggests that the narrative around AI is shifting from pure techno-optimism to something more grounded in community needs. The environmental impact data you included about data centres is sobering; it makes the material costs of these systems impossible to ignore.
Thanks for an informative read. I recently co-authored a research paper that I think would be of interest to this newsletter and its readers: https://www.researchgate.net/publication/396328369_The_Quiet_Displacement_of_Social_Values_in_AI_Policy#fullTextFileContent
I think you nailed the problem.
People like you are the problem.
PsAIcopaths.
https://fritzfreud.substack.com/p/psaicopaths-aicopaths-aideologists
Hello,
Jonathan Tallant (University of Nottingham) and I just published a paper entitled "Trustability and Trustworthiness: Conceptual Foundations and the Case of AI." In the paper, we distinguish trustworthiness from "trustability" and use this distinction to determine when AI systems can be considered genuine candidates for human trust.
You or your readers may find it interesting. The paper is accessible here: https://doi.org/10.1007/s43681-025-00839-w
If needed, I can also send you the PDF.
Best wishes,
Romaric
Hi there... I read it through and it's VERY enlightening in many respects. My friends call me "a compulsive optimist," and they are right. From my perspective, Human-AI interactions are the epilogue of something I call the outsourcing of the self. The last chapter was social media. But I hope -from my compulsive optimism- that AI will push the whole post-truth era to eventually collapse. And I say this because of the astonishing statistics on depression, anxiety, loss of purpose, loneliness, etc. supporting the "meaning crisis." I do not believe AI will lower these symptoms but quite the opposite: there is a limit to over-thinking or, better, over-feeding our brains with information we can only suspect to be true or not, knowing we will never truly know. It seems impossible to regulate AIs (I agree with PsAIcopaths), but it might be possible that they simply fill human beings up with so much information that they go looking for "meaningful" experiences... just to take a break. Hopefully a lasting break.
BULLSHIT
AI ethics?
My Arse.
You can't rationalize a technology that by design is unethical and unsustainable.
https://fritzfreud.substack.com/p/psaicopaths-aicopaths-aideologists
So you want to regulate AI?
How...
Please explain.
Show us how stupid you are!