How can we quantify the return on investment (ROI) or measure the success of integrating external stakeholder feedback into the AI development process?
While this is not a full answer, the following paper addresses a closely related question:
https://arxiv.org/abs/2309.13057
The Return on Investment in AI Ethics: A Holistic Framework, published in the Hawaii International Conference on System Sciences (HICSS) 2024 Proceedings
Indeed, we have a research summary of this paper that will be published shortly. Thank you for surfacing it!
Such work is going to become increasingly important as we take business resource allocation into consideration when implementing RAI programs.
The most pressing issue to solve in 2024 is the AI alignment problem. https://www.lesswrong.com/tag/ai
Thank you for sharing, Miguel. There are some good initiatives at Anthropic and DeepMind working to address alignment issues.
From our point of view, 2024 is shaping up to be the year for AI Ethics. Talking about it and raising awareness is the first step toward establishing the right conditions for a Responsible AI environment. Bringing it out of highly technical or institutional circles and into other aspects of our daily lives, such as digital commerce, society, and education, is crucial. Quoting Aza Raskin: "awareness brings me the opportunity of choice".
Well said. In particular, the crucial work ahead needs to be grounded in an awareness of the nuances for it to be meaningful.
I chose "other" in the poll. I think the most pressing issue is holding AI firms accountable for training on unethically obtained data. Consider how far privacy laws have come: every website needs a cookie notice and a policy stating what it does with your data, yet these AI firms act as though they are above that. They simply scrape and use whatever they want, with no regard for whom it affects.
Definitely, Roxane. In fact, we believe 2024 might be the year we finally get more tools and legal mechanisms that support the vision you outline here. Your contribution from earlier last year made some critical points on this as well, and it is worth reading for others who come across this thread: https://montrealethics.ai/the-impact-of-ai-art-on-the-creative-industry/