Absolutely, Julien, you've made several great points in this thread. In particular, it is important for business executives to see that the governance and implementation of RAI can itself be a competitive advantage, helping move beyond performative gestures toward something more tangible, as you call out. We've written about that in the past, for example: https://www.weforum.org/agenda/2022/11/artificial-intelligence-invest-responsible-ai/
Another lens that helps cement this point is that RAI is not just about avoiding risk; it can genuinely lead to better product outcomes, which benefits the business in the long run while also serving the moral imperatives: https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk
Finally, when it comes to stakeholder engagement, it is essential that we embody it as an idea in action rather than doing it for the sake of "theater": https://venturebeat.com/ai/getting-stakeholder-engagement-right-in-responsible-ai/
We really appreciate your deep engagement with us on this important topic! Keep it coming; we're happy to support your research work and glad to have such an engaged reader!
I believe that one of the things missing from the AI ethics / responsible AI conversation is, indeed, ethics... and by that pithy statement I mean the idea that having principles, duties, and/or values is worthwhile in and of itself. That a business (an organisation full of people, generally... though don't get me started on DAOs) can reflect on more than its profit margins is perhaps a useful thing for building societies that function better, and the outcomes of its products, when ethics are applied, could be "better" or simply cause less damage than they otherwise would. That is part of the sell of Responsible AI, though maybe not a very compelling one, I agree.
Definitely, Ben, and we think that is one of the key challenges: how to take the things we know will lead to better outcomes and put them in formats compelling enough to drive uptake.