"One of the questions that has been bouncing around in the staff’s heads is how to square the resource demands for implementing Responsible AI within an organization? In particular, we keep running into scenarios where the organization is interested in implementing a Responsible AI program but has very little idea and sometimes no planne…
"One of the questions that has been bouncing around in the staff’s heads is how to square the resource demands for implementing Responsible AI within an organization? In particular, we keep running into scenarios where the organization is interested in implementing a Responsible AI program but has very little idea and sometimes no planned commitment for dedicating resources towards the implementation of that program. How should we navigate this challenge?"
Unfortunatly, showcasing from a moral standpoint that it's just the right thing to do isn't enough. The angle to take is to make executives realize the concrete and strategical benefits of implementing Responsible AI for their organization. To do this, one should start with building and creating company ethics guidelines, and then implementing them.
We can look at this from the lens of performativity.
"[...] performativity is the capacity of words to do things in the world. That is, sometimes making statements does not just describe the world, but also performs a social function beyond describing. As an example, when a priest declares a couple to be “husband and wife”, they are not describing the state of their relationship. Rather, the institution of marriage is brought about by that very declaration – the words perform the marrying of the two people."
Similarly, ethics guidelines can serve as performative texts:
"Guidelines as assurances:
Others have argued that ethics guidelines work as assurance to investors and the public (Kerr 2020). That is, in the age of social media, news of businesses’ moral misgivings spread fast and can cause quick shifts in a company’s public image. Publishing ethics guidelines makes assurances that the organization has the competence for producing ethical language, and the capacity to take part in public moral discussions to soothe public concern.
Thus AI ethics guidelines work to deflect critique away from companies; from both investors and the general public. That is, if the company is seen as being able to manage and anticipate the ethical critique produced by journalists, regulators and civil society, the company will also be seen as a stable investment, with the competence to navigate public discourses that may otherwise be harmful for its outlook."
"Guidelines as expertise:
With the AI boom well underway, the need for new kinds of expertise arises, and competition around ownership of the AI issue increases. That is, the negotiations around AI regulation, the creation of AI-driven projects of governmental redesign, the implementation of AI in new fields, and the public discourse around AI ethics in the news all demand expertise in AI and especially the intersection of AI and society.
To be seen as an expert yields certain forms of power. Being seen as an AI ethics expert gives some say in what the future of society will look like. Taking part in the AI ethics discussion by publishing a set of ethical guidelines is a way to demonstrate expertise, increasing the organization’s chances of being invited to a seat at the table in regards to future AI issues."
The above implies massive benefits for a company if properly done and communicated. Not only is it the right thing to do in the face of increasingly ubiquitous and capable AI, but it is in my view a indispensable strategic advantage to focus on.
But just talk isn't enough. To truely cement oneself as an trustworthy expert and to not fall into the trap of ethics washing, one needs to also implement the guidelines in a way where tangible changes are made to the way the company is doing AI. This will reinforce what was mentioned above.
Then, the company can start implementing some of the great practical things you have suggested in the previous newsletter which will be easier to do once a ethics team is in place.
Just some thoughts.
I love that you are thinking about how to make ethics more practical. I'm taking notes and researching that myself too. I've also been wondering what the best approaches could be to get more people interested and involved in ethics and to not just focus on the techincal aspects. From my POV this is more of a challenge with applied AI practitionners than researchers.
I'm still wondering about the ethics of this kind of strategy since it wouldn't come from a "let's make society better by implementing responsible AI" and more "we need this as a strategical advantage - (but we don't care about responsible AI otherwise)".
Is the above approach ok if it means more companies adopt responsible AI?
I feel like most people care and it's just a matter of unlocking company budget and resources. So in that sense it's ok. Still a bit uneasy about it though.
Absolutely Julien, several great points made by you here in this thread. In particular, for business executives, it is important for them to see that governance and implementation of RAI itself can be a competitive advantage which can help move away from the performative nature to something that is more tangible as you call out. We've written about that in the past here, for example: https://www.weforum.org/agenda/2022/11/artificial-intelligence-invest-responsible-ai/
Really appreciate your deep engagement on this important topic here with us! Keep it coming - we're happy to support your research work and appreciate you being an engaged reader!
I believe that one of the things missing here from the Ai ethics / responsible AI conversation, is indeed ethics... and by that pithy statement I mean the idea that having principles, duties, and/or values is worth having in of itself. That a business (an organisation full of people generally.... though don't get me started on DAOs) can indeed reflect on more than its profit margins is maybe a useful thing for building societies that hopefully function better, and that the outcomes of their products, when applying ethics, could be "better" or simply cause less damage than not. That is part of the sell of Responsible AI, though maybe not a very compelling one I agree
Definitely Ben and we think that is one of the key challenges is how to make things that we know will lead to better outcomes into formats that make them compelling enough to mandate uptake.
"One of the questions that has been bouncing around in the staff’s heads is how to square the resource demands for implementing Responsible AI within an organization? In particular, we keep running into scenarios where the organization is interested in implementing a Responsible AI program but has very little idea and sometimes no planned commitment for dedicating resources towards the implementation of that program. How should we navigate this challenge?"
Unfortunately, showing that it's simply the right thing to do from a moral standpoint isn't enough. The angle to take is to make executives realize the concrete, strategic benefits of implementing Responsible AI for their organization. To do this, one should start by creating company ethics guidelines and then implementing them.
We can look at this through the lens of performativity.
The following ideas are taken from this excellent MOOC on AI and ethics: https://ethics-of-ai.mooc.fi/chapter-7/2-ethics-as-doing. I'll summarize and quote some passages below:
"[...] performativity is the capacity of words to do things in the world. That is, sometimes making statements does not just describe the world, but also performs a social function beyond describing. As an example, when a priest declares a couple to be “husband and wife”, they are not describing the state of their relationship. Rather, the institution of marriage is brought about by that very declaration – the words perform the marrying of the two people."
Similarly, ethics guidelines can serve as performative texts:
"Guidelines as assurances:
Others have argued that ethics guidelines work as assurance to investors and the public (Kerr 2020). That is, in the age of social media, news of businesses’ moral misgivings spread fast and can cause quick shifts in a company’s public image. Publishing ethics guidelines makes assurances that the organization has the competence for producing ethical language, and the capacity to take part in public moral discussions to soothe public concern.
Thus AI ethics guidelines work to deflect critique away from companies; from both investors and the general public. That is, if the company is seen as being able to manage and anticipate the ethical critique produced by journalists, regulators and civil society, the company will also be seen as a stable investment, with the competence to navigate public discourses that may otherwise be harmful for its outlook."
"Guidelines as expertise:
With the AI boom well underway, the need for new kinds of expertise arises, and competition around ownership of the AI issue increases. That is, the negotiations around AI regulation, the creation of AI-driven projects of governmental redesign, the implementation of AI in new fields, and the public discourse around AI ethics in the news all demand expertise in AI and especially the intersection of AI and society.
To be seen as an expert yields certain forms of power. Being seen as an AI ethics expert gives some say in what the future of society will look like. Taking part in the AI ethics discussion by publishing a set of ethical guidelines is a way to demonstrate expertise, increasing the organization’s chances of being invited to a seat at the table in regards to future AI issues."
The above implies massive benefits for a company if done and communicated properly. Not only is it the right thing to do in the face of increasingly ubiquitous and capable AI, but in my view it is an indispensable strategic advantage to focus on.
But talk alone isn't enough. To truly cement oneself as a trustworthy expert, and to avoid the trap of ethics washing, one also needs to implement the guidelines in a way that makes tangible changes to how the company does AI. This reinforces everything mentioned above.
Then the company can start implementing some of the great practical things you suggested in the previous newsletter, which will be easier to do once an ethics team is in place.
Just some thoughts.
I love that you are thinking about how to make ethics more practical. I'm taking notes and researching that myself too. I've also been wondering about the best approaches to get more people interested and involved in ethics, rather than focusing only on the technical aspects. From my POV this is more of a challenge with applied AI practitioners than with researchers.
I'm still wondering about the ethics of this kind of strategy, since it wouldn't come from a place of "let's make society better by implementing responsible AI" but rather "we need this as a strategic advantage (but we don't care about responsible AI otherwise)".
Is the above approach ok if it means more companies adopt responsible AI?
I feel like most people care and it's just a matter of unlocking company budget and resources. So in that sense it's ok. Still a bit uneasy about it though.
Absolutely, Julien, several great points made here in this thread. In particular, it is important for business executives to see that governance and implementation of RAI can itself be a competitive advantage, which helps move away from the performative nature toward something more tangible, as you call out. We've written about that in the past, for example: https://www.weforum.org/agenda/2022/11/artificial-intelligence-invest-responsible-ai/
Another lens that helps cement this point is that RAI is not just about avoiding risk; it can genuinely lead to better product outcomes, which in the long run benefits the business while achieving some of the moral imperatives as well: https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk
Finally, when it comes to stakeholder engagement, it is essential that we really embody it as an idea in action, rather than doing it for the sake of "theater": https://venturebeat.com/ai/getting-stakeholder-engagement-right-in-responsible-ai/
Really appreciate your deep engagement on this important topic! Keep it coming; we're happy to support your research work and glad to have you as an engaged reader!
I believe that one of the things missing from the AI ethics / responsible AI conversation is, indeed, ethics... and by that pithy statement I mean the idea that having principles, duties, and/or values is worth it in and of itself. That a business (an organisation full of people, generally... though don't get me started on DAOs) can reflect on more than its profit margins is maybe a useful thing for building societies that hopefully function better, and the outcomes of its products, when ethics are applied, could be "better" or simply cause less damage than otherwise. That is part of the sell of Responsible AI, though maybe not a very compelling one, I agree.
Definitely, Ben, and we think one of the key challenges is how to turn things that we know will lead to better outcomes into formats compelling enough to mandate uptake.