7 Comments
Dec 21, 2023 · Liked by Montreal AI Ethics Institute

"One of the questions that has been bouncing around in the staff’s heads is how to square the resource demands for implementing Responsible AI within an organization? In particular, we keep running into scenarios where the organization is interested in implementing a Responsible AI program but has very little idea and sometimes no planned commitment for dedicating resources towards the implementation of that program. How should we navigate this challenge?"

Unfortunately, showing from a moral standpoint that it's simply the right thing to do isn't enough. The angle to take is to make executives realize the concrete and strategic benefits of implementing Responsible AI for their organization. To do this, one should start by creating company ethics guidelines and then implementing them.

We can look at this from the lens of performativity.

The following ideas are taken from this amazing MOOC on AI and ethics https://ethics-of-ai.mooc.fi/chapter-7/2-ethics-as-doing. I'll summarize and quote some passages below:

"[...] performativity is the capacity of words to do things in the world. That is, sometimes making statements does not just describe the world, but also performs a social function beyond describing. As an example, when a priest declares a couple to be “husband and wife”, they are not describing the state of their relationship. Rather, the institution of marriage is brought about by that very declaration – the words perform the marrying of the two people."

Similarly, ethics guidelines can serve as performative texts:

"Guidelines as assurances:

Others have argued that ethics guidelines work as assurance to investors and the public (Kerr 2020). That is, in the age of social media, news of businesses’ moral misgivings spread fast and can cause quick shifts in a company’s public image. Publishing ethics guidelines makes assurances that the organization has the competence for producing ethical language, and the capacity to take part in public moral discussions to soothe public concern.

Thus AI ethics guidelines work to deflect critique away from companies; from both investors and the general public. That is, if the company is seen as being able to manage and anticipate the ethical critique produced by journalists, regulators and civil society, the company will also be seen as a stable investment, with the competence to navigate public discourses that may otherwise be harmful for its outlook."

"Guidelines as expertise:

With the AI boom well underway, the need for new kinds of expertise arises, and competition around ownership of the AI issue increases. That is, the negotiations around AI regulation, the creation of AI-driven projects of governmental redesign, the implementation of AI in new fields, and the public discourse around AI ethics in the news all demand expertise in AI and especially the intersection of AI and society.

To be seen as an expert yields certain forms of power. Being seen as an AI ethics expert gives some say in what the future of society will look like. Taking part in the AI ethics discussion by publishing a set of ethical guidelines is a way to demonstrate expertise, increasing the organization’s chances of being invited to a seat at the table in regards to future AI issues."

The above implies massive benefits for a company if it is done and communicated properly. Not only is it the right thing to do in the face of increasingly ubiquitous and capable AI, but it is, in my view, an indispensable strategic advantage to focus on.

But talk alone isn't enough. To truly cement oneself as a trustworthy expert and to avoid falling into the trap of ethics washing, one also needs to implement the guidelines in a way that makes tangible changes to how the company does AI. This will reinforce the points mentioned above.

Then the company can start implementing some of the great practical things you suggested in the previous newsletter, which will be easier to do once an ethics team is in place.

Just some thoughts.

I love that you are thinking about how to make ethics more practical. I'm taking notes and researching that myself too. I've also been wondering what the best approaches could be to get more people interested and involved in ethics, rather than focusing only on the technical aspects. From my POV, this is more of a challenge with applied AI practitioners than with researchers.


I'm still wondering about the ethics of this kind of strategy, since it wouldn't come from a "let's make society better by implementing responsible AI" mindset but rather from "we need this as a strategic advantage (but we don't care about responsible AI otherwise)."

Is the above approach ok if it means more companies adopt responsible AI?

I feel like most people care and it's just a matter of unlocking company budget and resources. So in that sense it's okay. Still, I'm a bit uneasy about it.

author

Absolutely, Julien, you've made several great points in this thread. In particular, it is important for business executives to see that governance and implementation of RAI can itself be a competitive advantage, which can help move away from the performative nature to something more tangible, as you call out. We've written about that in the past here, for example: https://www.weforum.org/agenda/2022/11/artificial-intelligence-invest-responsible-ai/

Another lens that helps cement this point is that RAI is not just about avoiding risk; it can genuinely lead to better product outcomes, which in the long run benefits the business while achieving some of the moral imperatives as well: https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk

Finally, when it comes to stakeholder engagement, it is essential that we really embody that as an idea in action rather than just doing it for the sake of "theater": https://venturebeat.com/ai/getting-stakeholder-engagement-right-in-responsible-ai/

Really appreciate your deep engagement on this important topic here with us! Keep it coming - we're happy to support your research work and appreciate you being an engaged reader!

May 13 · Liked by Montreal AI Ethics Institute

I believe that one of the things missing here from the AI ethics / responsible AI conversation is, indeed, ethics... and by that pithy statement I mean the idea that having principles, duties, and/or values is worth having in and of itself. That a business (an organisation full of people, generally... though don't get me started on DAOs) can indeed reflect on more than its profit margins is maybe a useful thing for building societies that hopefully function better, and the outcomes of its products, when ethics is applied, could be "better" or simply cause less damage than otherwise. That is part of the sell of Responsible AI, though maybe not a very compelling one, I agree.

author

Definitely, Ben, and we think one of the key challenges is how to take things that we know will lead to better outcomes and put them into formats that make them compelling enough to mandate uptake.

Dec 20, 2023 · edited Dec 20, 2023 · Liked by Montreal AI Ethics Institute

There is another worry with respect to open sourcing large language models. It stems from the view that research on them, in general, needs to slow down or be halted. This is because there are risks involved in continuing to roll out increasingly capable models and features while our understanding of these models is still very limited. We currently don't know how to align the models in a way where they won't kill us once they are smart enough. Since we only have one go at tackling this problem, we need much more time and research.

Therefore, open sourcing models will ultimately accelerate LLM development and make it harder to regulate and put any risk-prevention guardrails in place. In doing so, we'd simply be accelerating our extinction.

This is a view that Eliezer Yudkowsky, for instance, holds, along with many others (I'm simplifying; his view is much more nuanced and well put together than my summary attempt).

I personally think that the more researchers are thinking about the problem, the better. Progress and decisions around LLMs shouldn't be limited to a few companies. While the risks mentioned above are real, I believe we need the community as a whole thinking and working on how to implement safe AI. Also, the current market incentives are such that top tech companies are prone to ethics washing: they'll be more inclined to release models before any serious testing is done in order to capture more market share and establish themselves for the future.

Ultimately, it is a super difficult ethical and moral problem that I am only starting to wrap my head around. I'm still very uncertain about which option is better between closed and open-source models.

Thank you for your work and promoting discussions around this topic.

author

Thank you for sharing this, Julien. There is certainly a lot to unpack, and our goal with these questions and discussions is to elevate the level of conversation and inject nuance into the discussion. We appreciate your engagement and hope others can chime in with their thoughts as well. Thanks!
