5 Comments
Dec 21, 2023 · Liked by Montreal AI Ethics Institute

"One of the questions that has been bouncing around in the staff’s heads is how to square the resource demands for implementing Responsible AI within an organization? In particular, we keep running into scenarios where the organization is interested in implementing a Responsible AI program but has very little idea and sometimes no planned commitment for dedicating resources towards the implementation of that program. How should we navigate this challenge?"

Unfortunately, showing from a moral standpoint that it's simply the right thing to do isn't enough. The angle to take is to make executives realize the concrete and strategic benefits of implementing Responsible AI for their organization. To do this, one should start by creating company ethics guidelines and then implementing them.

We can look at this through the lens of performativity.

The following ideas are taken from this excellent MOOC on AI and ethics: https://ethics-of-ai.mooc.fi/chapter-7/2-ethics-as-doing. I'll summarize and quote some passages below:

"[...] performativity is the capacity of words to do things in the world. That is, sometimes making statements does not just describe the world, but also performs a social function beyond describing. As an example, when a priest declares a couple to be “husband and wife”, they are not describing the state of their relationship. Rather, the institution of marriage is brought about by that very declaration – the words perform the marrying of the two people."

Similarly, ethics guidelines can serve as performative texts:

"Guidelines as assurances:

Others have argued that ethics guidelines work as assurance to investors and the public (Kerr 2020). That is, in the age of social media, news of businesses’ moral misgivings spread fast and can cause quick shifts in a company’s public image. Publishing ethics guidelines makes assurances that the organization has the competence for producing ethical language, and the capacity to take part in public moral discussions to soothe public concern.

Thus AI ethics guidelines work to deflect critique away from companies; from both investors and the general public. That is, if the company is seen as being able to manage and anticipate the ethical critique produced by journalists, regulators and civil society, the company will also be seen as a stable investment, with the competence to navigate public discourses that may otherwise be harmful for its outlook."

"Guidelines as expertise:

With the AI boom well underway, the need for new kinds of expertise arises, and competition around ownership of the AI issue increases. That is, the negotiations around AI regulation, the creation of AI-driven projects of governmental redesign, the implementation of AI in new fields, and the public discourse around AI ethics in the news all demand expertise in AI and especially the intersection of AI and society.

To be seen as an expert yields certain forms of power. Being seen as an AI ethics expert gives some say in what the future of society will look like. Taking part in the AI ethics discussion by publishing a set of ethical guidelines is a way to demonstrate expertise, increasing the organization’s chances of being invited to a seat at the table in regards to future AI issues."

Done and communicated properly, the above implies massive benefits for a company. Not only is it the right thing to do in the face of increasingly ubiquitous and capable AI, but it is, in my view, an indispensable strategic advantage to focus on.

But talk alone isn't enough. To truly cement oneself as a trustworthy expert and avoid falling into the trap of ethics washing, one also needs to implement the guidelines in a way that makes tangible changes to how the company does AI. This will reinforce what was mentioned above.

Then the company can start implementing some of the great practical things you suggested in the previous newsletter, which will be easier to do once an ethics team is in place.

Just some thoughts.

I love that you are thinking about how to make ethics more practical. I'm taking notes and researching that myself too. I've also been wondering what the best approaches could be to get more people interested and involved in ethics, and not just focused on the technical aspects. From my POV, this is more of a challenge with applied AI practitioners than with researchers.

Dec 20, 2023 · edited Dec 20, 2023 · Liked by Montreal AI Ethics Institute

There is another worry with respect to open sourcing large language models. It stems from the view that research on them in general needs to slow down or be halted. This is because there are risks involved in continuing to roll out increasingly capable models and features while our understanding of these models is still very limited. We currently don't know how to align models in a way that ensures they won't kill us once they are smart enough. Since we only get one go at tackling this problem, we need much more time and research.

Therefore, open sourcing models will ultimately accelerate LLM development and make it harder to regulate them or put any risk-prevention guardrails in place. In doing so, we'd simply be accelerating our own extinction.

This is a view held by Eliezer Yudkowsky, for instance, but also by many others (I'm simplifying; his view is much more nuanced and better put together than my summary attempt).

I personally think that the more researchers are thinking about the problem, the better. Progress and decisions around LLMs shouldn't be limited to a few companies. While the risks mentioned above are real, I believe we need the community as a whole thinking about and working on how to implement safe AI. Also, current market incentives make top tech companies prone to ethics washing: they'll be more inclined to release models before any serious testing is done in order to capture market share and establish themselves for the future.

Ultimately, it is a super difficult ethical and moral problem that I am only starting to wrap my head around. I'm still very uncertain about which option is better between closed and open source models.

Thank you for your work and for promoting discussion around this topic.
