AI Ethics Brief #82: AI ethics tools, proliferation of principles, commitment to content, and more ...
What are Ubuntu's implications for philosophical ethics?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~19-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on the Substack page where, if you’re not already signed in to Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The Proliferation of AI Ethics Principles: What’s Next?
🔬 Research summaries:
Ubuntu’s Implications for Philosophical Ethics
Putting AI ethics to work: are the tools fit for purpose?
Getting from Commitment to Content in AI and Data Ethics: Justice and Explainability
📰 Article summaries:
Thinking Through the Ethics of New Tech…Before There’s a Problem
“The power to surveil, control, and punish”: The dystopian danger of a mandatory biometric database in Mexico
How social media companies help African governments abuse “disinformation laws” to target critics
📅 Event
AI and Space
📖 Living Dictionary:
Deep Learning
🌐 From elsewhere on the web:
The Montreal Integrity Network: Workshop on the Ethics of Artificial Intelligence
McGill AI Society: Panel discussion on AI Ethics
💡 ICYMI
The Role of Arts in Shaping AI
But first, our call-to-action this week:
As the year draws to a close, we invite you to share the gift of knowledge by introducing The AI Ethics Brief to those who don’t know about it yet, and perhaps also considering a gift subscription to support the community work done at the Montreal AI Ethics Institute!
✍️ What we’re thinking:
The Proliferation of AI Ethics Principles: What’s Next?
With the rise of AI and the recognition of its impacts on people and the environment, more and more organizations formulate principles for the development of ethical AI systems. There are now dozens of documents containing hundreds of principles, written by governments, corporations, non-profits, and academics. This proliferation of principles presents challenges. For example, should organizations continue to produce new principles, or should they endorse existing ones? If organizations are to endorse existing principles, which ones? And which of the principles should inform regulation?
In the face of the proliferation of AI ethics principles, it is natural to seek a core set of principles or unifying themes. The hope might be that the core set of principles would save organizations from reinventing the wheel, prevent them from cherry-picking principles, be used for regulation, etc. In the last few years, several teams of researchers have set out to articulate such a set of core AI ethics principles.
These overviews of AI ethics principles illuminate the landscape. In addition, they highlight the limitations of the search for unifying themes. They help us see that a unique set of core principles is unlikely to be found, and that even if one is found, applying it universally risks exacerbating power imbalances.
To delve deeper, read the full article here.
🔬 Research summaries:
Ubuntu’s Implications for Philosophical Ethics
For over 400 years, philosophers have puzzled over the search for an underlying principle expounded by a moral theory. In his talk, Thaddeus Metz demonstrates that Ubuntu is also worth considering in the journey to solve this puzzle.
To delve deeper, read the full summary here.
Putting AI ethics to work: are the tools fit for purpose?
This paper maps the landscape of AI ethics tools: it develops a typology to classify AI ethics tools and analyzes the existing ones. In addition, the paper identifies two gaps. First, key stakeholders, including members of marginalized communities, are underrepresented in the use of AI ethics tools and their outputs. Second, there is a lack of tools for external auditing in AI ethics, which is a barrier to the accountability and trustworthiness of organizations that develop AI systems.
To delve deeper, read the full summary here.
Getting from Commitment to Content in AI and Data Ethics: Justice and Explainability
AI or data ethics principles and frameworks meant to demonstrate a commitment to addressing the challenges posed by AI are ubiquitous and are an ‘easy first step’. The harder task, however, is to operationalize them. This report, inter alia, offers strategies for putting those principles into practice.
To delve deeper, read the full summary here.
📰 Article summaries:
Thinking Through the Ethics of New Tech…Before There’s a Problem
What happened: The article points out how society has typically stopped to address the ethical issues of a new technology only after it has rapidly permeated our lives; the author asks us to imagine what would happen if that were not the case. Consider automobile safety features like the seatbelt, which appeared many years after the car itself and which, had they been implemented earlier, would have saved many lives. We are on a similar cusp with AI rapidly permeating many parts of our lives. The article proposes several ways we might mitigate these issues: bringing in domain experts; eschewing haste in deploying a new piece of technology just because it seems to offer immediate gains but might have delayed, severe downstream consequences; assigning accountability to stakeholders in different parts of the lifecycle; and having someone in a leadership position take this on as a core responsibility.
Why it matters: As organizations struggle to move from principles to practices, the advice offered in this article is a great starting point for those who want to realize some early wins in the ethical, safe, and inclusive deployment of AI systems. Reframing the challenges as opportunities to do better and guide others along the way might be yet another benefit emerging from adopting these practices.
Between the lines: I think another layer of nuance needs to be added to the advice of “pausing and thinking,” which is to understand the incentives that guide employee and stakeholder behaviour within the organization. In particular, if KPIs require that a certain number of users be secured or a certain sales quota be met to earn a bonus at the end of the year, then these ideas need to be discussed with that in mind; otherwise, implementations of tech ethics are doomed to fail.
“The power to surveil, control, and punish”: The dystopian danger of a mandatory biometric database in Mexico
What happened: Mexico, backed by a loan from the World Bank, is pushing hard to implement a unified national identity scheme that would link access to government services and public benefits through a single system. The World Bank has funded similar initiatives in countries around the world, and in many places the implementation of such national identity schemes has led to less than desirable outcomes. In particular, the biometrics associated with the identities tend to fail at the point of service delivery due to problems with the technology deployed to ascertain identity, such as facial recognition and fingerprint scanning.
Why it matters: Once such a system is put in place, it is incredibly difficult to extricate the provision of services from it. In a country where crime infiltrates various levels of government and where there is a risk of cybersecurity breaches, potentially compromising the identities and biometrics of everyone enrolled in the scheme, such a system might create more problems than it solves. Given all these problems, and given that linking all identities into a single scheme creates a single point of failure and gives governments too much authoritarian power to surveil, we need to be careful before proceeding.
Between the lines: One thing that stands out in the article is that such national identity schemes are pushed heavily in the Global South under the guise of improving access to public benefits and government services, yet this might not be an ideal approach where there isn’t adequate supporting infrastructure to ensure the scheme is implemented with privacy safeguards and security measures in place. We also need to ensure that alternative ways of accessing services continue to exist for people who are unable to enroll in the scheme or who experience verification failures at the point of service provision due to hardware problems; otherwise, they risk being denied essential services like free rations and healthcare in certain places.
How social media companies help African governments abuse “disinformation laws” to target critics
What happened: The article describes how vaguely defined disinformation laws in countries like Kenya, Uganda, Malawi, and Nigeria, instead of clamping down on disinformation, actually restrict legitimate speech. This is exacerbated by the fact that social media platforms take limited, narrow approaches to addressing disinformation on their platforms, such as simply taking down content. At times, these vague laws have also led to internet shutdowns in African nations. In the midst of all this, the regulations mostly serve the interests of governments while the policies of social media platforms mostly serve the companies themselves. The fundamental rights of end users are mostly ignored.
Why it matters: The article points to some fundamental texts in the space, like the Santa Clara Principles (to which MAIEI provided comments), that can serve as guides for regulating disinformation effectively while still protecting the fundamental rights of end users. Concretely, one could start with soft law based on these guidelines, determine which parts work and which don’t, and then move the effective parts into hard-law territory.
Between the lines: A shared responsibility model, in which many actors are jointly responsible for governing how the disinformation challenge is addressed on social media platforms, is going to be essential. Moreover, I believe that elevating media and digital literacy will offer yet another effective avenue to combat this problem, further bolstering efforts on the technical and policy fronts.
📖 From our Living Dictionary:
“Deep Learning”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
The Montreal Integrity Network: Workshop on the Ethics of Artificial Intelligence
Our Partnerships Manager, Connor Wright, is speaking at a session hosted by the Montreal Integrity Network: an enriching discussion about AI applications and their ethical considerations in a corporate environment. They’ll cover the following topics: what AI is and is not; the current state of AI and future trends; and the ethical challenges of AI applications.
McGill AI Society: Panel discussion on AI Ethics
Our Business Development Manager, Masa Sweidan, is speaking at a session hosted by the McGill AI Society for undergraduate and graduate students at McGill University. The idea is to give the students multiple perspectives on the importance of ethical AI, present real current challenges, and invite them to reflect on the ethical issues that arise within AI systems.
💡 In case you missed it:
The Role of Arts in Shaping AI
Art is an important tool for educating society about our cultural and natural histories. It is also useful for identifying and solving our present challenges. In this paper, researchers Ramya Srinivasan and Kanji Uchino explain why we should tap into the arts not only to educate society about AI but also to create more ethical systems.
To delve deeper, read the full summary here.
Take Action:
In collaboration with the Institute for Future Research in Turkey, and with the support of the Canadian Embassy in Istanbul, MAIEI presents a discussion on AI and Space.
The session will include two presentations from experts at the Institute for Future Research. We will then split into two breakout rooms, one on “AI and Space Ethics for Human Life” and the other on “AI and Space Ethics for Technology,” before concluding our discussion with some closing remarks.
The background readings for the event will be emailed to participants shortly!
We look forward to seeing you there!