The AI Ethics Brief #41: Ethics owners, diagnosing gender bias, Buddhism in AI ethics, and more ...
What is the price that we pay when there is a high level of mistrust in society?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
Here’s why Sundar Narayanan (Director, Nexdigm) became a founding supporter of our newsletter:
NOTE: When you hit the subscribe button below, you will be taken to the Substack page, where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
This week’s overview:
What we are thinking:
From the Founder’s Desk: Introduction to Ethics in the Use of AI in War: Part 1
The Sociology of AI Ethics: Diagnosing Gender Bias In Image Recognition Systems (Research Summary)
Research summaries:
Bridging the Gap: The Case For an ‘Incompletely Theorized Agreement’ on AI Policy
Theorizing Femininity in AI: A Framework for Undoing Technology’s Gender Troubles
The Ethics Owners — A New Model of Organizational Responsibility in Data-Driven Technology Companies
Article summaries:
What Buddhism can do for AI Ethics (MIT Tech Review)
How Social Media’s Obsession with Scale Supercharged Disinformation (Harvard Business Review)
China wants to build an open source ecosystem to rival GitHub (Rest of World)
Facial Recognition Technology Isn’t Good Just Because It’s Used to Arrest Neo-Nazis (Slate)
The High Price of Mistrust (Farnam Street)
Will Parler Prevail in Its Antitrust Case Against Amazon? (Knowledge @ Wharton)
But first, our call-to-action of the week:
The State of AI Ethics Report January 2021 was published last week and it captures the most relevant developments in research and reporting from the past quarter!
150+ pages that save you 150+ hours!
✍️ What we’re thinking:
From the Founder’s Desk:
Introduction to Ethics in the Use of AI in War: Part 1
Advances in AI have spilled over into defense applications, and this rightly raises many ethical concerns. While there are many detailed documents that discuss specific areas of concern in the use of AI in warfighting applications, I’d like to give an overview of those issues here and cover some basic ideas that will help you discuss and address issues in this space in a more informed manner.
To delve deeper, read the full article here.
The Sociology of AI Ethics:
Diagnosing Gender Bias In Image Recognition Systems by Carsten Schwemmer, Carly Knight, Emily D. Bello-Pardo, Stan Oklobdzija, Martijn Schoonvelde, and Jeffrey W. Lockhart
This paper examines gender biases in commercial vision recognition systems. Specifically, the authors show how these systems classify, label, and annotate images of women and men differently. They conclude that researchers should be careful when using labels produced by such systems in their own research. The paper also provides a template for social scientists to evaluate these systems before deploying them.
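For a flavor of what such an evaluation might look like in practice, here is a minimal sketch (not the authors’ code; the function names, example label data, and 10% threshold below are all hypothetical) that compares how often each label from an image-tagging API is applied to images of women versus men:

```python
from collections import Counter

def label_rates(label_lists):
    """For each label, the fraction of images in a group that received it."""
    counts = Counter(label for labels in label_lists for label in set(labels))
    return {label: c / len(label_lists) for label, c in counts.items()}

def gender_label_gaps(labels_women, labels_men, min_gap=0.10):
    """Labels whose application rate differs between groups by at least min_gap."""
    rates_w, rates_m = label_rates(labels_women), label_rates(labels_men)
    gaps = {
        label: rates_w.get(label, 0.0) - rates_m.get(label, 0.0)
        for label in set(rates_w) | set(rates_m)
    }
    return {label: gap for label, gap in gaps.items() if abs(gap) >= min_gap}

# Hypothetical API output: one list of returned labels per image.
labels_women = [["person", "smile", "hairstyle"], ["person", "beauty", "smile"]]
labels_men = [["person", "official", "suit"], ["person", "spokesperson", "suit"]]
print(gender_label_gaps(labels_women, labels_men))
# Positive gaps skew toward images of women, negative toward images of men.
```

A systematic audit along these lines, run on a balanced image set before adopting a system’s labels, is the kind of check the authors recommend.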
To delve deeper, read the full summary here.
🔬 Research summaries:
Bridging the Gap: The Case For an ‘Incompletely Theorized Agreement’ on AI Policy by Charlotte Stix and Matthijs Maas
In this paper, Charlotte Stix and Matthijs Maas argue for more collaboration between those focused on ‘near-term’ and ‘long-term’ problems in AI ethics and policy, noting that such collaboration was key to the policy success of past epistemic communities. They suggest that researchers in these two communities disagree on fewer overarching points than they might think, and that where they do disagree, they can and should bridge the underlying theoretical disagreements. To that end, the authors propose drawing on the principle of an ‘incompletely theorized agreement’, which can support urgently needed cooperation on projects or areas of mutual interest, and thereby the pursuit of responsible and beneficial AI in both the near and long term.
To delve deeper, read the full summary here.
Theorizing Femininity in AI: A Framework for Undoing Technology’s Gender Troubles by Daniel M. Sutko
The predominance of feminized voice assistants points to AI’s tendency to naturalize gender divisions. This paper draws on the science-fiction narratives of Her and Tomorrow’s Eve to offer a critical understanding of how femininity serves as a means of domesticating AI while reproducing existing gender relations.
To delve deeper, read the full summary here.
The Ethics Owners — A New Model of Organizational Responsibility in Data-Driven Technology Companies by Emanuel Moss and Jacob Metcalf
Ethics owners are tasked with navigating challenging ethical circumstances, using technical tools, within the infrastructure of tech companies. This report presents the main findings from an ethnographic study of how these professionals handle ethical dilemmas, and it underscores the importance of working through systems of governance and complex company structures while pushing forward an agenda of justice and anti-oppression to support individuals and communities who need to be seen and heard for a more ethical future.
To delve deeper, read the full summary here.
📰 Article summaries:
What Buddhism can do for AI Ethics (MIT Tech Review)
As argued in a piece that I co-wrote with my colleague Victoria Heath, the current crop of guidelines and principles in the field of AI ethics is centered largely on Western perspectives. In this article, the author highlights some values from Buddhism that can shed new light on how to approach building responsible AI systems. In particular, the value of self-cultivation as an underpinning of compassion, accountability, and no-harm is a key contribution to this thought process.
While the overarching values are shared, because we are all human and hold many ideals in common, the value of self-cultivation as fuel for the efficacy of the other values is something everyone can learn from. As we’ve seen with many sets of principles, it can be hard to translate them into practice, and this is where I think such an approach can make them more actionable. One might ask whether there are simple rules we can use to start on this journey: taking the example of facial recognition technology and applying the principle of no-harm, the article argues that the technology should only be used if we can show that it actually reduces suffering, rather than being used to surveil and oppress people.
My main takeaway from the article is that there is a lot to be learned from different schools of thought, and that working with a diverse group of experts from different parts of the world is a great way to make that happen in practice.
How Social Media’s Obsession with Scale Supercharged Disinformation (Harvard Business Review)
An unfortunate series of incidents finally precipitated action from the social media platforms, leading to a ban on Donald Trump’s account. It can fairly be argued that this problem has been building for years, and this article makes the case that the inherent structure and incentives of social media platforms, and the growth strategies they adopted, led us to where we are today.
Achieving scale, a common Silicon Valley aspiration and venture capital requirement, meant that social media platforms optimized for anything that allowed them to bring ever more users onto the platform and keep them there by hacking their attention and evoking enough emotional response that they wouldn’t want to navigate away. Keeping the platform open and allowing anyone to post content without much moderation, in the service of meeting these “growth hacking” targets, meant that user-generated content quickly grew to a scale where human moderation was no longer possible.
When advertisers realized the potential of these platforms, especially their political implications, not only did this bring in serious dollars, it also boosted the prevalence of disinformation to the point where it started to materially harm the quality of people’s experience on the platforms. For years the platform companies evaded their moral responsibility to monitor the content on their platforms, and it all came to a head in early January with the unfortunate and avoidable loss of lives as insurrectionists stormed the US Capitol. Hopefully, meaningful changes will be enacted to prevent such tragedies from occurring in the future.
China wants to build an open source ecosystem to rival GitHub (Rest of World)
For a while now there has been extended talk of the internet splintering into regions based on the regulatory frameworks that apply to the operations of different platforms. The most dominant and pervasive such fragment is the Chinese ecosystem, which has equivalents for many popular platforms like Facebook, Twitter, and Google, among others. GitHub, the open-source code hosting platform, is another example: a local Chinese equivalent called Gitee is being developed, backed by some prominent local companies, in the interest of maintaining the tradition of open source, though one housed internally within China.
While GitHub so far remains accessible within China, concerns are frequently raised that content stored on the platform to escape censorship on home-grown alternatives, such as messages going against the government’s perspectives and other documentation of harms, could be taken down within a Chinese walled garden.
Without a doubt, open-source code has strengthened the Chinese technology ecosystem. But a fragmentation in which everyone outside China stores their open-source code on platforms like GitHub and GitLab, while developers in China store theirs on a platform like Gitee, might lead to less knowledge sharing and weaker application of best practices because people get siloed. It would go against the spirit of the open-source community, which aims to share code in an unrestricted manner and encourage collaboration so that we build on each other’s work rather than replicating efforts.
Facial Recognition Technology Isn’t Good Just Because It’s Used to Arrest Neo-Nazis (Slate)
The AI Ethics Brief has talked about the ills of facial recognition technology on many occasions, and has also mentioned the positive use cases that arise from time to time. But when we weigh the positives against the negatives, we see that the scales tip unequivocally towards the negatives. After the tragic incidents at the US Capitol in January 2021, some people resurfaced the debate, pointing out that facial recognition technology helped identify and arrest some of the insurrectionists and bring them to justice. One point made in this article that hasn’t been explicitly called out in many places is that facial recognition technology was only a piece of the puzzle: much of the evidence also came from posts made on Instagram and Facebook.
Time and again, researchers, activists, and others have demonstrated that facial recognition technology is deeply biased and flawed, yet the debate keeps resurfacing. One thing that needs to be recognized is that even when the technology is used in a positive way, as it was in this case, there are far too many ways for things to go wrong through misidentification, as happened repeatedly in 2020, leading to false arrests. Some people also argue that the only way to combat the use of facial recognition technology is to put it into the hands of everyday people as well; for example, the case in Portland where a person used it to identify erring police officers.
But even if this technology is banned in a particular jurisdiction, given the amount of money that organizations stand to make from selling it, we need collective action at a global level to stop its development and deployment. As global readers, we see you as core actors in the call to address the points of this debate in an informed manner.
The High Price of Mistrust (Farnam Street)
While not specifically about the information ecosystem, this article does a great job of laying down some fundamentals for addressing the problems that arise when a fragmented information ecosystem, littered with problematic information, sows mistrust among people.
Drawing on economic theory, the article points to the rise in transaction costs (and the potential degradation of the user experience) as we have less and less reason to trust what people share online. This places a larger onus on users to prove that what they are saying is authentic and verifiable. The recent launch of Birdwatch from Twitter acts as a community policing mechanism, but it adds to that burden of interacting online. The article gives the example of how, in the past, you could rely on your neighbors for essential support and goods rather than having to purchase, say, your own tools, because you could count on generalized reciprocity, a concept that, like money, overcame some of the shortcomings of the barter system, which required a one-to-one, immediate matching of needs for a transaction to take place.
In those smaller communities, the mechanisms of reputation and repeated interactions also kept deviant behavior to a minimum. In a world of disposable online identities and one-time interactions, it becomes much easier to flout those rules and social contracts in favor of narrow goals, causing harm in the near term without any long-term consequences or accountability.
Will Parler Prevail in Its Antitrust Case Against Amazon? (Knowledge @ Wharton)
In one of the more definitive actions by a large technology company to curb the spread of hate speech online, Amazon de-platformed Parler by revoking its access to Amazon’s cloud services, putting a halt to the problematic information, including hate speech, that was spreading unchecked on Parler. Parler swiftly responded with legal action against Amazon, alleging that Amazon acted to favor Twitter over them and that it was biased against conservatives.
Yet, as highlighted in the article, the antitrust allegations are flimsy at best and would require strong factual backing before the courts would consider the case viable for adjudication. Amazon’s defense is that it had issued Parler a sufficient number of warnings to monitor and remove speech that violated community guidelines.
Ultimately, this raises important questions about the power that large technology providers such as Amazon hold and the responsibility that should accompany that power (if there should be something to that effect) to help the technology ecosystem achieve a healthier posture. With lots of talk around Section 230 in the US and other pieces of legislation around the world, perhaps 2021 will be the year we gain a clearer understanding of the roles and responsibilities these companies have in helping us create an information ecosystem that supports our welfare and well-being rather than devolving into a toxic cesspool.
From elsewhere on the web:
Why companies are thinking twice about using artificial intelligence (Fortune)
Our founder Abhishek Gupta is quoted in this piece, which looks at the U.S. Capitol riot, disinformation, and how AI fits into it all.
The Abuse and Misogynoir Playbook (MIT Media Lab)
The MIT Media Lab covered the opening piece of our report (co-authored by MIT’s Danielle Wood, Katlyn Turner, and Catherine D’Ignazio) which goes into detail about Dr. Timnit Gebru’s firing, describes the Playbook tactics Google used, and puts the tactics in historical relief with contributions by other Black women.
In case you missed it:
The State of AI Ethics Report (January 2021)
The State of AI Ethics Report (January 2021) captures the most relevant developments in AI Ethics since October of 2020. To save you time and quickly get you up to speed on what happened in the past quarter, we’ve distilled the research & reporting around 8 key themes.
The report includes exclusive content written by world-class AI Ethics experts. This edition opens with The Abuse and Misogynoir Playbook — a 20-page joint piece by a group of MIT professors & research scientists (Danielle Wood, Katlyn Turner, Catherine D’Ignazio) about the mistreatment of Dr. Timnit Gebru by Google and the broader historical significance around this event.
To delve deeper, read the full report here.
Guest contribution (poem) by Ramya Malur Srinivasan:
A candid talk
People say I can do this and that
Without knowing what I am really good at!
So, I thought of clearing the air
Hoping to make everyone a little more aware!
My knowledge is limited to the data I see
Using which I predict how things can be
So if my data has cars only on the road
I might fail to identify a car by the shore!
If the data only shows doctors as men
Little can I know that doctors can also be women
I am not yet good at inferring cause and consequence
I do not possess a great deal of common sense!
I need lots of data to learn
From this, patterns I churn
So while humans can recognize objects by just looking once
I need images in tons!
Of course, I am useful to humans in many ways
But that need not be true always
I have described my drawbacks in a manner quite frank
So that my predictions are not followed point-blank!
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — email support@montrealethics.ai and we’ll get back to you.
Take Action:
Events:
Can We Engineer Ethical Artificial Intelligence?
The Montreal AI Ethics Institute is partnering with NEOACM to consult the public on whether we can engineer ethical AI at all. The discussion will span topics including the ambiguity of ethical AI, how to filter through the hype, and how AI ethics can be incorporated into the professional ethics of engineers.
📅 February 5th (Fri) from 1 PM - 2:30 PM EST.
🎫 Get tickets
If you know of events that you think we should feature, let us know at support@montrealethics.ai
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will be taken to the Substack page, where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
If you’d prefer to make a one-time donation, visit our donation page or click the button below.
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this newsletter and know someone else who could benefit from it, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai