AI Ethics #1: Hello World! Relational ethics, misinformation, animism and more ...
Our first weekly edition covering research and news in the world of AI Ethics
Welcome to the first edition of our weekly newsletter, which will help you navigate the fast-changing world of AI Ethics! Every week we'll dive into research papers that caught our eye, share summaries of them with you, and present our thoughts on how they link with other work in the research landscape. We'll also share brief thoughts on interesting articles and developments in the field.
If someone has forwarded this to you and you want one delivered to you every week, you can subscribe to receive this newsletter by clicking below.
Since this is our first edition, we'd like to start off with a brief introduction to who we are!
Our mission:
Our mission is to help define humanity’s place in a world increasingly characterized and driven by algorithms. We do this by producing tangible, applied technical and policy research on the ethical, safe and inclusive development of AI. Our unique advantage is that Montreal sits at the global leading edge of technical research, while we leverage strong Canadian values of diversity and inclusion.
“Treating AI as inherently good overlooks the important research and development needed for ethical, safe and inclusive applications. Poor data, inexplicable code or rushed deployment can easily lead to AI systems that are not worth celebrating.”
- Abhishek Gupta, World Economic Forum
Our Values:
We are focused on the applied and practical, not theoretical.
We enable citizen empowerment to enhance policy development on the ethical, safe and inclusive development of AI.
We act as a pool of knowledge and resources to enable applied experiments that will build tangible frameworks for addressing ethics, safety and inclusion issues in AI development.
We publish all our research open-source and strive for scientific and technical reproducibility.
More about us at: https://montrealethics.ai/about/
Research papers:
Let's look at some highlights of research papers that caught our attention at MAIEI:
Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society by Carina Prunkl and Jess Whittlestone
A much-needed paper shedding light on a polarized research and practice community that can clearly benefit from more collaboration and a greater understanding of each other's work. The paper proposes a multi-dimensional, spectrum-based approach to delineating near- and long-term AI research along the axes of extremity, capability, certainty and impact. Additionally, it asks for more rigour from the community when communicating research agendas and motives, to allow for greater understanding across this artificial divide. Elucidating differences along these axes and visualizing them reveals how misunderstandings arise; it also highlights ignored yet important research areas, ones that the authors are focused on.
To delve deeper, read our full summary here.
Algorithmic Injustices towards a Relational Ethics by Abebe Birhane and Fred Cummins
This paper, presented at the Black in AI workshop at NeurIPS 2019, elucidates how the current paradigm of research on building fair, inclusive AI systems falls short of addressing the real problems because it takes a narrow, technically focused approach. The paper uses a relational ethics approach to highlight areas for improvement. The key arguments emerging from this characterization are: centring the populations that will be disproportionately impacted; focusing on understanding the underlying context rather than the pure predictive power of the systems; viewing algorithmic systems as tools that can shape and sustain social and moral order; and recognizing that definitions of bias, fairness, etc. are temporal, which means the design and development of these systems must be an iterative process.
To delve deeper, read our full summary here.
AI-Mediated Exchange Theory by Xiao Ma and Taylor W. Brown
The paper puts forth a framework that extends the well-studied Social Exchange Theory (SET) to the study of human-AI interactions via mediation mechanisms. The authors make a case that current research needs more interdisciplinary collaboration between technical and social science scholars, a collaboration hampered by the lack of a shared taxonomy that places research on similar questions on separate grounds. They propose two axes, human/AI and micro/macro perspectives, to visualize how researchers might better collaborate with each other. Additionally, they make a case for how AI agents can mediate transactions between humans and create potential social value as an emergent property of those mediated transactions.
To delve deeper, read our full summary here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest
For the first time, there is a call for the technical community to include a statement of the societal impact of their work. This has sparked a debate between two camps: those who argue that such declarations should be left to experts who study ethics in machine learning, and those who see the requirement as a positive step in bridging the gap between the social sciences and the technical domains. We see this as a great first step in bringing accountability closer to the origin of the work. Additionally, it would be a great way to build a shared vernacular across the human and technical sciences, easing future collaboration.
Ancient animistic beliefs live on in our intimacy with tech
The article brings up some interesting points about how we bond with things that are not necessarily sentient, and how our emotions are not discriminating when it comes to reducing loneliness and imprinting on inanimate objects. People experience surges of oxytocin as a consequence of such bonding experiences, which further reinforces the relationship. This has implications for how increasingly sentient-seeming AI systems might be used to manipulate humans into a “relationship” and steer them towards, for example, making purchases via chatbot interfaces that evoke a sense of trust. The article also argues that such behaviour is akin to animism and, in a sense, forms a response to loneliness in the digital realm, allowing us to continue honing our empathy skills for where they really matter: with other human beings.
Study: Facebook’s fake news labels have a fatal flaw
The article explains why truth labels on stories are not as effective as we might think, because of something called the implied truth effect: when some stories are explicitly marked as false while other false stories go unlabeled, people tend to believe the unlabeled ones are true simply because they lack a label. Fact-checking all stories manually is an insurmountable task for any platform, and the authors of the study mention a few approaches that could potentially mitigate the spread of false content, though none is a silver bullet. There is an active community researching how we might more effectively dispel disinformation, but it is nascent, and with the proliferation of AI systems, more work needs to be done in this arms race between building detection tools and the growing capability of systems to generate believable fake content.
How spreaders of misinformation acquire influence online
The article provides a taxonomy of communities that spread misinformation online and of how they differ in their intentions and motivations. Different strategies can then be deployed to counter the disinformation originating from each of these communities; there is no one-size-fits-all solution, as there might have been had the communities been homogenous in type and distribution. The degree of influence each community wields is a function of five types of capital: economic, social, cultural, time and algorithmic, definitions of which are provided in the article. Understanding all of these factors is crucial in combating misinformation, where the different forms of capital can be drawn on in different proportions to achieve the desired results, something that will prove useful in addressing disinformation around the current COVID-19 situation.
The Second Wave of Algorithmic Accountability
The article explains how the rising interest in ensuring fair, transparent and ethical AI systems, held accountable via mechanisms advocated by research in the legal and technical domains, constitutes the “first wave” of algorithmic accountability, one that challenges existing systems. Actions in this wave need to be carried out incessantly, with constant vigilance over the deployment of AI systems, to avoid negative social outcomes. But we also need to challenge why we have these systems in the first place, and ask whether they can be replaced with something better. For example, instead of making facial recognition systems more inclusive, given that they cause social stratification, perhaps they shouldn’t be used at all. A great point made by the article is that under the veneer of mainstream economic and AI rationalizations, we obscure broken social systems that ultimately harm society at a more systemic level.
Can I Opt Out of Facial Scans at the Airport?
There is a clear economic and convenience case to be made for facial scans: faster processing and boarding times when trying to catch a flight (albeit for the majority, not for those the system judges to be minorities and who hence get subpar performance from it). For the more privacy-minded, there is an option to opt out, though exercising it doesn’t necessarily mean the alternative will be easy: as the article points out, travelers have experienced delays and confusion from airport staff. Often, the alternatives are not presented to travelers at all, giving the false impression that people have to submit to facial recognition systems. Some civil rights and ethics researchers who tested the system got varying mileage out of their experiences, but they urge people to exercise the option as a way to push back against technological surveillance.
Events:
As a part of our public competence building efforts, we frequently host events on subjects related to building responsible AI systems; you can see the complete list here: https://montrealethics.ai/meetup
Here’s our next event, on the subject of disinformation and how it spreads; you can sign up to attend by clicking below:
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If you have feedback on this newsletter, or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai
If someone has forwarded this to you and you like what you read, you can subscribe to receive this weekly newsletter by clicking below