The AI Ethics Brief #39: Sociology of AI ethics, robot testimony, believability gap, and more ...

Has the Turing Test become obsolete?

Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.


NOTE: We finally named our weekly newsletter! It’s now The AI Ethics Brief. Thanks for continuing to read, share, and support it.


Consider supporting our work through Substack

We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.

Read why Paul Morrissey (Global Ambassador for AI & Big Data Analytics at TM Forum) became a founding supporter of our newsletter:

NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.

Support our work today with as little as $5/month


This week’s overview:

What we are thinking:

  • The Sociology of AI Ethics

  • To achieve Responsible AI, close the "believability gap"

Research summaries:

  • Reliabilism and the Testimony of Robots

  • Artificial Intelligence – Application to the Sports Industry

Article summaries:

  • The year algorithms escaped quarantine: 2020 in review (AlgorithmWatch)

  • This is the Stanford vaccine algorithm that left out frontline doctors (MIT Tech Review)

  • How Your Digital Trails Wind Up in the Police’s Hands (Wired)

  • Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match (NY Times)

  • The Turing Test is obsolete. It’s time to build a new barometer for AI (Fast Company)

  • Timnit Gebru’s Exit From Google Exposes a Crisis in AI (Wired)


But first, our call-to-action of the week:

The Montreal AI Ethics Institute is committed to democratizing AI Ethics literacy. But we can’t do it alone.

Every dollar you donate helps us pay for our staff and tech stack, which make everything we do possible.

With your support, we’ll be able to:

  • Run more events and create more content

  • Use software that respects our readers’ data privacy

  • Build the most engaged AI Ethics community in the world

Please make a donation today.

Donate


What we’re thinking:

The Sociology of AI Ethics by Iga Kozlowska, Nga Than, and Abhishek Gupta

AI research, development, and deployment suffer from “a social sciences deficit.” AI systems are built predominantly by technologists, using datasets that are often divorced from their collection contexts. Only after these systems have been developed and deployed in society do social scientists evaluate their harms and benefits. We believe social science perspectives should be introduced earlier in the AI workflow, from conception through development, deployment, and maintenance. The authors of this column will bring sociological perspectives to the AI and AI ethics communities.

To delve deeper, read the full article here.

To achieve Responsible AI, close the "believability gap"​ by Abhishek Gupta

The outpouring of interest in AI ethics over 2019 and 2020 has led to many people sharing insights, best practices, tips, and tricks that can help us achieve Responsible AI.

But as we head into 2021, there are still huge gaps in how AI ethics is being operationalized. Part of this stems from what I call the believability gap, which needs to be bridged before we can achieve widespread adoption of these practices in a way that actually creates positive change.

To delve deeper, read the full article here.


Research summaries:

Reliabilism and the Testimony of Robots by Billy Wheeler

In this paper, the author Billy Wheeler asks whether we should treat the knowledge gained from robots as a form of testimonial versus instrument-based knowledge. In other words, should we consider robots as able to offer testimony, or are they simply instruments similar to calculators or thermostats? Seeing robots as a source of testimony could shape the epistemic and social relations we have with them. The author’s main suggestion in this paper is that some robots can be seen as capable of testimony because they share the following human-like characteristic: their ability to be a source of epistemic trust. 

To delve deeper, read the full summary here.

Artificial Intelligence – Application to the Sports Industry by Andrew Barlow and Sathesh Sriskandarajah

PwC’s AI in sport report examines whether AI has become a mainstay of modern sport, given its analytical prowess, and surveys AI’s numerous achievements in the sporting field. As these achievements mount and digitalization becomes ever more prominent, the human aspect of sports appears more fragile than ever, with no sign of an end to technology’s encroachment.

To delve deeper, read the full summary here.


Article summaries:

The year algorithms escaped quarantine: 2020 in review (AlgorithmWatch)

As we covered in a previous edition of the newsletter, 2020 was a breakout year for the deployment of automated systems, and this article from AlgorithmWatch sheds light on a few things that went right and a lot of things that went wrong. Companies made bold claims about their ability to combat the pandemic through the services offered by their automated systems. For example, BlueDot claimed to have detected the pandemic before anyone else, but those claims remain unaudited. That has been the pattern for many of the automated systems being pushed out: unsuspecting governments and other entities looking to gain an edge over the pandemic were sold “snake oil” by cunning entrepreneurs (a clever phrase from AlgorithmWatch), purchasing solutions that had yet to be battle-tested.

Detecting temperature, and from it someone’s potential infection, via video feeds was a subversive way of selling surveillance software, and many countries rolled back hasty deployments after facing pressure from their populace. This is one reason the Montreal AI Ethics Institute stresses the need for civic competence in AI ethics: the more aware we are as everyday citizens, the more meaningfully we can push back against systems that violate basic rights.

The use of automated systems in government services rightfully faced backlash, as in Austria, where a nebulous “employability” score was used to rank people seeking unemployment help. Amsterdam and Helsinki led the way in publicly documenting the algorithmic systems they use, providing a greater degree of transparency that many more places could emulate as a starting point. A chilling observation in the article that really caught my attention: with the failure of some traditional means of communication in the wake of other natural disasters (yes, more than just the pandemic ravaged 2020), a lot of essential communication and rescue effort was left to the whims of newsfeed algorithms, something we need to be more conscious of.

This is the Stanford vaccine algorithm that left out frontline doctors (MIT Tech Review)

We often talk in this newsletter about complicated systems with probabilistic underpinnings that are hard to scrutinize because of their black-box nature. But even a rules-based algorithm that can be summarized on a single slide (as shown in this article) can wreak havoc when it is wrapped in opacity around how its criteria were picked and how it is applied. Frontline healthcare workers, who have helped us maintain a sense of normalcy by keeping us safe and caring for those who need it most, often at great risk to their own well-being, were conspicuously absent from the first round of vaccinations handed out at Stanford.

Residents rightly protested, highlighting how the system favored administrators and doctors working from home over those in critical positions facing high risks of exposure. This was exacerbated by the lack of diversity among the people involved in creating the formula. The formula was also unnecessarily complicated: other hospitals used much simpler criteria in the interest of getting the vaccine as quickly as possible to those who need it most.

One thing that particularly caught my attention was how administrators hid behind the system’s complexity to justify this egregious decision. We might see more of this in the future: people using complexity as a crutch to rationalize why they aren’t taking more decisive and transparent action on people’s well-being. It was heartening to see the residents call out the administration. But this shouldn’t normalize those affected always having to shoulder the burden of defending their rights; those in power should take on that responsibility, especially within an organization intended to serve the welfare of the community in which it is situated.

How Your Digital Trails Wind Up in the Police’s Hands (Wired)

With Apple’s recent release of guidelines on how developers should protect users’ personal information, we see how sensitive our digital trails are and what impact they can have on people’s lives. The article describes a US case where smartphone data served as the primary source of evidence about a crime: keyword search warrants first yield lists of anonymized users, and police can then ask for the results to be narrowed so they can home in on suspects. This is supplemented by geofence warrants, which reveal a lot of spatially sensitive information about people’s movements.

Another consideration is the risk that such warrants pose to the privacy of those who are not ultimately deemed suspects but whose information was shared nonetheless as a part of the investigation. Additionally, such approaches supercharge the ability of law enforcement to use existing legal instruments in novel ways to exercise more power than is perhaps mandated by law. 

Data collected for innocuous purposes, like showing you the local weather, can be weaponized to figure out individuals’ visitation and travel patterns. This is exacerbated by the lack of clear guidelines on how personal data should be handled and used. Researchers and journalists who spoke with developers found that most are quite unaware of the downstream uses of data by the third parties they integrate to provide services within their apps. Transparency from companies about when they are queried to share information about their users is a starting point for balancing the asymmetry of power in the information ecosystem.

Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match (NY Times)

Are we bound to keep repeating mistakes even when obvious flaws are pointed out to us? It seems so with facial recognition technology: its users can’t seem to wrap their minds around the current pitfalls in its implementation and how it disproportionately harms the most marginalized in our society. What is appalling is that this is the third known instance of someone being wrongly arrested based on a facial recognition match; in each case, the victim was a Black man. The performance of facial recognition systems on minority populations is notoriously bad, yet law enforcement continues to use them, endangering the lives of innocent people.

In this case there was a double whammy: after being falsely arrested because of the facial recognition match, Mr. Parks, the victim, was subjected to another algorithmic system’s decision and wasn’t allowed to leave jail while awaiting trial. New Jersey, where he was arrested, uses an algorithmic risk assessment rather than monetary bail, and the system deemed him risky because of his prior run-ins with the law.

What was heartbreaking in reading this article was that Mr. Parks considered taking a plea deal even though he knew he was innocent, because with his priors he didn’t want to risk a very severe and long sentence. A system that compounds problems is certainly not what facial recognition is sold as; law enforcement presumably procures it to help citizens, but that is not what is actually happening. Just as several US states have called for moratoriums on the use of this technology, we need to apply those calls consistently rather than take a piecemeal approach that lets some people slip through the cracks.

The Turing Test is obsolete. It’s time to build a new barometer for AI (Fast Company)

For those new to the field of AI ethics, there is often a high degree of interest in whether the Singularity is coming, or whether the chatbots around us are “intelligent” enough to pass the Turing Test. The basic premise of the Turing Test is that a machine holds an extended conversation with a human and convinces them that they are speaking with another human rather than a machine. Yet that notion might seem outdated now, given the ubiquity of intelligent applications around us. And in some cases, we might not even want to be fooled into believing that something is human when it isn’t.

One of the great things about this article is that it highlights the different era in which the test was proposed, when computing was limited and expensive and sensors weren’t as widespread as they are today. We might be artificially constraining the test of an AI system’s intelligence to text alone, when today’s systems can combine multimodal information, adding several other dimensions. We might also be limiting a system’s true capabilities in order to make it appear human: with access to cloud computing, answers to fact-based questions (say, the distance between two cities) are instantaneous, and a system must insert artificial pauses to seem human when responding.

Finally, it is worth considering the goals such a test sets for developers in the field. Especially with the spread of misinformation fueled by chatbots and other automated mechanisms, we should be careful to set the right incentives so that the technology is used for beneficial purposes rather than to trick people. Think about the havoc that could be wreaked by GPT-3-backed chatbots producing very believable interactions with humans.

Timnit Gebru’s Exit From Google Exposes a Crisis in AI (Wired)

A huge specter loomed over the field of AI ethics in 2020, leading well into 2021: the firing of Dr. Gebru from Google. While many articles have been written about what happened, and about Google’s failure to shield and protect the very people helping to guide its AI research efforts in the right direction, much less has been said about what we can do together, at the ecosystem level, to make changes.

This article from a colleague of Dr. Gebru makes it quite clear that we can all play some part in steering the development and deployment of these technologies so that they benefit society rather than harm those already disadvantaged by structural inequities. As I highlighted in an earlier piece, we need to become more aware of where automated systems might harm us; critically questioning the design of these systems in relation to their surrounding environment will help us move toward action that actually triggers change, rather than even more theoretical conversations divorced from reality.

Some of the suggestions put forth in the article include exploring workers’ unions for tech workers. AI workers in particular have specialized, hard-to-replace skills, and organizing their labor strength to demand positive action can be a great way to counter some of the negative effects. We can also call on our local and national policymakers to put forward meaningful regulations that center people’s welfare above pure profit motivations. These need to be accompanied by accountability and repercussions to ensure enforceability. Funding for initiatives that help protect these worker rights could come from taxation on the tech giants.

In a world where highly respected scholars are not safe in the work that they do, we significantly risk our ability to effectively regulate and guide the development of technologies that have a major impact on our lives. We need more urgent action, and perhaps 2021 is the year we make that happen together as a community.


From elsewhere on the web:

Civil society calls for AI red lines in the European Union’s Artificial Intelligence proposal (EDRi)

Read the open letter from EDRi to the EU Commission, signed by 60+ civil society orgs (including us), demanding AI red lines for applications threatening fundamental human rights like surveillance, immigration, social scoring, and predictive policing.

Why civic competence in AI ethics is needed in 2021 (Towards Data Science)

Our founder Abhishek Gupta details what you can do at a grassroots level to make a difference in AI ethics, and why civic competence is needed now more than ever.

‘At discussions on AI ethics, you’d be hard-pressed to find anyone with a background in anthropology or sociology’ (The Times of India)

In this interview, Abhishek talks about the unique understanding that social scientists bring to AI ethics regarding bias, inequity, how tech interacts with communities, and the social safety nets needed to address labour impacts.

In case you missed it:

Fairness in Clustering with Multiple Sensitive Attributes by Savitha Sam Abraham, Deepak P., Sowmya S Sundaram

With the expansion and volume of readily available data, scientists have set in place AI techniques that can quickly categorize individuals based on shared characteristics. A common task in unsupervised machine learning, known as clustering, is to identify similarities in raw data and group these data points into clusters. As is frequently the case with decision-making algorithms, notions of fairness are put into the forefront when discriminatory patterns and homogenous groupings start to appear.

For data scientists and statisticians, the challenge is to develop fair clustering techniques that protect the sensitive attributes of a given population, with each sensitive attribute in a cluster proportionately reflecting its share of the overall dataset. Until recently, it appeared mathematically infeasible to achieve statistical parity while balancing more than one sensitive attribute. The authors offer an innovative statistical method, called Fair K-Means, that can account for multiple multi-valued or numeric sensitive attributes, bridging the gap between previously believed incompatible notions of fairness.

To delve deeper, read the full summary here.

Guest post:

If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.


Take Action:

Events:

What can AI do for your business?

If you want to learn about using AI for business, check out this 2-day intensive workshop facilitated by our co-founders Abhishek Gupta and Renjie Butalid in partnership with YES Montreal.

📅 January 30, 31 from 9:00 AM - 12:30 PM EST.
🎫 Get tickets (@ 50% discount)

If you know of events that you think we should feature, let us know at support@montrealethics.ai

Consider supporting our work through Substack


Support our work today with as little as $5/month

If you’d prefer to make a one-time donation, visit our donation page or click the button below.

Donate


Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!

Share The AI Ethics Brief


If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:


If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai