AI Ethics Brief #97: Sustainable conversational AI, hiring algorithms and junk science, online ban evasion, and more ...
What struggles for recognition do we face in the era of facial recognition technologies?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~33-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
Hi readers! We are really excited to be returning after an unplanned break precipitated by a few changes in each of our staff members’ lives (all positive!). Abhishek has started a new role at BCG as their Senior Responsible AI Leader & Expert, Masa has started a new role at EY as a Consultant on the People Advisory Services team, and Connor is wrapping up his final semester at the University of Exeter.
We have missed you and your feedback on every edition we publish, but we are glad to announce that there is a backlog of great content coming your way! To catch up, we will be publishing twice per week for the next five weeks. This will take us up to AI Ethics Brief #107 on June 16th, 2022, at which point we will return to our regular weekly publishing schedule. We truly appreciate you sticking with us and look forward to resuming our conversations on AI ethics. Thanks!
This week’s overview:
✍️ What we’re thinking:
Towards Sustainable Conversational AI
Hiring Algorithms Based on Junk Science May Cost You Your Dream Job
Connor’s Review of “South Korea as a Fourth Industrial Revolution Middle Power?”
🔬 Research summaries:
The struggle for recognition in the age of facial recognition technology
Characterizing, Detecting, and Predicting Online Ban Evasion
How the TAII Framework Could Influence Amazon's Astro Home Robot Development
Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies
📰 Article summaries:
China uses AI software to improve its surveillance capabilities
New AI Regulations Are Coming. Is Your Organization Ready?
It will soon be easy for self-driving cars to hide in plain sight. We shouldn’t let them.
📖 Living Dictionary:
What is anthropomorphism?
🌐 From elsewhere on the web:
The Montreal AI Ethics Institute becomes a Partner at the Partnership on AI!
Could ethical AI help underrepresented groups get ahead at work?
💡 ICYMI
Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants
But first, our call-to-action this week:
As some of you already know, I have taken on a new role as the Senior Responsible AI Leader & Expert at the Boston Consulting Group (BCG)! I will still continue my role at the Montreal AI Ethics Institute and I look forward to engaging with you all! You can learn more about what motivated my Responsible AI journey and what I’ll be doing at BCG.
✍️ What we’re thinking:
Towards Sustainable Conversational AI
Product launches are complex. Conversational AI, like many AI products, relies on large datasets and diverse teams to reach the finish line. However, these launches are often iterative, requiring further optimization to satisfy customers’ needs.
Sustainable AI is the quest to build AI systems that work as well for the company deploying them as they do for the people they serve. This type of sustainability means rethinking the lifecycle of AI products so that the technology can scale while minimizing costs to consumers, society, and the environment.
Without the proper mechanisms in place, today’s big product launch can become yesterday’s embarrassing mistake. Beyond maintenance costs, concerns around integration, secure data gathering and ethical design must be addressed. Creating AI that is compatible with scarce resources requires proper AI governance, collaborative research and strategic deployment.
To delve deeper, read the full article here.
Hiring Algorithms Based on Junk Science May Cost You Your Dream Job
Remember the movie Minority Report? Starring Tom Cruise, the movie presented a world in which police apprehended people before they committed crimes, based on the foreknowledge of three psychics.
What felt like a science fiction dystopia when the movie was released in 2002 became a reality in US courtrooms soon after. The COMPAS algorithm predicts a defendant's risk of committing another misdemeanor or felony within 2 years of assessment, based on 137 features about the individual and their criminal record. These predictions have been used in courtrooms around the US to inform decisions ranging from bond amounts to the length of sentences, as reporting by ProPublica revealed in 2016.
The report produced a public outcry because the algorithm's output was biased against Black defendants, falsely flagging them as future reoffenders far more often than white defendants. This finding catalyzed a great deal of important work on bias in algorithms. It has sensitized us to the problem of bias in algorithmic decision-making and given rise to dozens of AI fairness tools that check algorithms for differential impact.
To delve deeper, read the full article here.
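To make the idea of checking for differential impact a bit more concrete, here is a minimal, illustrative sketch of the kind of check such fairness tools perform: comparing false positive rates across demographic groups for a binary risk prediction. The data and column names are made up for illustration; this is not ProPublica's analysis or any specific tool's code.

```python
import pandas as pd

# Hypothetical data: `group` is a demographic label, `reoffended` is the observed
# outcome, and `flagged_high_risk` is the algorithm's binary prediction.
df = pd.DataFrame({
    "group":             ["A", "A", "A", "B", "B", "B"],
    "reoffended":        [0,   0,   1,   0,   0,   1],
    "flagged_high_risk": [1,   0,   1,   0,   0,   1],
})

def false_positive_rate(frame: pd.DataFrame) -> float:
    """Share of people who did not reoffend but were still flagged as high risk."""
    did_not_reoffend = frame[frame["reoffended"] == 0]
    return (did_not_reoffend["flagged_high_risk"] == 1).mean()

# A large gap between groups is the kind of disparity the ProPublica analysis surfaced.
for group, frame in df.groupby("group"):
    print(group, false_positive_rate(frame))
```

In practice, fairness toolkits compute several such group-wise metrics (false positive rates, false negative rates, selection rates) and flag large gaps between groups for human review.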
Connor Wright, Partnerships Manager at the Montreal AI Ethics Institute, shares his review of "South Korea as a Fourth Industrial Revolution Middle Power?"
🔬 Research summaries:
The struggle for recognition in the age of facial recognition technology
Facial recognition technology does not always live up to its promises – numerous examples show that it often fails to recognize people’s identity or characteristics. In this article, the author argues that such misrecognition by FRT can harm people’s self-respect and self-esteem.
To delve deeper, read the full summary here.
Characterizing, Detecting, and Predicting Online Ban Evasion
Online ban evasion is the act of circumventing suspensions on online platforms like Wikipedia, Twitter, and Facebook. Focusing specifically on malicious ban evasion, this paper (to appear in ACM WebConf 2022) employs machine learning and data science to understand the behavior of online ban evaders and develops tools that can help moderators identify instances of ban evasion early and reliably.
To delve deeper, read the full summary here.
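As a purely illustrative companion to the summary above, here is a minimal sketch of how ban evasion detection can be framed as a supervised classification problem over account behavior. The feature names, numbers, and model below are assumptions for illustration only, not the features or models used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features: edits per day, fraction of edits reverted,
# account age in days, and behavioral similarity to a previously banned account.
X_train = np.array([
    [40, 0.60,   2, 0.9],   # looks like a known evader
    [ 3, 0.05, 800, 0.1],   # ordinary long-standing account
    [25, 0.45,   5, 0.8],
    [ 1, 0.00, 300, 0.2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = later confirmed as a ban evader

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Score a new account; moderators could prioritize review of accounts above a threshold.
new_account = np.array([[30, 0.50, 3, 0.85]])
print(clf.predict_proba(new_account)[0, 1])  # estimated probability of evasion
```

The point of such tooling is not to automate bans but to surface likely evaders early enough for human moderators to review them.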
How the TAII Framework Could Influence Amazon's Astro Home Robot Development
In September 2021, Amazon announced its new Astro robot at a product event: a home robot designed to be much more than a home security and safety device. Since the prototype was announced, experts and users have raised concerns about privacy and transparency. This paper discusses the risks and responsibilities Amazon faces with this new product, the social impact that might result from its use, and how the Trustworthy Artificial Intelligence Implementation (TAII) Framework could influence its development toward an ethically grounded, trustworthy AI system.
To delve deeper, read the full summary here.
Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies
Audit studies have a long and rich history within the social sciences. In this paper, the authors draw upon that history to see how it can inform algorithmic auditing, a way of examining discrimination in algorithms that is becoming increasingly popular.
To delve deeper, read the full summary here.
Jack Clark Presenting the 2022 AI Index Report
Who currently leads the way in terms of AI hiring? Is investment in AI simply concentrated in the same companies? Jack Clark guides us through Stanford HAI's 2022 AI Index Report, presenting the highlights and diving deeper into its individual chapters.
To delve deeper, read the full summary here.
📰 Article summaries:
China uses AI software to improve its surveillance capabilities
What happened: Beijing's current approach to surveillance collects data, but leaves the responsibility of organizing the data up to law enforcement. It is unable to connect one’s personal details to a real-time location, except at security checkpoints. The “one person, one file” AI software is drastically improving this system, as it is able to learn independently and optimize the accuracy of file creation. This is particularly important in the COVID-19 era, as faces that are partially blocked or masked can also be archived relatively accurately.
Why it matters: “One person, one file” can make sense of vast amounts of data, using algorithms and machine learning to create customized files for individuals that update themselves automatically as the software sorts the data. The implications of this AI software are far-reaching. Beijing says its monitoring is necessary to combat crime and fight the spread of COVID-19. Meanwhile, 22 tech companies, including SenseTime, Huawei, and Megvii, now offer such software, with Huawei stating that a partner had developed the one person, one file application in its smart city platform.
Between the lines: Unfortunately, but unsurprisingly, the use of this software can lead to negative consequences. Human rights activists point out that the country is building a surveillance state that not only infringes on privacy, but also unfairly targets certain groups, such as the Uyghur Muslim minority. Since one person, one file would be coupled with facial recognition technology, the system could identify whether an individual was Uyghur and feed into early warning systems for the police. For example, one of the 50 tenders analyzed by Reuters was from a Party organ that sought a database of Uyghur and Tibetan residents to facilitate “finding the information of persons involved in terrorism.”
New AI Regulations Are Coming. Is Your Organization Ready?
What happened: Regulators and lawmakers have recently announced guidelines for regulating artificial intelligence, which has highlighted the rapidly changing nature of regulatory frameworks in this field. Three key trends have emerged amongst the majority of proposed laws on AI: (1) The requirement to conduct “algorithmic impact assessments” to document AI risks and how they have been minimized, if not resolved. (2) The emphasis on accountability and independence, which requires that each AI system be tested for risks and that the data scientists, lawyers, etc. evaluating the AI have different incentives than those of the frontline data scientists. (3) The need for continuous review of AI systems, even after impact assessments and independent reviews have occurred.
Why it matters: Standards for regulating AI are becoming increasingly clear, so organizations must be proactive to ensure their existing AI remains compliant. These three commonalities that have been identified amongst the regulations empower companies to take concrete actions right now in an effort to ensure that their systems do not go against any existing and future laws.
Between the lines: The last trend is perhaps the most fascinating, because it draws attention to the fact that “risk management is a continual process.” This is particularly important in the context of AI risks, because they change over time. In practice, this trend will likely present itself as regularly performed audits and reviews, but it will be interesting to observe who will be responsible for these reviews and what kind of timelines will be implemented.
It will soon be easy for self-driving cars to hide in plain sight. We shouldn’t let them.
What happened: Should autonomous cars be labeled? In other words, should it be clear to other road users that a vehicle is driving itself? In the largest survey of citizens’ attitudes to self-driving vehicles, conducted as part of the Driverless Futures project at University College London, 87% of the 4,800 UK citizens surveyed answered ‘yes.’ However, a smaller sample of experts in the field was less convinced, with only 44% agreeing with the statement.
Why it matters: The arguments for and against this topic revolve around the various implications of labeling. For example, one of the points made against labeling highlights the ways in which it could affect data collection. “If a self-driving car is learning to drive and others know this and behave differently, this could taint the data it gathers.” On the other hand, most individuals would agree with the idea that they have a right to know when they are interacting with a self-driving vehicle.
Between the lines: Up to this point, self-driving car companies have been able to decide how to advertise themselves, which has resulted in a lack of standardization and has negatively affected public trust. It should be noted that this self-driving vehicle debate points to the larger issue of regulation. Oftentimes, “the developers of emerging technologies, who often portray them as disruptive and world-changing at first, are apt to paint them as merely incremental and unproblematic once regulators come knocking.” An honest assessment of the implications of novel technologies will be needed to determine the future regulatory landscape.
📖 From our Living Dictionary:
What is anthropomorphism?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
The Montreal AI Ethics Institute becomes a Partner at the Partnership on AI!
We’re proud to announce that we’ve joined Partnership on AI! Machine learning and artificial intelligence have transformed nearly every industry, so there’s never been a more important time to develop guidelines for this technology.
By joining PAI’s community of 100+ organizations from academia, civil society, media, and industry, we’re eager to address the most pressing questions regarding the future of AI.
Could ethical AI help underrepresented groups get ahead at work?
We’re proud to see our faculty director, Dr. Marianna Ganapini, featured in this article in The Globe and Mail.
Dr. Ganapini notes that it’s possible to counteract potential harms by leading with a “value sensitive design strategy,” which lays out “actionable steps for designing technology in accordance with our values.” A value-sensitive design process lets stakeholders figure out what values they want to embed in a system, design that AI system in accordance with those values, and then find ways to check whether the system behaves ethically, Dr. Ganapini says.
💡 In case you missed it:
Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants
Broad adoption of machine learning systems could usher in an era of ubiquitous data collection and behaviour control. However, this is only one potential path for the technology, argue Gerald C. Kane et al. Drawing on emancipatory pedagogy, this paper presents design principles for a new type of machine learning system that acts on behalf of individuals within an oppressive environment.
To delve deeper, read the full article here.
Take Action:
As some of you already know, I have taken on a new role as the Senior Responsible AI Leader & Expert at the Boston Consulting Group (BCG)! I will still continue my role at the Montreal AI Ethics Institute and I look forward to engaging with you all! You can learn more about what motivated my Responsible AI journey and what I’ll be doing at BCG.