AI Ethics #29: Warning signs, fairness definitions, considerations for closed messaging research, AI & labour in the Global South, and chatbots
Did you know that Facebook charged Biden a higher price than Trump for campaign ads?
Welcome to the 29th edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Photo by janilson furtado on Unsplash
This week’s overview
Research summaries:
Warning Signs: The Future of Privacy and Security in an Age of Machine Learning
Considerations for Closed Messaging Research in Democratic Contexts
Fairness Definitions Explained
Automating Informality: On AI and Labour in the Global South
Article summaries:
Split-Second ‘Phantom’ Images Can Fool Tesla’s Autopilot (Wired)
How to make a chatbot that isn’t racist or sexist (MIT Tech Review)
Attention EU regulators: we need more than AI “ethics” to keep us safe (Access Now)
How the Racism Baked Into Technology Hurts Teens (The Atlantic)
Facebook Charged Biden a Higher Price Than Trump for Campaign Ads (The Markup)
Ethics in tech: are regular employees responsible? (Welcome to the Jungle)
Featured work from our staff:
In 2020, Nobody Knows You’re a Chatbot
Featured guest post this week:
When Algorithms Infer Pregnancy or Other Sensitive Information About People
But first, our AI Ethics Concept of the Week: ‘Unsupervised Learning’
Unsupervised learning is a machine learning technique that teaches AI to recognize patterns in unlabeled datasets. It can be incredibly useful but, given its black box nature, it can also be quite problematic.
Learn about the relevance of unsupervised learning to AI ethics and more in our AI Ethics Living dictionary.
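To make the idea concrete, here is a minimal sketch of unsupervised learning in action (our own example, not from the dictionary, and assuming scikit-learn as the library of choice): a k-means model discovering groups in data that carries no labels at all.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of points with no labels attached; the model only ever sees the raw data.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # the two group centres the algorithm inferred on its own
```

The algorithm recovers the two groups purely from the structure of the data, which is also why its outputs can be hard to audit: there is no ground truth to check them against.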
Research summaries:
Warning Signs: The Future of Privacy and Security in an Age of Machine Learning by Sophie Stalla-Bourdillon, Brenda Leong, Patrick Hall, Andrew Burt
Machine learning (ML) is proving to be one of the most novel technologies of our time, and it is precisely this novelty that raises most of the issues we see within the field. This white paper demonstrates that, while nothing is certain, the warning signs appearing in the ML arena point toward a potentially problematic future for privacy and data security. The lack of established practices and universal standards means that solutions to these warning signs are not as straightforward as with traditional security systems. Nonetheless, the paper shows that solutions are available, but only if we follow the road of interdisciplinary communication and adopt a proactive mindset.
To delve deeper, read our full summary here.
Fairness Definitions Explained by Sahil Verma, Julia Rubin
This paper explains how, because multiple definitions of fairness exist, the same scenario can be fair according to one definition and unfair according to another.
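As a toy illustration of that point (our own example, not drawn from the paper), the snippet below shows predictions that satisfy demographic parity (equal selection rates across groups) while violating equal opportunity (equal true positive rates):

```python
# Toy illustration: the same predictions can satisfy one fairness definition and violate another.
groups = {
    "A": {"y_true": [1, 1, 0, 0], "y_pred": [1, 1, 0, 0]},
    "B": {"y_true": [1, 1, 0, 0], "y_pred": [0, 0, 1, 1]},
}

for name, g in groups.items():
    y_true, y_pred = g["y_true"], g["y_pred"]
    selection_rate = sum(y_pred) / len(y_pred)
    tpr = sum(p for t, p in zip(y_true, y_pred) if t == 1) / sum(y_true)
    print(f"group {name}: selection rate = {selection_rate:.2f}, TPR = {tpr:.2f}")

# Both groups have a 0.50 selection rate (demographic parity holds),
# but their true positive rates are 1.00 vs 0.00 (equal opportunity is violated).
```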
To delve deeper, read our full summary here.
Automating Informality: On AI and Labour in the Global South by Noopur Raval
This paper lays out the implications of India's prevalent informal labour markets and the associated social hierarchies, which impose a double precarity on workers' lives through marginalization by both digital inequities and reinforced societal ones.
To delve deeper, read our full summary here.
Research summary from our learning community:
Considerations for Closed Messaging Research in Democratic Contexts by Connie Moon Sehat, Aleksei Kaminski
Closed messaging apps such as WhatsApp, Facebook Messenger, and WeChat have grown in use in recent years and can act as a political means of spreading information. In studying issues around election-related communications, researchers face ethical conundrums due to the encrypted, private nature of group chats. Sehat and Kaminski present a review of four models used by researchers: voluntary contribution, focused partnerships, entrance with identification, and entrance without identification. They conclude by posing and analyzing the complexities of six ethical questions that researchers consider either implicitly or explicitly prior to collecting and analyzing closed messaging data. These questions touch upon issues of public vs. private chats, data ownership, informed consent, insight sharing, and conflicts of interest.
To delve deeper, read the full summary here.
What we’re thinking:
In 2020, Nobody Knows You’re a Chatbot by Connor Wright
This is joint work between the Montreal AI Ethics Institute and Fairly.AI, where Connor is working as a research intern.
The classic 90s adage that "On the internet, nobody knows you're a dog" has evolved. Instead of nobody knowing what kind of being you are, the tagline now refers to whether you're interacting with someone who actually exists at all. The rise of chatbots has called into question what the very core of conversation ought to be, along with the ethical issues that come with it. This piece explores what a chatbot is, who has taken this issue seriously, and whether chatbots can actually be a force for good.
To delve deeper, read the full piece here.
Article summaries:
Split-Second ‘Phantom’ Images Can Fool Tesla’s Autopilot (Wired)
ML security is a burgeoning field that is slowly gaining traction with practitioners in both the security and ML domains. This article points out how semi-autonomous vehicles (AVs) can be compromised by flashing "phantom" images, something a human driver would likely ignore, to trigger crashes, halts, and other unwanted behaviour from the self-driving system. One way the researchers do this is by injecting a few frames of a road sign into a video playing on a billboard, which can confuse the camera system on an AV. Testing this on the Tesla and Mobileye systems, they were able to elicit unwanted behaviour such as the vehicle registering an incorrect speed limit or halting when flashed with these phantom images. The images need not persist: appearing for as little as 0.42 seconds was enough to befuddle the Tesla system, and 0.125 seconds for the Mobileye system.
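To put those durations in perspective, here is a rough back-of-the-envelope calculation (ours, assuming common video refresh rates rather than the researchers' exact hardware) of how few frames a phantom needs to occupy:

```python
# Illustrative arithmetic only: frames occupied by a phantom image at common refresh rates.
for duration_s in (0.42, 0.125):
    for fps in (24, 30, 60):
        print(f"{duration_s} s at {fps} fps ≈ {round(duration_s * fps)} frames")
```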
To push their research even further, they attempted to find the least noticeable areas within a video in which to inject these phantom images so as to evade detection by the human driver. While there have been previous demonstrations of attacks, such as placing cheap stickers on the road to confuse an AV's lane-following system, those attacks leave behind forensic evidence. The novelty of this attack is that it can be executed remotely and leaves behind little evidence. It also requires less specialized expertise than some previously demonstrated attacks.
Other researchers and operators at companies like Cruise have countered that vehicles with higher degrees of autonomy rely on multiple sensors to make decisions, including LIDAR, which is unaffected by such attacks. AV manufacturers also argue that the present version of the technology isn't meant to be used without human supervision, but that is not what happens in practice. Hence, such attacks remain valuable demonstrations of the brittle nature of existing systems; driver and pedestrian safety must be assured before we can have AVs roaming the world freely.
How to make a chatbot that isn’t racist or sexist (MIT Tech Review)
GPT-3 has shown tremendous capabilities, to the point that some people believe it exhibits human-like intelligence, which makes its limitations disorienting to grasp. But regular readers of this newsletter will not be surprised to learn that such large-scale models (GPT-3 is the largest language model ever built) come with "internet-scale" biases, as the system's creators like to say.
This article points out some of the problematic outputs from the system, which we encourage readers to look at in the original piece since we don't want to reproduce that vitriol here. Given that we are solutions-oriented, we note that identifying the problem is only the first step toward solving it. The article mentions a recent workshop on safe conversational AI that made some recommendations on how to address these problems.
Specifically, they point to bolting on filters that can bleep out harmful content (though identifying that content is itself a challenge). Another approach is to avoid using training data from contentious domains like politics and religion, but this risks throwing out good training data along with the bad.
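A minimal sketch of the "bolt-on filter" idea follows (our own illustration with a hypothetical blocklist; real systems would rely on a trained toxicity classifier rather than keyword matching):

```python
import re

# Hypothetical blocklist; in practice this would be a trained classifier,
# since keyword matching is both over- and under-inclusive.
BLOCKLIST = {"offensive_term_1", "offensive_term_2"}

def filter_reply(reply: str, fallback: str = "Sorry, I'd rather not respond to that.") -> str:
    """Return the chatbot's reply, or a safe fallback if it contains blocked terms."""
    tokens = set(re.findall(r"[a-z']+", reply.lower()))
    return fallback if tokens & BLOCKLIST else reply

print(filter_reply("Hello there!"))  # passes through unchanged
```

As the article notes, the hard part is not wiring up the filter but deciding what counts as harmful in the first place.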
Ultimately, legislative approaches combined with technical interventions will be the most effective strategy for mitigating the harmful effects of machine learning models gone awry. Above all, sandboxing and testing systems prior to release will be essential, a recommendation made by the workshop participants, who "adversarially" tried to get the model to spew harmful content, exactly what trolls will inflict on the system once it is released into the wild.
Attention EU regulators: we need more than AI “ethics” to keep us safe (Access Now)
This article calls attention to a recent position paper put forward by Denmark (along with 14 other nations) requesting that the EU not place overly onerous ethical requirements on AI systems lest they stifle innovation. The reasoning is that too many systems might otherwise be categorized as "high-risk", prematurely limiting the deployment of innovative technologies. In effect, the signatories ask for a more objective evaluation methodology that would prevent such over-classification.
But such an approach strips even more power from those who are already marginalized, disenfranchising them from questioning the deployment of a system obscured behind these assessments. Additionally, ignoring the human rights implications of these systems means that if, say, your city uses live facial recognition, you no longer have the ability to protest safely without potentially facing repercussions. If the risk assessments themselves are not publicly examinable, then the power asymmetries between those who build the system and those on whom it is used are further entrenched.
The report mistakenly assumes that AI ethics principles will be enough, which, as we've covered in the past (including the importance of virtues in this discussion), they certainly are not. By not drawing any red lines around cases where there is consensus that misuse outweighs beneficial use, the report misses an important opportunity to steer the conversation in the right direction.
While the report doesn't rule out red lines entirely, it lauds potential benefits over the actual harms taking place right now, and it advocates purely technical fixes as a way to address the problems. Soft law, self-regulation, and principles are inadequate; we need stronger regulations and consultation with those who are affected if we're to do better.
How the Racism Baked Into Technology Hurts Teens (The Atlantic)
For the most part, when we talk about intersectionality in the bias discussion, we focus on ethnic origin, geography, gender, and so on, but there are fewer discussions on the intersection of race and age. This article makes that distinction and backs it up with some solid arguments as to why it is an important issue.
Sustained and pervasive exposure to negative stereotypes has a noticeable influence on how we perceive ourselves. In an age where we spend a significant amount of time online, it is not surprising that online racism plays a big role in our psychological development. Children in particular, whose minds are still malleable, are quite susceptible to the racism they endure online. The effects are manifold: negative impacts on sleep, self-respect, academic performance, relationships, and more.
Using the phrase "technology microaggressions", the author illustrates that they occur far more frequently online than in person. Some offline protections, like parents being able to intervene, are much more limited online, partly because parents often don't understand how platform algorithms work; there is no "stranger danger" script for recommendation systems. Ultimately, the trauma that teens experience in their adolescent years will have lasting impacts and should be addressed proactively.
Facebook Charged Biden a Higher Price Than Trump for Campaign Ads (The Markup)
No matter which side of the political spectrum you fall on, when it comes to preserving the integrity of our democratic institutions and upholding legislation around campaigning, we can't let third-party corporations become arbiters of what flies and what doesn't.
In this excellent piece of investigative journalism, The Markup uncovered a discrepancy between how much Biden and Trump were charged for the political ads they placed on the platform. Not only that, the difference was much more pronounced in swing states, where votes matter more than in the rest of the country. Such discrepancies have cost the Biden campaign millions of dollars more to achieve the same impact, in terms of impressions, from its social media strategy compared to Trump's.
Facebook ads work through an auction in which costs are determined dynamically, and it is hard to ascertain all the factors that go into determining the price (as many researchers have lamented from a transparency perspective). Under campaign finance regulations, broadcasters are not allowed to charge different rates to different candidates, for example for TV and radio ads. But Facebook is apparently able to evade this through the loophole that it does not set prices directly; they are determined algorithmically by market forces, taking into account factors like the reach and engagement the content will generate and its relevance.
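To see how algorithmic pricing can charge different candidates different effective rates without anyone setting those rates by hand, here is a simplified quality-weighted second-price auction (our own sketch; Facebook's actual pricing formula is not public and is certainly more complex):

```python
def quality_weighted_auction(bids):
    """bids: list of (advertiser, bid_dollars, quality_score).

    Ads are ranked by bid * quality; the winner pays the smallest amount that
    would still have won (a standard generalized second-price rule).
    """
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    winner_name, _, winner_quality = ranked[0]
    _, runner_bid, runner_quality = ranked[1]
    price = (runner_bid * runner_quality) / winner_quality
    return winner_name, round(price, 2)

# Identical $10 bids, but different predicted engagement ("quality") scores:
print(quality_weighted_auction([("Campaign A", 10.0, 0.9), ("Campaign B", 10.0, 0.6)]))
# ('Campaign A', 6.67): the better-scoring ad pays a third less for the same bid.
```

Under rules like this, two campaigns bidding identical amounts can end up paying very different effective prices depending on how the platform scores their content and audiences, which is exactly the opacity the article highlights.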
While this may be a technicality, it raises serious concerns about how candidates can reach their constituents in the last mile of the election cycle. Still, surfacing the problem is a start, and we hope that more transparency and greater clarity in how campaign finance regulations are applied will improve the situation over time.
Ethics in tech: are regular employees responsible? (Welcome to the Jungle)
Employees express a strong preference not only to understand the societal impacts of the systems they are building but also to have meaningful control over the decision-making process. But the first step is being able to have open discussions within the organization without facing unnecessary censure for expressing dissenting opinions.
If critical feedback about products and services is actively discouraged, employees will over time "learn" a sense of helplessness. Employees in especially high-demand positions like machine learning have considerable market power and should use it actively to voice their concerns. From a performance review perspective, efforts to bring a more ethical lens to the development process should be rewarded rather than perceived as an "extracurricular" activity, and they should be reflected in employees' formal compensation.
Having designated positions for disseminating ethics-related information across the organization, and for forming bridges between different efforts within it, will also be crucial to the successful adoption of ethical principles.
From elsewhere on the web:
Where in the World is AI? (Map created by AI Global)
Everyone is talking about AI, but how and where is it actually being used? AI Global has mapped out interesting examples of where AI has been harmful and where it has been helpful. Cases are aggregated from AI Global, Awful AI, and Charlie Pownall/CPC & Associates, totalling over 300 examples of responsible and unethical AI that might help guide research and trust discussions.
To delve deeper, go to map.ai-global.org.
Superheroes of Deep Learning Vol 1: Machine Learning Yearning (Approximately Correct)
Need a break from our report? Here's a lighter read — a comic co-created by our artist-in-residence Falaah Arif Khan and Professor Zachary Lipton of Carnegie Mellon. How much data does it take to rescue a cat from a tree?
To delve deeper, read the full comic.
AI (Artificial Intelligence) Governance: How To Get It Right (Forbes)
Our researcher Erick Galinkin (also principal AI researcher at Rapid7) was quoted: “Accountability—that is, having an individual who is responsible for the decisions made by the algorithm—is a principle championed by organizations like Rapid7, Microsoft, the Partnership on AI, and the Montreal AI Ethics Institute, but does not have the same purchase with all businesses and governments who leverage AI.”
To delve deeper, read the full article.
A radical new technique lets AI learn with practically no data (MIT Tech Review)
Our researcher Ryan Khurana was quoted: “Most significantly, ‘less than one’-shot learning would radically reduce data requirements for getting a functioning model built.” This could make AI more accessible to companies and industries that have thus far been hampered by the field’s data requirements. It could also improve data privacy because less information would have to be extracted from individuals to train useful models.
To delve deeper, read the full article.
In case you missed it:
The State of AI Ethics Report (Oct 2020)
Here's our 158-page report on The State of AI Ethics (October 2020), distilling the most important research & reporting in AI Ethics since our June report. This time, we've included exclusive content written by world-class AI Ethics experts from organizations including the United Nations, AI Now Institute, MIT, Partnership on AI, Accenture, and CIFAR.
To delve deeper, read the full report here.
Guest post:
When Algorithms Infer Pregnancy or Other Sensitive Information About People by Eric Siegel, PhD
Machine learning can ascertain a lot about you — including some of your most sensitive information. For instance, it can predict your sexual orientation, whether you’re pregnant, whether you’ll quit your job, and whether you’re likely to die soon. Researchers can predict race based on Facebook likes, and officials in China use facial recognition to identify and track the Uighurs, a minority ethnic group.
Now, do the machines actually “know” these things about you, or are they only making informed guesses? And, if they’re making an inference about you, just the same as any human you know might do, is there really anything wrong with them being so astute?
To delve deeper, read the full piece here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Take Action:
Call for help: survey on responsible tech
Ethical Intelligence is conducting research on the current steps being taken by SMEs and entrepreneurs with regard to responsible technology. This is part of a larger endeavour to understand the importance of ethics in the wider tech ecosystem. As a thank you for taking the survey, you will receive access to EI’s Crash Course on Trust and Transparency.
MAIEI Learning Community
Interested in discussing some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Events:
As part of our public competence-building efforts, we frequently host events on different subjects related to building responsible AI systems. You can see a complete list here: montrealethics.ai/meetup
Details for our next event coming soon!
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this newsletter and know someone else who could benefit from it, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai