AI Ethics #38: Prudent public-sector procurement of AI, backdoors in AI systems, ethics of emotion AI, and more ...
Did you know that you could use adversarial machine learning to design more robust computer vision systems?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Photo by Stefan Steinbauer on Unsplash
This week’s overview:
What we are thinking:
Prudent Public-Sector Procurement of AI Products
Research summaries:
The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior
The Ethics of Emotion in AI Systems
“I Don’t Want Someone to Watch Me While I’m Working”: Gendered Views of Facial Recognition Technology in Workplace Surveillance
Article summaries:
If not AI ethicists like Timnit Gebru, who will hold Big Tech accountable? (Brookings)
Twitter joins Facebook and YouTube in banning Covid-19 vaccine misinformation (Vox)
Unadversarial examples: Designing objects for robust vision (Microsoft Research)
In AI ethics, “bad” isn’t good enough (Amanda Askell’s blog)
AI research survey finds machine learning needs a culture change (VentureBeat)
Triggerless backdoors: The hidden threat of deep learning (TechTalks)
But first, our call-to-action of the week:
As the year kicks off, we hope that you have the opportunity to reset and prioritize the most important goals for yourself in 2021. We are certainly doing the same at the Montreal AI Ethics Institute and greatly appreciate your support in engaging with our work. Please share this newsletter with others in your network so that we can come together and build a more ethical, safe, and inclusive future for AI.
What we’re thinking:
Prudent Public-Sector Procurement of AI Products by Abhishek Gupta
Lots of things can go wrong when procuring an AI system. Especially when the public sector is the buyer, we have to be extra careful about how we go about it, since the outcome has the potential to affect a lot of people. Our founder (Abhishek) co-wrote this op-ed with our researcher Muriam, discussing what they see as the essential things to consider when engaging in a procurement process. They included a few steps that serve as a minimal framework to get people started on making smarter decisions.
To delve deeper, read the full article here.
Research summaries:
The Robot Made Me Do It: Human–Robot Interaction and Risk-Taking Behavior by Yaniv Hanoch, Francesco Arvizzigno, Daniel Hernandez García, Sue Denham, Tony Belpaeme, and Michaela Gummerum
Can robots impact human risk-taking behavior? In this study, the authors use the balloon analogue risk task (BART) to measure risk-taking behavior among participants when they were 1) alone, 2) in the presence of a silent robot, and 3) in the presence of a robot that encouraged risky behavior. The results show that risk-taking behavior did increase among participants when encouraged by the robot.
To delve deeper, read the full summary here.
The Ethics of Emotion in AI Systems by Luke Stark and Jesse Hoey
In this paper, Luke Stark and Jesse Hoey draw on the dominant theories of emotion in philosophy and psychology to provide an overview of current emotion models and the range of proxy variables used to design AI-powered emotion recognition technology. The disjuncture between the complexity of human emotion and the limits of technical computation raises serious social, political, and ethical considerations that merit further discussion in AI ethics.
To delve deeper, read the full summary here.
“I Don’t Want Someone to Watch Me While I’m Working”: Gendered Views of Facial Recognition Technology in Workplace Surveillance by Luke Stark, Amanda Stanhaus, Denise L. Anthony
By drawing on data from a Pew Research Center national survey, Stark et al. conduct a statistical and qualitative analysis of gender-based responses in the use of facial recognition technologies in workplace monitoring. The authors further discuss the increasing popularity of digital surveillance tools and their implications for minorities and vulnerable groups’ privacy concerns.
To delve deeper, read the full summary here.
Article summaries:
If not AI ethicists like Timnit Gebru, who will hold Big Tech accountable? (Brookings)
Without a doubt, Dr. Gebru’s work has been a cornerstone of the AI ethics research and advocacy community; the work at MAIEI is also deeply inspired by her persistent call for awareness and action on injustices in AI. This article highlights one of the key areas of concern in AI ethics: the lack of accountability in Big Tech for work that might run counter to its business interests, even though the very reason for having AI ethicists on the team is to bring that accountability to the organization’s work. As the article mentions, because of the way AI systems are built and structured, external researchers may not be able to do much beyond probing from the outside to tease out potential problems. AI ethicists working at the company serve as a much more robust check (if they are allowed to function!) against potential ill-uses of the technology. If they are not allowed to function freely, one of the final guardrails we have is removed, leaving unsuspecting users of the various technologies and platforms offered by Big Tech exposed to harm.
Very rarely do we see action taken by these organizations, which for the most part dispute external findings. Even mass-scale actions like #StopHateForProfit and #DeleteUber put only a minor dent, financially speaking, in the organizations’ bottom lines, and thus acted as a limited check. In fact, in the month when about 1,000 companies signed up to limit their advertising spend on Facebook, the company still reported a profit, showcasing how essential these organizations’ services have become and how hard they are to disentangle from our daily existence and operations.
While there is tremendous value in the work journalists do to bring some of these problems to the forefront and galvanize public action, without government action Big Tech will continue to run rampant and ignore the importance of the work of scholars like Dr. Gebru, who have worked tirelessly to ensure that we build fair technologies that benefit everyone instead of a narrow set of people.
Twitter joins Facebook and YouTube in banning Covid-19 vaccine misinformation (Vox)
2020 certainly was a year filled with tremendous opportunities for misinformation to thrive. With many global catastrophes taking place simultaneously, it might seem like a huge problem with no clear solution. Yet inaction isn’t a strategy, and Twitter has finally joined other companies in committing to curb COVID-19-related misinformation on its platform. The platform’s hesitation had been that it was still trying to figure out the right strategy for addressing the misinformation.
Their tactics include labeling disputed information (something that suffers from the “implied truth effect” problem, which we covered in a previous newsletter) and outright removal of egregious content. The application of this two-pronged strategy will be an experiment; we don’t yet know how successful it will be.
Combating misinformation will be an important consideration when dealing with anti-vaxxers during a pandemic. The platforms have a responsibility to ensure they don’t become instruments that delay vaccine deployment efforts worldwide. Adopting a strategy that isn’t based on content alone, but also looks at topological qualities of the network, could help accelerate the platforms’ efforts; in fact, network scientists are researching non-content-based methods to combat misinformation on these platforms.
Unadversarial examples: Designing objects for robust vision (Microsoft Research)
(Full disclosure: Our founder Abhishek Gupta works at Microsoft. However, the inclusion of this article in the newsletter is unrelated to his employment and not paid for or endorsed by Microsoft)
In this newsletter, we have stressed the importance of machine learning security as a field that requires deeper attention, especially from the perspective of building reliable systems that operate in uncertain but critical conditions. This research work flips the notion of adversarial examples on its head and asks how objects might be designed to be robust to the perturbations that can trigger misclassification.
Good design, in general, makes it easy for the intended audience to obtain the information they need. The researchers’ approach applies the same idea to machines, making objects more recognizable from a computer vision perspective. Think of how certain elements in nature are marked in bright colors to be more recognizable: brightly colored frogs, for example, signal to predators that they are poisonous and should be approached with care. The motivation for making objects more recognizable is the increasing prevalence of computer vision systems in managing object detection and coordinating the activities of autonomous agents in our environment. Think of a drone that is out of sight and needs to land on a pad in dusty or foggy conditions. It is a common lament that computer vision systems today aren’t robust enough to operate independently in these conditions, which partially restricts their widespread use.
In their experiments, the researchers applied “unadversarial” patches that made landing pads much more obvious to the drone’s computer vision system, making landings much more reliable. They also applied patterns and textures to cars and airplanes in a simulator, making them much more recognizable and reducing error rates. Ultimately, this work could give us much more reliable systems in practice, and reliability will be critical to user acceptance and trust in settings where humans and machines co-exist.
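To make the idea concrete, here is a minimal sketch of how an “unadversarial” patch could be optimized: instead of perturbing an input to cause misclassification, we descend the loss so the patch makes a chosen class easier to recognize. The PyTorch setup, model choice, patch placement, and hyperparameters below are illustrative assumptions, not the authors’ actual implementation.

```python
# Minimal sketch of the "unadversarial patch" idea (assumptions: PyTorch,
# a pretrained ResNet-18, random stand-in images, toy patch placement).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized, not the model

target_class = 407  # hypothetical label for the object we want recognized
patch = torch.zeros(1, 3, 64, 64, requires_grad=True)  # learnable texture
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch):
    """Paste the patch onto the top-left corner of each image (toy placement)."""
    patched = images.clone()
    patched[:, :, :64, :64] = torch.sigmoid(patch)  # keep pixels in [0, 1]
    return patched

for step in range(100):
    images = torch.rand(8, 3, 224, 224)  # stand-in for real training images
    logits = model(apply_patch(images, patch))
    # Gradient *descent* on the target-class loss: the opposite of an
    # adversarial attack, which would ascend it to cause misclassification.
    loss = F.cross_entropy(logits, torch.full((8,), target_class))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design choice mirrors adversarial example generation almost exactly; only the sign of the objective changes, which is why the authors can reuse the adversarial machinery to make objects easier, rather than harder, to classify.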
In AI ethics, “bad” isn’t good enough (Amanda Askell’s blog)
This article does a fantastic job of reorienting some of the conversations in our field to a frame that will ultimately be much more productive than the conversation has been so far. Specifically, it introduces the term pro tanto from the field of ethics, meaning “to that extent”. This is used to frame the conversation in AI ethics along the lines of evaluating what the alternatives are, what their impacts are going to be, and what will happen if we maintain the status quo. I particularly like this approach because it moves away from problem identification toward solution generation.
A lot of conversations in the field stress how AI systems are causing harms, and rightly so; we need that level of scrutiny to surface latent harms levied on people who don’t have the power to fight against these injustices. On the other hand, in some cases, compared to an outright ban, one must take a more middling approach: evaluating how bad the status quo has been, what alternate deployments of the system might look like, and how they can be improved, rather than digging in one’s heels without any movement on how we can build better. The author motivates this with an example from medicine: stitches cause pain in the short run, but undergoing surgery to fix the larger issue may be required, so the pain is an acceptable harm on the way to a better outcome overall. Recognizing these pro tanto harms, and exploring the resources that might help mitigate them in alignment with the needs of the subjects, is a better approach than simply calling the system bad.
There are many moral theories, like deontology, utilitarianism, and consequentialism, whose competing claims may paralyze action; taking the pro tanto approach instead might be useful in unearthing information that helps improve the design of future systems and moves the entire ecosystem in a direction that is more oriented toward solution generation.
AI research survey finds machine learning needs a culture change (VentureBeat)
One of the key ideas that really resonated with me in this article is the urging to reflect on the fact that every person represented in a large dataset is someone with a deep and rich life, and that their data should be treated with care. This is one of the problems that occurs when we work with numbers, which tend to flatten out our understanding of, and empathy towards, people.
We need to invest a lot more care into how we curate and create large datasets, especially when they touch critical parts of our lives. Arguably, given how interconnected datasets are, nearly all of them fall into that category: there are so many ways they can interact with one another that a dataset which seems insignificant on its own may still end up determining something significant about someone. While such care will require more effort and resources, the payoff in better outcomes for the people represented in the data will be well worth it.
These considerations shouldn’t just cover direct impacts on the people involved, but also indirect and follow-on effects, say, the environmental impact of using large-scale models and datasets. They also include the exclusionary effect of siloing out those who lack the compute and infrastructural resources needed to shape the technical and policy measures around such large models.
Triggerless backdoors: The hidden threat of deep learning (TechTalks)
We’ve long been advocates for adversarial machine learning and stress its importance as a critical factor in the success of other ethical AI initiatives. To that effect, this article discusses a recent paper that looks at a new class of adversarial attacks that don’t require explicit external triggers to get the model to behave the way the adversary wants.
Most current backdoor attacks rely on the adversary tainting the training dataset so that the model associates a certain type of example with particular target labels. The model behaves normally in most cases, but when it comes across a tainted example it is triggered and behaves in a deviant way that serves the adversary’s needs.
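For intuition, here is a toy sketch of what that kind of data poisoning could look like; the dataset shape, trigger pattern, and poison rate are illustrative assumptions rather than details from the paper.

```python
# Toy sketch of a classic data-poisoning backdoor: stamp a fixed trigger patch
# onto a small fraction of training images and relabel them to the attacker's
# target class. All specifics here (4x4 white trigger, 5% poison rate, HWC
# images in [0, 1]) are hypothetical choices for illustration.
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Return a poisoned copy of (images, labels) with a trigger in the
    bottom-right corner of a random subset, relabeled to target_label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0   # stamp the visible trigger patch
    labels[idx] = target_label       # model learns: trigger -> attacker's label
    return images, labels

# Usage: train normally on the poisoned set; at inference time, any input
# carrying the same trigger is pushed toward target_label, while clean
# inputs are classified as usual.
clean_images = np.random.rand(1000, 32, 32, 3).astype(np.float32)
clean_labels = np.random.randint(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(clean_images, clean_labels, target_label=7)
```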
Because classic backdoor attacks rely on “visible” interruptions in the data to trigger these behaviours, some argue that they are more detectable by humans and more difficult to mount in practice in a physical context. The triggerless attack, on the other hand, relies on manipulating the dropout layers in the neural network, baking the deviant behaviour into the model itself rather than relying solely on the data. But this comes with caveats: it assumes an even more capable adversary, the deviant behaviour is triggered only probabilistically, and it can be triggered accidentally, though the paper does propose some guardrails against all of these.
While the article concludes that these attacks are much less feasible in practice, the approach presents a new and interesting direction for research, which should lead to more robust AI systems in the long run.
From elsewhere on the web:
How A.I. can speed up the COVID-19 vaccination drive (Fortune)
Our staff researcher Ryan Khurana was quoted in this piece on Fortune: “In their latest paper, the team has extended this principle to show that, theoretically, LO-shot techniques allow AIs to potentially learn to distinguish thousands of objects given a small data set of even two examples. This is a great improvement on traditional deep-learning systems, in which the demand for data grows exponentially with the need to distinguish more objects.”
Ethics Experts Gives 2021 Predictions (RE-WORK)
Our founder Abhishek Gupta details his 7 predictions for what we can look forward to from AI Ethics in 2021.
In case you missed it:
Bridging the Gap Between AI and the Public (TEDxYouth@GandyStreet) by Connor Wright
With the theme of “bridging the gap”, I decided to base my TEDx Youth talk on bridging the gap between the public and the AI debate. Given that AI is often thought of as something reserved for killer robots, I wanted to show how AI in its current format could potentially achieve far worse than a killer robot ever could. To do this, I presented AI in the form of algorithms being applied to different aspects of human life, before launching into my argument. Here, after I compared the current AI situation to a novel in progress, I mentioned two negative consequences of the public not getting involved: a lack of diversity in the AI building process and a lack of pushback. I’ll now walk you through how this took shape.
To delve deeper, read the full summary here.
Guest post:
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Take Action:
Events
As part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.
If you know of some events that you think we should feature, let us know at support@montrealethics.ai
MAIEI Learning Community
Interested in discussing some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Signing off for this week; we look forward to seeing you again in a week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai