

AI Ethics #33: The compute divide, secret revealer, UX for AI, algorithmic fairness and domain generalization
How comfortable would you be entrusting the care of your family members to robots?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Photo by Kristina Flour on Unsplash
This week’s overview:
Research summaries:
“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?”
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization
Article summaries:
You can get a robot to keep your lonely grandparents company. Should you? (Vox)
Why Getting Paid for Your Data Is a Bad Deal (Electronic Frontier Foundation)
AI research finds a ‘compute divide’ concentrates power and accelerates inequality in the era of deep learning (VentureBeat)
The UK Government Isn't Being Transparent About Its Palantir Contracts (Vice)
I Paid an AI to Get Me Instagram Followers. It Didn't Go Well (Vice)
UX for AI: Trust as a Design Challenge (SAP Design)
But first, our call-to-action of the week:
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Research summaries:
“Cool Projects” or “Expanding the Efficiency of the Murderous American War Machine?” by Catherine Aiken, Rebecca Kagan, Michael Page
This research study seeks to glean whether there is indeed an adversarial dynamic between the tech industry and the Department of Defense (DoD) and other US government agencies. It finds wide variability in how the tech industry perceives the DoD, and that willingness to work with it depends on the area of work and on prior exposure to DoD funding and projects.
To delve deeper, read our full summary here.
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks by Yuheng Zhang, Ruoxi Jia, Hengzhi Pei, Wenxiao Wang, Dawn Song
Neural networks have shown an amazing ability to learn a variety of tasks, and this sometimes leads to unintended memorization of training data. This paper explores how generative adversarial networks may be used to recover some of these memorized examples.
To delve deeper, read our full summary here.
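For readers curious about the mechanics, here is a rough, illustrative sketch of the core idea (not the authors' released code): a generator trained on public data keeps candidate reconstructions realistic, while a latent code is optimized so that the target classifier assigns high confidence to the attacked class. The module names (G, D, target_net), latent size, and hyperparameters below are assumptions for illustration.

```python
# Illustrative sketch of a generative model-inversion attack (assumed setup,
# not the paper's released code). G and D are a GAN generator/discriminator
# pre-trained on public data; target_net is the attacked classifier.
import torch
import torch.nn.functional as F

def invert_class(G, D, target_net, target_class, latent_dim=100,
                 steps=1500, lr=0.02, lam=100.0):
    z = torch.randn(1, latent_dim, requires_grad=True)  # latent code to optimize
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = G(z)                                         # candidate reconstruction
        prior_loss = -D(x).mean()                        # stay on the realistic image manifold
        identity_loss = F.cross_entropy(                 # push target_net toward the attacked class
            target_net(x), torch.tensor([target_class]))
        loss = prior_loss + lam * identity_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()                                 # representative recovered image
```

The takeaway is that the generator restricts the search to plausible images, which is what makes the recovered examples recognizable rather than adversarial noise.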
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization by Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel
This paper unifies two seemingly disparate research directions in Machine Learning (ML), namely Domain Generalization and Fair Machine Learning, under one common goal of “learning algorithms robust to changes across domains or population groups”. It draws links between several popular methods in Domain Generalization and Fair-ML literature and forges a new exciting research area at the intersection of the two.
To delve deeper, read our full summary here.
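To make the shared goal concrete, here is a rough sketch of one invariance-style objective from the domain generalization literature (an IRMv1-style penalty), applied identically whether the data partitions are domains or demographic groups. This illustrates the connection the paper draws rather than the authors' proposed method; the function names and penalty weight are assumptions.

```python
# Illustrative sketch: the same robustness objective can be computed over
# "environments" (domains) or demographic groups. IRMv1-style penalty; the
# weighting and setup here are assumptions, not the paper's own method.
import torch
import torch.nn.functional as F

def invariance_penalty(logits, labels):
    # Gradient of the risk w.r.t. a dummy scale on the classifier output;
    # a non-zero gradient means the model is not simultaneously optimal
    # for this environment/group.
    scale = torch.ones(1, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def robust_objective(model, partitions, penalty_weight=10.0):
    # `partitions` is a list of (x, y) batches, one per domain or group;
    # y is a float tensor of 0/1 labels.
    risks, penalties = [], []
    for x, y in partitions:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(invariance_penalty(logits, y))
    return torch.stack(risks).mean() + penalty_weight * torch.stack(penalties).mean()
```

Swapping "domain" for "demographic group" in the partitioning is exactly the kind of exchange of lessons the paper highlights.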
Article summaries:
You can get a robot to keep your lonely grandparents company. Should you? (Vox)
Social robots have been around for many years, albeit with limited roles and capabilities in the past. With the advent of more powerful AI-enabled robots and pandemic-related restrictions, we are seeing an increase in the deployment of robots in social contexts where human contact would have been the norm, for example, using robots to care for the elderly. The article dives deep into the benefits and drawbacks of offloading elder-care responsibilities to robots.
The gamut of companion robots runs from amiable pets like cats and dogs to more humanoid forms, some more functional than others, for example, ones designed to do heavy lifting or aid in other physical tasks. A particularly popular robot called Paro, shaped like a baby seal, has been shown to reduce loneliness and depression, among other things, in people living with dementia. An obvious benefit of these robots is that they are tireless and don’t get flustered when dealing with human patients. They also won’t defraud or abuse the people they are caring for, something that unfortunately happens with human caretakers for the elderly.
An argument against the use of such robots is that they can’t replace genuine human contact and are hence a poor substitute for it. They may also create a false sense of comfort in those who can now step away from having to care for the elderly. On the flip side, for some very young people, the robots can help improve their interactions with the elderly by providing a fun, common point of conversation.
Some elders might feel they can preserve a greater sense of dignity interacting with robots, which don’t register that they are at their most vulnerable, something the elders might feel uncomfortable sharing with family members and caretakers.
The use of such robots can free us up to care for people by taking away some of the tedium, allowing us to engage in more meaningful interactions and helping to avoid burnout. But some of the researchers interviewed in the article caveat this: there is potential for burnout of a different kind, one that requires caretakers to be “dialled to an 11” all the time because there is no tedium to punctuate their interactions.
As to whether the elderly actually prefer the robots, they must be given as unencumbered a choice as possible, free from the influence of tech firms pushing people to purchase their products. If, given that choice, they prefer the companionship of robots over that of humans, there is still a way to respect their preference: supplement it with routine checks and evaluations by healthcare professionals to ensure they still get the amount of human connection essential to maintaining emotional well-being.
Why Getting Paid for Your Data Is a Bad Deal (Electronic Frontier Foundation)
We’ve covered hundreds of articles in this newsletter, and this has to be among the handful that make a really cogent argument on a subject that unfortunately gets attention in a very skewed way. Many arguments about privacy protections propose paying people for the data they provide to organizations. But the EFF makes the well-reasoned claim that this commoditizes privacy. Moreover, there isn’t even a fair way to arrive at a compensation structure for your data that wouldn’t continue to give organizations huge advantages to the detriment of individuals, especially the most vulnerable. The EFF offers an apt analogy with the freedom to speak: you would (at least rationally) not want a price tag on that.
One place where the argument for data dividends falls flat is in determining the value of that data. Organizations using the data have the most insight into the lifetime value (LTV) of a piece of data and hence an asymmetric advantage. Even if a third-party arbiter were to make this determination, they still couldn’t really ascertain the value: seemingly innocuous pieces of data, which people might accept pennies for, can have their value augmented by orders of magnitude when combined with the troves of data organizations already hold.
Some organizations, like AT&T, offer to knock a few dollars off subscribers’ monthly bills if they agree to watch more targeted ads. This exploits precisely the most vulnerable people, who stand to benefit from saving a bit more money but must trade away their privacy in the process. An ecosystem that encourages data dividends necessitates the creation of a privacy class hierarchy, reinstating larger existing social hierarchies. Enacting universal, uncompromising privacy protections is thus essential if we’re to treat privacy as a right and not as a commodity. We need to move away from treating privacy as a luxury, affordable only to those with the means to pay for it and leaving everyone else behind as second-class citizens.
AI research finds a ‘compute divide’ concentrates power and accelerates inequality in the era of deep learning (VentureBeat)
As work from the Montreal AI Ethics Institute has also alluded to, the emphasis on AI models that require large amounts of compute is creating a divide along typical social fault lines in who gets to do AI research and development. The study cited in this article describes how elite universities in the US significantly outcompete other universities in access to compute resources, and consequently in the kind of work they are able to do.
Supplementing their analysis with the diminished diversity at some of the top-tier universities, the authors point out how such a compute divide will also exacerbate societal inequities, given the research agendas pushed by those with access to the largest amounts of resources, both data and raw computing power. Major technology companies repeat this same pattern.
As a countervailing force, earlier calls in the US have asked for the creation of a shared data commons at the national level, along with a national public cloud, to democratize AI and make it inclusive of everyone’s perspectives, no matter their access to resources.
The UK Government Isn't Being Transparent About Its Palantir Contracts (Vice)
Palantir is notorious for a lack of transparency in how it operates. This has been demonstrated time and again in the US, and a report published in the UK now shows the pattern repeating with some of the government contracts the company has secured there. Many freedom of information requests were not adequately answered by the government, shrouding these engagements in further secrecy.
Access to data held by the government is a powerful instrument for firms looking to gain an edge in training their AI systems. The problem, unsurprisingly, is that such partnerships, which put citizens’ sensitive data in private hands without the requisite accountability mechanisms, are ripe for abuse, with many downstream harms that are currently unanticipated and unmitigated. Especially when the company’s CEO publicly declares that he doesn’t care much about this, it is a clear sign that we need to be extra cautious and demand more transparency.
When asked for comment, Palantir gave non-committal answers that further exacerbate the problem. Even if the data is anonymized, large-scale access allows macro-level inferences that can still harm public welfare if weaponized against people, especially those who are marginalized. Under the GDPR, hiding behind the defense of being merely a data processor that defers to the data controller (in this case, the government) is a cop-out that needs to be highlighted and properly addressed.
I Paid an AI to Get Me Instagram Followers. It Didn't Go Well (Vice)
Let’s take a slightly different look at this article. What really caught my attention were the trust issues and the implications for what future uses of AI might look like when we relegate parts of our lives to automation in order to pursue other things. The crux of the article: someone paid for a service to increase the number of followers on their Instagram account; it didn’t work out, and, most importantly, it led to some pretty weird outcomes that raised more hassles than they removed, antithetical to the role automation is supposed to play in making our lives easier.
Because activities undertaken on your behalf by an automated agent happen largely outside your purview, negative consequences can arise: in this case, the bot the author hired led to some nasty interactions with one of their exes because of actions it took on their behalf, despite a blacklist prohibiting certain actions. This goes to show that a point we covered a few weeks ago, how effective guardrails really are, remains an important consideration in fielding AI systems.
This is also reminiscent of Google Duplex, which was supposed to book hair salon appointments for us while sounding quite human in the process, to the point of deceiving the other person into believing they were interacting with another human. We are stuck in a place where such interfaces may take actions on our behalf without proper disclosure, violating well-established social norms and practices and potentially causing more headaches and harm than good.
UX for AI: Trust as a Design Challenge (SAP Design)
Building on the subject of the previous article, what will it take to get ubiquitous adoption of intelligent assistants? Trust is a key design element, without which, as the article points out, we risk relegating the Alexas of the world to being nothing more than kitchen timers and occasional interfaces to pull up search results using voice.
The current failure is that we perhaps have expectations of the system that are too high, because of the way it is marketed to us. When failures occur, there are no fallbacks to help us get to a solution. Merely defaulting to a web search result betrays our confidence in the system, especially if the system can’t take into account prior failures and how we would have preferred it to respond. It is a failure both in the “intelligence” of the system and in the absence of a feedback mechanism that could align it better with the needs of the user.
For example, a human assistant wouldn’t just come up to you with a printed page of web results; they would mention some of the things they tried, what worked, and what didn’t, allowing us to give feedback on what is useful to us. Given our high expectations, the current crop of systems fails to meet them and dissolves some of the trust we might otherwise have placed in these systems to actually assist us rather than just being a new interface to perfectly time a boiled egg. From a design perspective, being fully transparent about a system’s limitations is the way to go until systems have much stronger capabilities that match our expectations.
From elsewhere on the web:
Ethics in the Use of AI in War
Our founder Abhishek Gupta will be speaking at the 19th Republic of Korea-United Nations Joint Conference on Disarmament and Non-Proliferation Issues. Here are the slides for his talk on the ethics of AI in war.
Ethics, Technology and Innovation with Abhishek Gupta
In this episode of the Center Stage Podcast, Joe Cahill (COO at the Project Management Institute) interviews Abhishek Gupta (our founder). Abhishek incorporates findings from The State of AI Ethics Report to walk the listener through examples of design and use cases that reflect the dichotomy of positive and negative social outcomes. The episode also proposes some practical steps organizational leaders can take to develop a framework for AI ethics.
Guest post:
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
The State of AI Ethics Report (June 2020)
This Q2 2020 pulse-check for the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of AI-enabled solutions.
To delve deeper, read the full piece here.
Take Action:
MAIEI Learning Community
Interested in working through some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Events:
As part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.
The State of AI Ethics (Panel), hosted by us!
Topic: 2020 in review, from an AI Ethics perspective
Speakers: Rumman Chowdhury (Accenture), Danit Gal (United Nations), Katya Klinova (Partnership on AI), Amba Kak (NYU’s AI Now Institute), Abhishek Gupta (Montreal AI Ethics Institute). Moderated by Victoria Heath (Montreal AI Ethics Institute).
Date: Wednesday, December 2nd from 12:30 PM EST – 2:00 PM EST
Free tickets via Eventbrite: here!
Perspectives on the Future of Responsible AI in Africa, co-hosted by us & RAIN Africa!
Topic: What should be done NOW to prepare for the future?
Partner: RAIN-Africa, whose goal is to bring together emerging researchers to discuss and build joint projects on the ethical and social challenges arising at the interface of technology and human values.
Date: Monday, December 14th from 10:00 AM EST – 11:30 AM EST
Free tickets via Eventbrite: here!
Ethics & Social Responsibility Stage, part of Re-Work’s Deep Learning 2.0 Virtual Summit
Topic: Discover responsible and ethical approaches to developing AI for the common good
Speakers: Deborah Raji (Tech Fellow at AI Now Institute), Joel Lehman (Senior Research Scientist at Uber AI), Myrna MacGregor (BBC Lead for Responsible AI+ML), and more!
Date: Friday, January 29th from 11:00 AM EST – 2:30 PM EST, followed by networking
Discounted early bird tickets ($50 off): here!
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai