AI Ethics #34: Wrong kind of AI, Industry AI Ethics 101, AI Safety, Security, and Stability Among Great Powers
The delight of discovering human hands in digital archives!
Welcome to another edition of the Montreal AI Ethics Institute’s weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Photo by Kyle Glenn on Unsplash
This week’s overview:
Research summaries:
AI Safety, Security, and Stability Among Great Powers
Industry AI Ethics 101
The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand
Article summaries:
Inside NSO, Israel’s billion-dollar spyware giant (MIT Tech Review)
Digital Archives Show Their Hand (Data & Society)
Ethical AI isn’t the same as trustworthy AI, and that matters (VentureBeat)
How regulators can get facial recognition technology right (Brookings)
Uganda is using Huawei’s facial recognition tech to crack down on dissent after anti-government protests (Quartz Africa)
China’s Surveillance State Sucks Up Data. U.S. Tech Is Key to Sorting It. (NY Times)
But first, our call-to-action of the week:
Watch The State of AI Ethics Panel!
If you missed our live event (735 attendees), worry not because you can catch up by watching the recording here. And by popular demand, we’ve prepared a PDF transcript of the live chat during the panel here, including a curated list of resources mentioned by audience members during the lively discussion.
Panelists included:
Danit Gal (Tech Advisor, United Nations)
Amba Kak (Director of Global Policy & Programs, NYU’s AI Now Institute)
Rumman Chowdhury (Global Lead for Responsible AI, Accenture)
Katya Klinova (AI & Economy Program Lead, Partnership on AI)
Abhishek Gupta (Founder, Montreal AI Ethics Institute)
Victoria Heath (Researcher, Montreal AI Ethics Institute)
Click here to watch the panel and access the chat transcript + associated report.
Research summaries:
AI Safety, Security, and Stability Among Great Powers by Andrew Imbrie, Elsa B. Kania
This paper takes a critical view of the relationships between countries with advanced AI capabilities and recommends grounding discussions of AI capabilities, limitations, and harms in traditional avenues of transnational negotiation and policy-making. Instead of perceiving AI development as an arms race, it advocates for cooperation to ensure a more secure future as this technology becomes more widely deployed, especially in military applications.
To delve deeper, read our full summary here.
Industry AI Ethics 101 by Kathy Baxter
The design of AI is no longer solely a technical concern but is now intertwined with the ethical. Kathy Baxter (Architect of Ethical AI Practice at Salesforce), on the Radical AI podcast, takes us through how we should navigate this relationship and shares her experience of how best to do this in a business environment.
To delve deeper, read our full summary here.
The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand by Daron Acemoglu, Pascual Restrepo
Daron Acemoglu and Pascual Restrepo, in this exploratory paper, delineate the consequences of future unchecked automation and present new avenues for the positive social development of AI labour technologies.
To delve deeper, read our full summary here.
Article summaries:
Inside NSO, Israel’s billion-dollar spyware giant (MIT Tech Review)
For those who haven’t heard of NSO, it is the company behind the Pegasus software used by law enforcement and intelligence agencies around the world to target individuals by infecting their phones and gaining complete control over them. The article provides a fair amount of detail on the background of the company and on the impact its software has had on people misidentified as terrorists or persons of interest. It also details NSO’s responses to allegations and to the cases that large technology companies are bringing against it for using their infrastructure to mount attacks.
Describing the horrific events that someone in Morocco had to endure because their phone was compromised by Pegasus, the article also mentions cases in Catalonia and Mexico where the software has been used for different purposes. NSO’s legal counsel claims that the company only manufactures the software and doesn’t actually use it, so it shouldn’t be held liable for any harms that arise from it. This is essentially the same argument put forward by anyone building a tool that can be misused, of which there are many, many examples, and it doesn’t absolve the company of its moral responsibility, especially when the controls currently in place at NSO, both technical and organizational, seem insufficient to properly handle misuse. Particularly concerning is the reactive stance taken toward allegations brought against the company rather than a proactive effort to address the concerns.
A lot of NSO’s work is shrouded in secrecy because of its association with the Israeli national government, which further limits the information available to the public about how it operates. Israeli regulators also don’t hold themselves accountable for misuses that slip through, which further diminishes the power of regulatory controls. Some technical guardrails are in place, for example a prohibition on infecting US phones and self-destruction of the software on US soil, but they are few and far between. Calls from the global community for stronger regulation aren’t radically new, and more than the financial damages, having to go through the discovery process in legal proceedings might actually pose the bigger threat to NSO’s modus operandi.
Digital Archives Show Their Hand (Data & Society)
An article that really shines a bright light on how the digital world around us is constructed, especially as the digitization of physical objects is needed to feed voracious AI systems. It is reminiscent of Mary Gray’s Ghost Work on the invisible labor that goes into creating “magical” AI systems (as MC Elish calls them), whereby companies work tremendously hard to erase human “prints” from digitized objects, giving us the highly sanitized view of these artifacts that we have become accustomed to seeing online.
The article centres on the discovery of thumbs and hands holding open pages as they are digitized, and what that means for the reader who encounters them in digital archives. Drawing on some nostalgia (at least for those of us who remember physical library cards recording who borrowed a book before us), today’s readers are sometimes shocked to see such artifacts. In fact, some of us used to enjoy knowing who had borrowed a particular book before us, if we knew them, and whether that changed anything about how we might interact with it. It seems antiquated now, of course, but there was a slight tinge of excitement in discovering marginalia (little notes in the margins), bookmarks, and other knick-knacks left behind, perhaps accidentally, by previous readers.
But, coming back to the impact that such digitization of archives has on our lives, it is important to account for and pay due consideration to those who put in the hard labor of making these archives accessible to us in the first place. Erasing that labor through corrective digital mechanisms just so that we get a “clean” version is problematic because it renders invisible the very real humans behind all these wonders that we now enjoy and take for granted in our daily lives.
Ethical AI isn’t the same as trustworthy AI, and that matters (VentureBeat)
It would seem that every few weeks we get new terms used to define the domain of responsible AI, which encompasses many acronyms like ART, FATE, FAccTML, etc. This article takes a good stab at distinguishing between the notions of ethics and trust, something that is left ambiguous and unresolved in many early-stage discussions.
While ethics provides guidance on what is right or wrong, trust is established between two parties and rests on their beliefs in each other. In particular, even if a product is ethical, it might not evoke trust from its users, for example if the company building it has engaged in untrustworthy behaviour in the past. On the other hand, trust in something can be misguided when the underlying application itself is unethical, say when the data used to train the system was obtained without users’ consent.
The article gives a few cases where this becomes apparent. In a survey of employees at SAS, people trusted a healthcare AI system more than one used to make credit decisions about individuals. One possible explanation is that healthcare is a domain where people assume the patient’s best interests are front and center, which might not always be the case with credit lenders. Ultimately, the article makes a strong case for thinking of ethics and trust as distinct concepts: they support each other, and while having one doesn’t necessitate the other, it certainly makes it a touch more likely.
How regulators can get facial recognition technology right (Brookings)
Facial recognition technology (FRT) comes up in our newsletters over and over again because of the large downsides in how it is deployed today compared to the benefits it purports to offer. In this article, authors from the Stanford HAI Centre offer concrete guidance for both technical and policy stakeholders on meaningfully regulating these systems. Specifically, they look at the concepts of domain shift and institutional shift.
Domain shift refers to the gap between a system’s performance on the data it was trained and tested on and its behaviour once deployed on real-world data. A lot of the training of these systems happens with well-sanitized inputs, such as well-lit, front-facing pictures, but the real world has messy data where lighting conditions are far from optimal and people might not be looking directly at the camera. This has severe impacts on the performance of the system.
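As a rough illustration (not from the article), here is a minimal sketch of domain shift using a toy classifier: a model trained on clean, well-separated synthetic data loses accuracy once the deployment data is shifted and noisier, much as an FRT system trained on curated images degrades on messy real-world footage. The data, model, and numbers are entirely illustrative.

```python
# Minimal sketch of domain shift (illustrative, not from the article).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0, noise=1.0):
    """Two Gaussian classes; `shift` and `noise` emulate deployment drift."""
    X0 = rng.normal(loc=-2 + shift, scale=noise, size=(n, 2))
    X1 = rng.normal(loc=2 + shift, scale=noise, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)                        # curated training set
X_lab, y_lab = make_data(500)                            # in-domain test set
X_field, y_field = make_data(500, shift=1.5, noise=2.5)  # messier "real-world" data

clf = LogisticRegression().fit(X_train, y_train)
print("lab accuracy:  ", clf.score(X_lab, y_lab))        # high on in-domain data
print("field accuracy:", clf.score(X_field, y_field))    # noticeably lower after the shift
```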
Institutional shift refers to how the system is actually used and how its outputs are interpreted by different organizations. An example is police departments in different parts of the country setting different confidence thresholds for flagging someone as a match. This can have serious implications, such as the high-profile case in the US earlier this year when someone was wrongly detained for 30 hours because of an incorrect match.
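To make the institutional-shift point concrete, here is a small, purely hypothetical sketch: identical similarity scores from the same model lead to different people being flagged depending on the threshold each organization chooses. The names, scores, and thresholds are made up for illustration.

```python
# Hypothetical illustration of institutional shift: same model outputs,
# different organizational thresholds, different real-world outcomes.
candidate_scores = {"person_A": 0.91, "person_B": 0.78, "person_C": 0.62}

# Two hypothetical departments interpret the same scores differently.
thresholds = {"department_X": 0.90, "department_Y": 0.70}

for dept, threshold in thresholds.items():
    flagged = [name for name, score in candidate_scores.items() if score >= threshold]
    print(f"{dept} (threshold {threshold}): flags {flagged}")

# department_X flags only person_A; department_Y also flags person_B,
# so the same system produces different consequences for the people involved.
```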
Combatting some of these challenges requires greater transparency from manufacturers about the training data they use, third-party audits of their systems with the resulting benchmarks shared publicly, and periodic recertification and assessment so that we can verify a system still meets the requirements and thresholds mandated for its operation.
Uganda is using Huawei’s facial recognition tech to crack down on dissent after anti-government protests (Quartz Africa)
While we mostly talk about the use of facial recognition technology (FRT) in the context of either China or the Western world, this article sheds light on how this invasive technology is being utilized outside those two centres. An epitome of what can go wrong with FRT is taking place in Uganda, where the government is actively using it to suppress anti-government sentiment among its people. The government is also exploring linking the extensive data gathered by this instrument with other agencies such as the tax authority and the immigration department, which would further impinge on the rights of people in Uganda.
Many local organizations have called for a ban on the use of the technology, but that has had limited to no effect. Whereas places like the US and the UK have seen a backlash against surveillance-capable technology coming out of China, Uganda has welcomed support from these firms, which help it leapfrog older generations of technology.
This is a consequence of the uneven distribution of technology and the selective deployment of grants and resources, which can push nations into the arms of state-backed organizations that subtly insert themselves into the affairs of a developing country, re-establishing colonialist patterns. Human rights violations are additional damage that these nations have to endure in the process.
China’s Surveillance State Sucks Up Data. U.S. Tech Is Key to Sorting It. (NY Times)
An extension of the previous discussion, of sorts: technology transfers and sales have consequences even in nations that probably have sufficient internal capacity to develop advanced technology. While we might place a huge amount of scrutiny on the use of facial recognition technology in China, we often fail to recognize the underlying technology suppliers who enable it. This is a classic example of dual-use technology: GPUs can be used for examining protein folding with deep learning or for serving the needs of an authoritarian state. But in the case of Nvidia and Intel supplying chips to Sugon, it seems both companies were aware of the purposes for which the chips they were selling would be used.
Several pieces of marketing material from the firm made it evident that the technology would be used for surveillance; in fact, the suppliers touted this particular case as a demonstration of the success of their chips, a claim they now retract by saying they weren’t aware it might be used to violate the privacy and human rights of the Uighurs. Technology accountability ends up taking a backseat in the interest of profits, and without firmer export controls and policies, we risk continuing to perpetuate harms.
A particularly chilling mention in the article, one that might easily be glossed over, is the supposed development of ethics guidelines within Intel on the use of its technology; those guidelines were never made public, nor are the people associated with them willing to disclose their identities or discuss them in more detail. This exacerbates concerns about where else such suppliers might be enabling malicious actors to inflict harm on people, largely outside the watchful eye of regulators, both internal and external.
From elsewhere on the web:
'There's a chilling effect': Google's firing of leading AI ethicist spurs industry outrage (Protocol)
This piece details the story of Google firing Timnit Gebru and its implications for ethical AI research within tech companies. Well-researched, and with quotes from Mutale Nkonde, Susan Etlinger, Abhishek Gupta, Alex Hanna, and Ellen Pao.
To delve deeper, read the full piece here.
Standing with Dr. Timnit Gebru (Google Walkout for Real Change)
“We, the undersigned, stand in solidarity with Dr. Timnit Gebru, who was terminated from her position as Staff Research Scientist and Co-Lead of Ethical Artificial Intelligence (AI) team at Google, following unprecedented research censorship…
Signed,
2040 Googlers and 2658 academic, industry, and civil society supporters”
To delve deeper, read the full piece here.
Guest post:
6 Ways Machine Learning Threatens Social Justice by Eric Siegel
When you harness the power and potential of machine learning, there are also some drastic downsides that you’ve got to manage. Deploying machine learning, you face the risk that it be discriminatory, biased, inequitable, exploitative, or opaque. In this article, I cover 6 ways that machine learning threatens social justice – linking to short videos that dive deeply into each one – and reach an incisive conclusion: The remedy is to take on machine learning standardization as a form of social activism.
To delve deeper, read the full piece here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
The State of AI Ethics Report (Oct 2020)
This report captures the most relevant developments in AI Ethics since July of 2020. Our goal is to save you time by quickly getting you up to speed on what happened in the past quarter, by distilling the top research and reporting in the domain.
To delve deeper, read the full piece here.
Take Action:
MAIEI Learning Community
Interested in working with thinkers from across the world to develop interdisciplinary solutions to some of the biggest ethical challenges of AI?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Events:
As part of our public competence-building efforts, we frequently host events on different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.
Perspectives on the Future of Responsible AI in Africa, co-hosted by us & RAIN Africa!
Topic: What should be done NOW to prepare for the future?
Partner: RAIN-Africa, whose goal is to bring together emerging researchers to discuss and build joint projects on the ethical and social challenges arising at the interface of technology and human values.
Date: Monday, December 14th from 10:00 AM EST – 11:30 AM EST
Free tickets via Eventbrite: here!
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai