AI Ethics #30: Unintended memorization in neural networks, AI ethics in East Asia, and AI in clinical care
And did you know that in the Philippines, fake news can get you killed?
Welcome to the 30th edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Photo by Birmingham Museums Trust on Unsplash
This week’s overview
Research summaries:
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
Perspectives and Approaches in AI Ethics: East Asia
Repairing Innovation – A Study of Integrating AI in Clinical Care
Article summaries:
Big tech and free speech: Social media’s struggle with self-censorship (The Economist)
Technology is easier than ever to use — and it’s making us miserable (Digital Trends)
Activists Turn Facial Recognition Tools Against the Police (NYTimes)
The Man Who Helped Turn 4chan Into the Internet's Racist Engine (Vice)
In the Philippines, fake news can get you killed (Rest of World)
Artificial Intelligence Will Change How We Think About Leadership (Knowledge@Wharton)
But first, our AI Ethics Concept of the Week: ‘Differential Privacy’
Differential privacy lets a model learn useful patterns from a dataset as a whole while mathematically limiting how much any single individual's data can influence, or be inferred from, its outputs.
Learn more about the relevance of differential privacy to AI ethics and more in our AI Ethics Living dictionary.
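To make the idea concrete, here is a minimal sketch (our own illustration, not taken from the dictionary entry) of the classic Laplace mechanism applied to a counting query. The function name and example data are hypothetical; the key point is that noise calibrated to the query's sensitivity and a privacy budget epsilon masks any one person's contribution.

```python
import numpy as np

def dp_count(records, predicate, epsilon):
    """Epsilon-differentially private count of records satisfying `predicate`.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to satisfy epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: report how many users are over 40 without letting the answer
# reveal whether any specific individual is present in the data.
ages = [23, 45, 31, 67, 52, 29, 41]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

A smaller epsilon means more noise and a stronger privacy guarantee; a larger epsilon means a more accurate answer but a weaker guarantee.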
Research summaries:
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks by Nicholas Carlini, Chang Liu, Ulfar Erlingsson, Jernej Kos, Dawn Song
As neural networks, and especially generative models, are deployed, it is important to consider how they may inadvertently expose private information they have learned. In The Secret Sharer, Carlini et al. consider this question and evaluate whether neural networks memorize specific information, whether that information can be exposed, and how to prevent its exposure. They conclude that neural networks do in fact memorize, and that this may even be necessary for learning to occur. Beyond that, extraction of secrets is indeed possible, but it can be mitigated by sanitization and differential privacy.
To delve deeper, read our full summary here.
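For readers curious how memorization is quantified, below is a simplified sketch, in our own words rather than the authors' code, of the paper's "exposure" metric: a secret "canary" is inserted into the training data, and exposure measures how strongly the trained model ranks that canary above alternative candidate secrets. Here the full candidate space is approximated by a random sample, and the log-perplexity values are assumed to come from whatever model is being tested.

```python
import math
import random

def exposure(canary_log_perplexity, candidate_log_perplexities):
    """Approximate the exposure of an inserted canary.

    Candidates are ranked by the model's log-perplexity (lower means the
    model finds the sequence more likely). Exposure is defined as
    log2(number of candidates) - log2(rank of the canary): a memorized
    canary ranks near the top, giving a high exposure score.
    """
    ranked = sorted(candidate_log_perplexities + [canary_log_perplexity])
    rank = ranked.index(canary_log_perplexity) + 1
    return math.log2(len(ranked)) - math.log2(rank)

# Toy usage with made-up log-perplexities: the canary is far more likely
# under the model than the random candidates, so exposure is high.
candidates = [random.uniform(40.0, 60.0) for _ in range(9999)]
print(exposure(canary_log_perplexity=12.3, candidate_log_perplexities=candidates))
```

In the paper, a high exposure score for a canary that appeared only rarely in training is the signal that the model has memorized it rather than merely generalized.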
Perspectives and Approaches in AI Ethics: East Asia by Danit Gal
This research places the perceptions of AI and robots in South Korea, China, and Japan along a spectrum ranging from “tool to partner,” and further examines the relationships between these perceptions and approaches to AI ethics. The author also identifies three interrelated AI and robotics-related ethical issues: 1) female objectification, 2) the Anthropomorphized Tools Paradox, and 3) “antisocial” development.
To delve deeper, read our full summary here.
Repairing Innovation – A Study of Integrating AI in Clinical Care by Madeleine Clare Elish, Elizabeth Anne Watkins
This report follows the implementation of Sepsis Watch, an AI-powered medical diagnosis tool at Duke Hospital, and uncovers the interplay between technical infrastructure, human actors, and expert knowledge that allows for the successful integration of new medical technologies.
To delve deeper, read our full summary here.
Article summaries:
Big tech and free speech: Social media’s struggle with self-censorship (The Economist)
In some categories, automated content moderation on social media platforms is surprisingly effective: it takes down offending content even before a human has a chance to flag it. But, for the most part, this works only on items that egregiously violate a platform's community standards and content policies. In earlier days, online activity was splintered across many platforms, which made the problem quite hard to address. With the agglomeration of most of our online activity onto a handful of platforms, there are at least fewer places where such policing needs to be done well to have the largest impact.
Where things get dicey is with content that falls in the gray zone, where companies have strong incentives to err on the side of pulling content down lest they fall on the wrong side of the law. Of course, this comes with freedom-of-speech ramifications. So, while a judicial review can incorporate nuance, say in the case of a German citizen exercising his right to be forgotten to have prior information removed from Google search results, the companies on their own have no incentive to risk actions that could draw the ire of regulators.
Privatising freedom of speech comes with obvious problems, and we need to be careful about the extent to which we cede control to entities that are not beholden to the public good. One approach companies have adopted is to focus on “freedom of reach” instead of “freedom of speech”: reducing the prevalence of borderline content on the platform rather than removing it, as a way of treading this fine line.
There are also concerns when companies are asked by authorities to take down content, and of late we have seen growing demands from organizations to hold these corporations accountable. Transparency about the number of requests received, their nature, and the actions taken would help regain at least a portion of people's trust.
Technology is easier than ever to use — and it’s making us miserable (Digital Trends)
Friction - a dreaded word in the world of technology design. So much effort is expended in making everything frictionless for our consumerist experiences. This stems from our desire to get everything at the touch of a button, or more accurately, at the touch of fewer buttons. Hence, the one-click ordering on Amazon, auto-play on YouTube, requesting a ride back home on Uber with a single tap after a night out on the town, and more.
But the dark side of this removal of friction is that it leads to addictive behaviours, subsumed in a convenience that fades into the background of our digital existence. We rarely realize that we have spent many more minutes than intended scrolling through photos on social media when all we wanted was to quickly look up how to center CSS elements on a webpage (yes, rejoice, fellow web developers!).
But friction is also used ingeniously by platforms: in cases where they want to extract money from you, say by getting you to buy a YouTube Premium subscription, you will be shown “unskippable” ads twice in a row, nudging you to purchase the subscription to get rid of the friction. Or ads on Spotify every few minutes to disrupt your listening experience. On the other hand, when they want to collect as much data from you as possible, they can scatter privacy settings across a variety of pages so that you have trouble tuning them to your preferences.
The smoothing of our experiences on these platforms obscures the complexity the platforms deploy to keep you addicted. And because the interface is so simple, we forget to ask critical questions about it: what could possibly go wrong?
Activists Turn Facial Recognition Tools Against the Police (NYTimes)
An interesting take on demonstrating the power of facial recognition technology: using it against the very people who strive to oppress everyday citizens. This article documents the efforts of developers who are using facial recognition to identify authority figures who conceal their identities while suppressing protests, for example.
In an interesting twist on Portland's ban on facial recognition technology in public spaces, the developer of this program wanted to check whether his system would be prohibited and found that it would not be, because it constitutes a private use of the technology.
As the article aptly puts it, it is not the loss of anonymity alone that strikes fear into the hearts of erring officers; it is the infamy that comes with being identified by facial recognition technology. Power dynamics are fickle, and for once it seems there is an opportunity for people to take back some control from their oppressors.
The Man Who Helped Turn 4chan Into the Internet's Racist Engine (Vice)
A horrifying story from one of the darkest corners of the internet, this article takes a deep dive into how 4chan went from being a community for discussions on everyday subjects to a cesspool of hate, bigotry, and everything wrong with humanity. It documents the rise of a single moderator who now controls what passes muster on the platform and has allowed it to spiral into a forum that creates and disseminates content that actively harms people's well-being and the integrity of our democratic processes.
While other moderators on the platform have tried to rise up against him, their efforts have been largely ineffective because of the way the policies are written and enforced. For example, he interprets the rule against racist content as applying only to the intent of the poster, not to the content itself. Intent is always guesswork, and an easy way to exculpate people spreading hateful content. As a consequence, more people piled in and began flooding other boards on the platform with this content, creating “gateways” that lead other users toward it.
This is a warning for other platforms that rely on human moderation to watch for signs of power accumulation and the skewing of the moderation and review process. Small actions can have a cumulative effect, taking a platform down a path from which it is impossible to bring it back. 4chan now represents the worst that humanity has to offer, and it has essentially been weaponized into an instrument for pushing particular political agendas by channeling content from this unfortunately fertile ground to other, more widely used platforms. The lesson here is that technology is a powerful lever, and the people who are an integral part of it have choices that should be exercised to steer its development in a direction that is good for us all.
In the Philippines, fake news can get you killed (Rest of World)
The Philippines has become an interesting case study in press freedom and freedom of speech: extreme outcomes like lynching and death are a real possibility if you're suspected of spreading misinformation on social media. Since the rise of the current president, instruments of the state have been used to kill people suspected of dealing drugs, often in violation of human rights. In this article, we get a glimpse into what happens when someone is suspected of spreading fake news or is critical of the government.
One thing worth noting about the state of the information ecosystem is that Facebook is the primary gateway through which people access the internet. Facebook offers free access to its site without data charges, which limits many people's internet experience to its walled garden, often to the point where they are unaware that an internet exists outside of Facebook. This also means that information on Facebook has an outsized impact on what people learn about their country.
While the platform declines to fact-check political information, its community standards are enforced subjectively and unevenly. Journalists are sometimes unfairly penalized: some veterans remember carrying toiletries with them in case they are captured and imprisoned. Since the Cambridge Analytica scandal, Facebook has limited third-party access to data on its platform, which has also made it hard for researchers to audit the platform; often, they have to jump through many hoops just to get basic access. If we are to combat this problem, we will need more transparency and accountability from the platform. Initiatives like the Oversight Board may seem like a step in the right direction, but without real power they risk being mere virtue signalling without any meaningful impact.
Artificial Intelligence Will Change How We Think About Leadership (Knowledge@Wharton)
An interesting article on how business leaders should think about the impact AI will have on their employees and on how they operate their businesses. Where the article unfortunately falls short is its unnecessary focus on the soul and why it must play an important role in how an organization is governed. The soul is a highly value-laden term whose meaning differs widely across cultures and geographies, which limits the applicability of the arguments made by the author of the book featured in the article. In addition, some of the technical concepts are wrongly defined, which raises questions about how technically savvy business leaders need to be. The interviewed author explicitly argues that business leaders do not all need deep technical knowledge, only a passing understanding of how AI works; yet the way the concepts are described in the article leads to problematic conclusions that can misguide business readers of both the book and the interview.
Errors aside, the article makes some useful points about how complementary soft and hard skills will become even more important going into the future. Business leaders also need to be more cognizant of where they get their information and how accurate it is. If business strategy is to hinge on an understanding of these concepts, a flawed mental model will lead to erroneous decisions that ultimately harm the organization rather than help it make the most of AI's potential. To that end, we feel strongly that a resource like the State of AI Ethics Report does a better job of conveying the necessary concepts from an ethics, safety, and inclusion perspective when it comes to the labor and business impacts of AI.
From elsewhere on the web:
Cyberattacks against machine learning systems are more common than you think (Microsoft blog)
How are adversaries attacking ML systems today? The Adversarial ML Threat Matrix will help you know what to look for in these increasingly common attacks. Our founder, Abhishek Gupta, contributed to this piece.
Decoded Reality (Art)
This is a creative exploration of the power dynamics that shape the design, development, and deployment of machine learning and data-driven systems. In this piece, we present (visually) dystopian realizations of how algorithmic interventions manifest in society, in an attempt to provoke the viewer to think critically about the socio-political underpinnings of each step of the engineering process. By Falaah Arif Khan (our Artist-in-Residence) and Abhishek Gupta (our founder).
In case you missed it:
The State of AI Ethics Report (Oct 2020)
Here's our 158-page report on The State of AI Ethics (October 2020), distilling the most important research & reporting in AI Ethics since our June report. This time, we've included exclusive content written by world-class AI Ethics experts from organizations including the United Nations, AI Now Institute, MIT, Partnership on AI, Accenture, and CIFAR.
To delve deeper, read the full report here.
Take Action:
Help us understand privacy preferences on social media in India
Privacy on social media has increasingly become an important issue for participation on the Internet. This survey aims to collect information on Indian social media users’ privacy attitudes and behaviors. Our four platforms of focus are Facebook, YouTube, LinkedIn, and Twitter. You can find out more about our research here.
MAIEI Learning Community
Interested in discussing some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Events:
As part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.
Ethics in AI Panel (McGill AI Society)
Topic: General AI ethics
Speakers: Negar Rostamzadeh (Google Brain Ethical AI Team), Falaah Arif Khan (Artist-in-Residence at MAIEI), Golnoosh Farnadi (postdoctoral IVADO fellow)
Date: Tuesday November 17 from 6 PM EST – 7 PM EST
Zoom registration: here!
Submit questions & learn more about the speakers: here!
AI, Ethics, & Your Business (Springer Nature’s AI & Ethics Journal)
Topic: How AI and Ethics fit into the corporate landscape
Speakers: Dr. Amy Shi-Nash of HSBC, UK; Abhishek Gupta, Montreal AI Ethics Institute and Microsoft, Canada; Pavel Abdur-Rahman, IBM Canada; and Professor John MacIntyre, University of Sunderland, UK and Editor-In-Chief of Springer's AI and Ethics journal
Date: Thursday November 19 from 12 PM EST - 1 PM EST
Webinar registration: here!
Women in AI Ethics™ Asia-Pacific Summit (Women in AI Ethics™)
Topic: Current state of diversity + ethics in AI, and building meaningful action plans for progress
Speakers: Too many to name (see the full agenda here), including some of our own researchers
Date: Tuesday Nov. 10 from 6 PM EST - Wednesday Nov. 11 at 6 AM EST
Eventbrite tickets: here!
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this newsletter and know someone else who could benefit from it, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai