The AI Ethics Brief #57: Race and Digital Society, AI winter to AI hype, privacy in China, and more ...
How can humans and AI collaborate in artistic performance?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The Sociology of Race and Digital Society
The Future of Teaching Tech Ethics
🔬 Research summaries:
On Human-AI Collaboration in Artistic Performance
From AI Winter to AI Hype: The Story of AI in Montreal
📰 Article summaries:
US-China tech war: Beijing's secret chipmaking champions (Nikkei Asia)
Chinese Users Do Care About Online Privacy (Protocol)
AI and Climate Change: The Promise, the Perils and Pillars for Action (ClimateAction.tech)
Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites (The Markup)
But first, our call-to-action this week:
[Happening TOMORROW] Register for The State of AI Ethics Panel (May 26th)
Now that we’re nearly halfway through 2021, what’s next for AI Ethics? Hear from a world-class panel, including:
Soraj Hongladarom — Professor of Philosophy and Director, Center for Science, Technology and Society at Chulalongkorn University in Bangkok (@Sonamsangbo)
Dr. Alexa Hagerty — Anthropologist, University of Cambridge’s Centre for the Study of Existential Risk (@anthroptimist)
Connor Leahy — Leader at EleutherAI (@NPCollapse)
Stella Biderman — Leader at EleutherAI (@BlancheMinerva)
Abhishek Gupta — Founder, Montreal AI Ethics Institute (@atg_abhishek)
📅 May 26th (Wednesday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets
✍️ What we’re thinking:
Sociology and AI Ethics:
The Sociology of Race and Digital Society
Tressie McMillan Cottom brings together the concepts of platform capitalism and racial capitalism to study how the economic changes wrought by digital technology are reshaping ethnicity, race, and racism. She explores how ideas of race, racial relationships, and inequalities are produced and reproduced as more and more of our social lives are mediated online. She argues that by privatizing these interactions, the Internet obscures many of these racialized relationships between producers and consumers, and that the most vulnerable in society are brought into the fold, but usually on exploitative terms.
To delve deeper, read the full article here.
Office Hours:
The Future of Teaching Tech Ethics
How do you see the Tech Ethics Curriculum landscape evolving over the next 5 years? In this column, three experts tackle this and other important questions about the development of an effective, inclusive, and comprehensive Tech Ethics Curriculum for the future.
To delve deeper, read the full article here.
🔬 Research summaries:
On Human-AI Collaboration in Artistic Performance
How can AI systems enhance the interactions between a human and an artificial performer to create a seamless joint performance? In this paper, researchers create a new model for human-AI collaboration using an AI system to mediate between the agents and test the model in three different performance scenarios.
To delve deeper, read the full summary here.
From AI Winter to AI Hype: The Story of AI in Montreal
This paper explores the emergence of AI as a new industry in Montreal, Canada. It highlights the key roles that different actors (i.e., individuals/organisations/institutions) played individually and collectively over three decades in creating the thriving AI ecosystem that put Montreal on the world AI map.
To delve deeper, read the full summary here.
📰 Article summaries:
US-China tech war: Beijing's secret chipmaking champions (Nikkei Asia)
What happened: A range of Chinese chip manufacturers have embarked on a gargantuan effort to trace their entire supply chains and establish the provenance of their bills of materials. This is a response to Beijing’s call for self-sufficiency, fueled by ongoing US-China tensions, especially as Chinese companies get put on the US Entity List, which prohibits US companies from supplying them. The ripple effects have left companies from other countries on tenterhooks about supplying these firms. Moreover, given the immense complexity of the chip manufacturing supply chain, coupled with the extreme concentration of manufacturing equipment and raw materials in different parts of the world, a clean decoupling will be very hard for the foreseeable future. This holds not just from the Chinese perspective but also from the US perspective, since China still accounts for a large fraction of the raw materials needed for chip manufacturing.
Why it matters: The Chinese government has been calling for self-sufficiency for many years; the current trade tensions only accelerate that trend. They also give the government cover to boost local initiatives and bring home the knowledge and expertise required to produce chips, arguably the hottest commodity given the rising pace of digitalization. From the US perspective, decoupling has strategic value: “neck-choking” capabilities, i.e., choke points in the supply chain, become leverage for negotiating toward other strategic aims.
Between the lines: One upside of this trend is more widely distributed knowledge and potentially increased competition in what is today a highly concentrated supply chain. On the other hand, knowledge sharing may become more limited as countries realize they need to shore up domestic competence, lest a more siloed world cost them their technological edge through lack of access to chips, which have become a flashpoint in the technological ecosystem.
Chinese Users Do Care About Online Privacy (Protocol)
What happened: A trope that gets tossed around quite a bit is that in China, privacy is a second-class citizen to convenience. Yet a number of cases highlighted in this article show that Chinese users, companies, and even the government do care about privacy. In one momentous case, someone sued a national park for requiring facial recognition to enter the premises, and won. Now, inspired by the EU’s push on privacy regulation, China may end up with privacy laws stronger than those elsewhere in the world.
Why it matters: While the bill is still in draft form and could end up being watered down, it is a positive sign that policy and regulatory changes in one part of the world can have an impact elsewhere. The tech governance and development ecosystem can co-evolve in a way that ultimately benefits users, though it requires careful shepherding to get there.
Between the lines: Articles that highlight balanced viewpoints are important; it is easy to fall into hype and tropes that skew meaningful public discussion. When it comes to different tech ecosystems around the world, speaking with local scholars and folks on the ground helps us understand what actually matters to those people, rather than imposing our assumptions about how that system operates.
AI and Climate Change: The Promise, the Perils and Pillars for Action (ClimateAction.tech)
What happened: Time and again, AI has been positioned as a saviour for our climate change woes: better freight allocation, precision agriculture that conserves resources like land and water, better weather forecasting, and optimized heating and cooling of buildings to save energy, among other applications. Yet this framing often underemphasizes the environmental impact of AI itself and sometimes overstates the actual impact the technology can have in achieving these goals.
Why it matters: The actual benefits this technology can bring, along with side effects such as the high energy consumption of training these models and then running inference with them, should be analyzed carefully and made more explicit and transparent. As the author points out, creating technical environments that normalize carbon considerations in the use of this technology is a great first step. This can help integrate AI work with other climate justice efforts, leveraging the expertise and progress from those initiatives.
Between the lines: Ultimately, AI is not a replacement for other climate change efforts. It is a tool that can aid those efforts, but it should be used under the guidance of domain experts. Sharing this expertise through grassroots collaborations is another way to make such efforts more potent.
Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites (The Markup)
What happened: Cookies are used to track users, and data associated with website visits, potentially collated with other sources, can yield rich profiles of any individual. An investigation by The Markup revealed that Nevada’s COVID-vaccination website had more trackers than the vaccine websites of 46 other states combined! To their credit, the website’s creators removed the offending cookies after being shown the results of the investigation. Nevada lawmakers are also putting forth privacy legislation that would require websites to follow privacy guidelines when they provide, or are contracted to provide, a public service.
Why it matters: While cookies can enable functionality on websites, the more invasive kinds track and share information in ways that can lead to privacy violations galore. Worse, faced with complicated consent prompts, users are often perplexed and leave trackers enabled even when they don’t want to. Privacy legislation that targets at least those websites and services run for the general public can help alleviate part of this burden on users.
Between the lines: Tools like Blacklight, which The Markup built and used to run this analysis of COVID-vaccination websites, are essential in the fight to protect our liberties and rights. They require an upfront investment to develop, but once created they become powerful instruments for unearthing problems like privacy violations in a systematic and scalable manner. We should encourage the development of more such tools, especially open-source ones accessible to everyone.
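For readers curious about what tools like this do under the hood: Blacklight itself uses a headless browser to observe real network activity, but as a loose illustration of the idea (not Blacklight’s actual method), here is a minimal sketch that flags third-party script hosts in a page’s HTML. The page snippet and domain are made up for the example.

```python
# Illustrative sketch only (not The Markup's Blacklight): given a page's HTML
# and the site's own domain, list external script hosts -- a rough proxy for
# third-party trackers embedded in the page.
import re
from urllib.parse import urlparse

def third_party_script_hosts(html: str, site_domain: str) -> set:
    """Return the set of external hostnames loaded via <script src=...>."""
    hosts = set()
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.I):
        host = urlparse(src).netloc
        # Keep only hosts that are not part of the site's own domain.
        if host and not host.endswith(site_domain):
            hosts.add(host)
    return hosts

page = """
<script src="https://www.googletagmanager.com/gtag.js"></script>
<script src="https://example.gov/static/app.js"></script>
<script src="https://connect.facebook.net/en_US/fbevents.js"></script>
"""
print(third_party_script_hosts(page, "example.gov"))
```

Real tracker detection is far more involved (cookies, pixels, fingerprinting scripts, network requests made at runtime), which is exactly why purpose-built, open tools are so valuable.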
In case you missed it:
Diagnosing Gender Bias In Image Recognition Systems
This paper examines gender biases in commercial image recognition systems. Specifically, the authors show how these systems classify, label, and annotate images of women and men differently, and conclude that researchers should be careful when using labels produced by such systems in their research. The paper also provides a template social scientists can use to evaluate these systems before deploying them.
To delve deeper, read the full report here.
Take Action:
Events:
The Triangle of Trust in Conversational Ethics and Design: Where Bots, Language and AI Intersect
We’re partnering with Salesforce to host a discussion about conversational ethics and design.
Conversational AI enables people to communicate via text or voice with automated systems like smart speakers, virtual assistants, and chatbots. Leveraging Automatic Speech Recognition (ASR) and Natural Language Processing (NLP), these systems can recognize speech, understand context, remember previous dialogue, access external knowledge, and generate text or speech responses.
However, conversational AI may not work equally well for everyone, and may even cause harm due to known or unknown bias and toxicity. Additionally, generating “personalities” for bots or virtual assistants creates risks of appearing inauthentic, manipulative, or offensive. In this workshop, we will discuss the issues of bias, harm, and trust where bots, language, and AI intersect.
📅 June 10th (Thursday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets