The AI Ethics Brief #43: Learning Community 2.0, India, China, large language models, and more ...
What happens when we start tailoring technology to meet the needs of the elderly?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
What we are thinking:
From the Founder’s Desk: Introduction to ethics in the use of AI in war: Part 2
The Sociology of AI Ethics: Upgrading China Through Automation: Manufacturers, Workers and Techno-Development State (Research Summary)
Research summaries:
Understanding the Capabilities, Limitations, and Societal Impacts of Large Language Models
Unprofessional Peer Reviews Disproportionately Harm Underrepresented Groups in STEM
Article summaries:
Artificial Intelligence Has Yet To Break The Trust Barrier (Forbes)
China is home to a growing market for dubious “emotion recognition” technology (Rest of World)
China's Big Tech goes for the Big Grey (Protocol)
Clubhouse Is Suggesting Users Invite Their Drug Dealers and Therapists (OneZero)
A New Australian Law Is the Wrong Answer to Big Tech (OneZero)
Facebook Says “Technical Issues” Were the Cause of Broken Promise to Congress (The Markup)
But first, our call-to-action this week:
Apply to our upcoming cohort-based learning community!
By popular demand, we’re bringing back our learning community, but with a twist! This time, we’re selecting a cohort of 15 motivated individuals to learn and build in public. That includes reading papers about the most pressing ethical challenges facing AI and discussing them in weekly live-streamed seminars. In addition, members will contribute to a digital AI ethics library we’re creating for the public, and collaborate on a Community Insights Report we’ll publish & share with our wider community.
With our open learning model, we aim to cultivate a friendly learning community where individuals from a wide range of backgrounds will be encouraged to share their perspectives on the field.
15 people will be chosen to participate in this beta test, which will run throughout March and April. If you cannot take part this Spring, rest assured that future cohorts will be announced throughout the year.
📅 Deadline to apply: February 22nd (Monday) @ 11:45 PM EST.
✍️ What we’re thinking:
From the Founder’s Desk:
Introduction to ethics in the use of AI in war: Part 2 by Abhishek Gupta
Building on Part 1 of the article, let's dive into some more ideas in the discussion of ethics in the use of AI in war.
In Part 1, I had covered:
the quick basics: autonomous weapons systems, semi-autonomy, full autonomy, lethal use, and non-lethal use
the potential advantages and costs
If you haven't had a chance to read the first part yet, I strongly encourage you to do so, as we will build on the definitions explained there to discuss the issues in this one.
Let's dive into:
Current limitations of ethics principles
Key issues - Part 1
To delve deeper, read the full article here.
The Sociology of AI Ethics:
This research examines the politics of technology and automation in China. Specifically, the author investigates how different actors, including a local government, electronics companies, and workers, perceived and made decisions about automation under China’s techno-developmentalism. The author found that all actors embraced the national government’s developmental policies and reproduced automation imaginaries, “which embrace the abstract notions of technological progress over the actual efficacy of automation, labour protection, and social equality.” In the process, however, low-skilled workers were marginalized.
To delve deeper, read the full summary here.
🔬 Research summaries:
Understanding the Capabilities, Limitations, and Societal Impacts of Large Language Models by Alex Tamkin, Miles Brundage, Jack Clark, and Deep Ganguli
The paper provides insights and different lines of inquiry on the capabilities, limitations, and societal impacts of large-scale language models, specifically in the context of GPT-3 and other such models that might be released in the coming months and years. It also dives into questions of what constitutes intelligence and how such models can be better aligned with human needs and values. All of this is based on a workshop convened by the authors, with participants from a wide variety of backgrounds.
To delve deeper, read the full summary here.
Unprofessional Peer Reviews Disproportionately Harm Underrepresented Groups in STEM by Nyssa J. Silbiger, Amer D. Stubler
The peer review process is integral to the advancement of scientific research, yet systematic biases erode the principle of fair assessment and carry downstream effects on researchers’ careers. Silbiger and Stubler, in their international survey of researchers in STEM, demonstrate how unprofessional peer reviews disproportionately harm underrepresented groups and perpetuate existing gaps in the field.
To delve deeper, read the full summary here.
📰 Article summaries:
Artificial Intelligence Has Yet To Break The Trust Barrier (Forbes)
As AI becomes even more pervasively deployed, the question of trust keeps popping up. The study referenced in this article looks at where trust in machines is higher and, subsequently, where it might be best to deploy them. For example, the authors found that people trusted machine recommendations more when the purchase was utilitarian, say picking a computer (how meta!). But for experiential things like flower arrangements, they tended to side with human recommendations.
Coining the phrase “word-of-machine,” the authors seek to explain why machine recommendations are preferred for utilitarian items but not for experiential ones. They add that not all hope is lost for those seeking to make machine recommendations for experiences: hybrid teams, with a human-in-the-loop who further handpicks recommendations from those shortlisted by a machine, might be the way to go.
This is reminiscent of the Guardian article that was written by GPT-3, only after human editors picked from 5-6 outputs generated by the system; perhaps not all that different from the process that human writers undergo with their human editors at the moment.
China is home to a growing market for dubious “emotion recognition” technology (Rest of World)
Facial recognition and emotion recognition are technologies that have gained a lot of attention in the past 18 months, rightly so given all the negative consequences that arise from their current limits. In this article, drawing on recent work by Article 19, a British organization, we get a peek into the pervasiveness of emotion recognition in classrooms in China. The limitations of systems attempting to recognize emotions by scanning physiological expressions are well known. One reason for those limitations is the wide variation in how emotions are expressed across cultures and regions; facial expressions often do not carry the same meanings universally.
Yet, in highly competitive environments like China and India, where the school system offers a path to an improved standard of living, companies peddling these systems pander to the anxieties of parents to get schools to purchase and deploy them. With the pandemic in full swing, even in countries like the US, such systems have been used experimentally in schools with the intention of boosting the productivity of students and teachers in the classroom.
Such systems suffer from biases and failures similar to those of facial recognition systems, as covered many times before in The AI Ethics Brief; hence, our vigilance and awareness of their shortcomings should be channelled into advising schools and other institutions to tread very carefully before using untested technology in the classroom on those who are unable to offer informed consent.
China's Big Tech goes for the Big Grey (Protocol)
Technology is quite ageist at the moment and current design practices might not always take the needs of the elderly into account. A few startling examples quoted in the article talk about the spending power of this demographic, making a strong business case for including design elements that address the needs of the elderly. This demographic’s rate of saving is one of the highest, as is their spending power — but companies are only realizing that now, at least in China; the rest of the world has some catching up to do. Reports by Alibaba and other companies all attest to this trend.
The article makes a valid point that this demographic, whose spending has been dubbed the “silver hair economy” in China, is perhaps the last bastion of new users that local companies can reach, having saturated all other user bases within the Chinese market. Apps like Meipian and Tangdou, which respectively offer easy ways to collage photos and teach elderly women to dance, have raised serious funding, showing how ripe the market is for these opportunities.
The article also bears reading for those in markets outside of China, as the demographic composition of the rest of the world is gradually shifting towards older people as well. Instead of perceiving the elderly as technologically less competent, re-envisioning design practices to meet their needs head-on will boost the ability of technology to uplift the lives of all around us. Sociologists, though, warn that profit motives and opportunism in this space can have negative effects like privacy invasion, especially for a demographic that might not be fully aware of how their data is being used, or might not have the ability to consent properly in the face of declining cognitive ability. Something to keep in mind as we try to make technology accessible to all.
Clubhouse Is Suggesting Users Invite Their Drug Dealers and Therapists (OneZero)
Clubhouse is the talk of the (virtual) town, and even hardened privacy-minded folks have succumbed to the allure. I was guilty of the same when I saw that some other AI ethics folks were hosting conversations there (ironic!) and wanted to listen in. Luckily for me, I don’t have an iPhone, so I am unable to join. But after reading this article, I felt it was a bullet dodged. The article highlights some of the insidious ways that information about your social graph is leaked once you join the app.
It employs dark patterns like strong nudges to get you to share your phone’s contact list; you can choose not to, but the app then strips away your ability to send out more invites. Additionally, it shows you a list of your contacts ranked by how many other Clubhouse users also have them in their contacts. This can expose more data than one would like, for example, how many other people have the same lawyer, doctor, etc. as you.
Another interesting point made by the author is that most of the people who were going to join the app (the more tech- and marketing-savvy folks) have presumably done so by now, so those still listed as the top people you should invite are likely those ardently refusing to join, which itself reveals information about them. Just as Facebook many years ago built shadow profiles of people who were not on the platform, inferring things about them through its users and their contacts, Clubhouse now has the same ability. Until we get more information about its data storage practices and how it complies with lawful intercepts, perhaps it is best to stay away from an app with such severe privacy issues.
A New Australian Law Is the Wrong Answer to Big Tech (OneZero)
Australia has put forward a new regulation, the News Media and Digital Platforms Mandatory Bargaining Code, meant to reduce the influence of large platforms that drive a lot of traffic around news media and affect the bottom line of publications. The article makes some interesting points about how the regulation, in its current form, misses the key problems facing these publications in the first place.
While it is great that the regulation forces large companies like Google and Facebook to come to the negotiating table, it does so by asking the platforms to pay for any snippets and content featured on their platforms. This is problematic because it restricts how such links can be shared in the first place, and the platforms are responsible for driving a large share of traffic to news media websites. Pulling the plug on that would mean the news media websites would have to find all that traffic on their own, organically or through ads, something that might further injure their bottom line. It is also antithetical to the ethos of the web, in which links are shared openly, and might penalize smaller companies that do link out to such news media articles.
One thing the regulation does get right is that it potentially creates a more level playing field: news media organizations would no longer have to bow to the proprietary formats and requirements that platforms like Google and Facebook impose, for example Google AMP, which burden content producers and serve the needs of the platforms by locking publishers into their distribution networks. The platforms do offer more prominent placement for that content, but at a cost that might be too much to bear for all but the largest organizations. Ultimately, going back to the more open ethos of the internet might just be the way to solve some of these emergent issues.
Facebook Says “Technical Issues” Were the Cause of Broken Promise to Congress (The Markup)
Through the Citizen Browser project, The Markup has been able to surface data on how actively Facebook recommends political groups and how their membership is growing. Close to Election Day last year, Facebook promised to reduce such recommendations, following calls from Congress and the Senate in the US to ensure fairer elections, at least from an information diffusion standpoint.
But, as the results from this analysis show, Facebook has failed to deliver on that promise, pointing to technical issues as the primary culprit. The Citizen Browser is a unique initiative that gives insights into the kind of content recommended to people, something that is not otherwise possible because of the closed nature of the platform. While the results and the data from the study are linked in the article (and I encourage you to check them out), what is startling is that a small initiative like this from The Markup is able to unearth these problems while a well-funded team at Facebook is unable to spot and address them despite access to a large amount of data and resources.
Political polarization is a persistent problem that is harming the fundamental tenets of democracy, and the platforms aren’t yet doing much to weed it out and create a healthier information ecosystem. There are justified concerns about potentially snuffing out grassroots initiatives that use the platform to mobilize action, but at what cost? Perhaps the inherent structure of the platforms and their associated incentives are the biggest culprits in the first place.
From our Living Dictionary:
Example of why AI ethics matters for Facial Recognition Technology (FRT)
In 2019, the civil liberties group Liberty took South Wales Police (SWP) to court in the UK, arguing that the force’s use of automated facial recognition for identification breached the Data Protection Act and the Equality Act. The High Court initially ruled in favour of the SWP, finding that it had followed the required legal frameworks and had not used FRT arbitrarily. However, Liberty appealed, and in August 2020 the Court of Appeal deemed the SWP’s use unlawful, finding that the force had not conducted an adequate data protection impact assessment and had not sufficiently checked its technology for racial and gender bias.
👇 Learn more about the relevance of FRT and more in our Living dictionary.
Explore the Living Dictionary!
From elsewhere on the web:
The urgent need for regulating global ghost work (Brookings)
Alexandrine Royer from our team wrote this op-ed for Brookings recently, explaining where the gig economy went wrong and why we need to regulate ghost work.
Top AI & Data Science Newsletters On Substack (Analytics India Magazine)
If you’re looking for great AI-related newsletters, here’s a list of 7 great Substacks.
Guest post:
Re-imagining Algorithmic Fairness in India and Beyond by Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran
The authors argue that several assumptions of algorithmic fairness are challenged in India. The distance between models and disempowered communities is large, and a myopic focus on localising fair model outputs alone can backfire in India. They point to three themes that require us to re-examine ML fairness: data & model distortions, double standards & distance by ML makers, and unquestioning AI aspiration.
To delve deeper, read the full summary here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
Toward Fairness in AI for People with Disabilities: A Research Roadmap by Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna Wallach, Meredith Ringel Morris
In this position paper, the authors identify potential areas where Artificial Intelligence (AI) may impact people with disabilities (PWD). Although AI can be extremely beneficial to these populations (the paper provides several examples of such benefits), there is a risk of these systems not working properly for PWD or even discriminating against them. This paper is an effort towards identifying how inclusion issues for PWD may impact AI, which is only a part of the authors’ broader research agenda.
To delve deeper, read the full summary here.
Take Action:
Apply to our upcoming cohort-based learning community!
By popular demand, we’re bringing back our learning community, but with a twist! This time, we’re selecting a cohort of 15 motivated individuals to learn and build in public. That includes reading papers about the most pressing ethical challenges facing AI and discussing them in weekly live-streamed seminars. In addition, members will contribute to a digital AI ethics library we’re creating for the public, and collaborate on a Community Insights Report we’ll publish & share with our wider community.
With our open learning model, we aim to cultivate a friendly learning community where individuals from a wide range of backgrounds will be encouraged to share their perspectives on the field.
15 people will be chosen to participate in this beta test, which will run throughout March and April. If you cannot take part this Spring, rest assured that future cohorts will be announced throughout the year.
📅 Deadline to apply: February 22nd @ 11:45 PM EST.
Events:
The state of AI ethics in Spain and Canada (El estado de la ética IA en España y Canadá)
We’re partnering with OdiseIA to discuss the state of AI ethics in Canada and in Spain. The discussion will span topics including country-specific regulations, commonalities across both countries, and the type of federal policies that will be needed to move the needle.
📅 February 26th (Friday)
🕛12 PM - 1:30 PM EST
🎫 Get tickets