AI Ethics Brief #68: Combatting anti-blackness in AI, corporate governance of AI, $10 for your palm prints, and more ...
How do we fight vaccine lies online?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is an ~11-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
🔬 Research summaries:
Combatting Anti-Blackness in the AI Community
Corporate Governance of Artificial Intelligence in the Public Interest
📰 Article summaries:
Amazon will pay you $10 in credit for your palm print biometrics
To Fight Vaccine Lies, Authorities Recruit an ‘Influencer Army’
Optimizing People You May Know (PYMK) for equity in network creation
Let’s Keep the Vaccine Misinformation Problem in Perspective
But first, our call-to-action this week:
The State of AI Ethics Report (Volume 5) captures the most relevant developments in AI Ethics since the first quarter of 2021. We’ve distilled the research & reporting around 3 key themes:
Creativity and AI
Environment and AI
Geopolitics and AI
We also have our evergreen section Outside the Boxes that captures insights across an eclectic mix of topic areas for those looking for a broad horizon of domains where AI has had a societal impact. And we bring back the community spotlights to showcase meaningful work being done by scholars and activists from around the world.
This edition opens with a section that has been much requested by you, our community, titled “What we’re thinking” that gives insights into emergent trends and gaps that we’ve noticed in existing coverage of AI ethics. We also have a special contribution titled “The Critical Race Quantum Computer: A Tool for Liberation” by Michael Lipset, Jessica Brown, Michael Crawford, Kwaku Aning, & Katlyn Turner with a very intriguing framing of how we think about race and technology.
🔬 Research summaries:
Combatting Anti-Blackness in the AI Community
Racism has the potential to establish itself in every corner of society, and the AI community is no different. With a mix of observations and advice, the paper underscores the need for change alongside the academic environment’s potential to manifest it. While some of the steps involved carry risk, the danger of inaction is even greater.
To delve deeper, read the full summary here.
Corporate Governance of Artificial Intelligence in the Public Interest
How can different actors improve the corporate governance of AI in the public interest? This paper offers a broad introduction to the topic, surveying opportunities for nine types of actors inside and outside the corporation. In many cases, the best results will accrue when multiple types of actors work together.
To delve deeper, read the full summary here.
📰 Article summaries:
Amazon will pay you $10 in credit for your palm print biometrics
What happened: Amazon is rolling out payments in its physical stores using a contactless palm scanner and is offering $10 in store credit to those who enrol in the service. Contactless payments do seem attractive in a pandemic, but there are obvious cybersecurity concerns, especially around the privacy of immutable personally identifiable information: unlike your phone number or home address, your biometrics cannot be changed.
Why it matters: Amazon doesn’t have a great track record when it comes to the use of biometrics; we all remember their Rekognition program and the ensuing problems, including inaccuracies in the inferences generated by that system. In addition, Amazon is a private company that could turn around and sell this data to data brokers, who can collate it with other information about you floating around on the internet, with disastrous consequences. My paper from 2018, “The Evolution of Fraud: Ethical Implications in the Age of Large-Scale Data Breaches and Widespread Artificial Intelligence Solutions Deployment,” discussed some horrifying consequences that might arise from leaked biometric data.
Between the lines: The headline is sensationalist in its framing, focusing on how much customers will be compensated for their biometrics. This detracts from the more important issue: how and when biometrics should be used, and what regulations we need to develop to ensure their safe usage.
To Fight Vaccine Lies, Authorities Recruit an ‘Influencer Army’
What happened: While this publication usually covers the negative effects of social media and the disinformation and misinformation it facilitates, this article highlights a great example of a government effort to harness the power of influencers in spreading “positive information” to get people vaccinated. The White House is working with influencers on TikTok, YouTube, and other platforms to get them to share the message of vaccination with their large follower bases.
Why it matters: Vaccination rates in the US have been higher among older demographics than younger ones, and younger audiences are exactly who influencers on social media can reach. As a survey cited in the article points out, people tend to be more persuaded by the content creators they watch or listen to on social media than by other publication outlets. This makes it the perfect channel to quash rumors, answer questions about vaccination, and urge people to go out and get the jab.
Between the lines: Borrowing tactics used for political mobilization during the Biden campaign, the White House is now applying the same insights and approach to get an important message out to the people: get vaccinated. While DiResta points out in the article that those looking to spread disinformation are more motivated, and that organic reach can exceed targeted measures like these, it is still a good first step in countering “negative information” with action rather than merely trying to suppress misinformation and disinformation on these platforms. A multi-pronged approach will always be more effective in countering pervasive problems like this one.
Optimizing People You May Know (PYMK) for equity in network creation
What happened: LinkedIn has applied two fairness measures, equality of opportunity and equalized odds, to make recommendations for potential connections more equitable across users of the platform, especially for those whose profiles are not as “influential” as those of the more frequent members (FM) of the platform. In applying these fairness measures on top of their ranking algorithms, they found that engagement on the platform didn’t go down, showing that fairness objectives don’t necessarily have to stand against business objectives.
Why it matters: All social media platforms are prone to bias in which the platform’s benefits skew toward those who have amassed influence both on and off the platform. Creating opportunities for newer entrants is a way of distributing those benefits more fairly, as this article shows for LinkedIn, where profiles with fewer pending requests are now recommended more often, among other balancing factors.
Between the lines: As social media platforms take on a more pervasive presence in our lives, those that seek to aid the less influential participants of their ecosystems have the potential to become more prominent. In a sense, we have seen this happen with TikTok, where even new entrants have a chance to quickly amass huge followings, compared to other platforms where having a large following predisposes you to dominance in that platform’s newsfeed.
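For readers curious about what the two fairness measures named above actually check, here is a minimal toy sketch (the group names, data, and variable names are our own illustrative assumptions, not LinkedIn’s implementation): equality of opportunity asks that the true-positive rate be equal across groups, while equalized odds additionally requires equal false-positive rates.

```python
# Toy illustration of equality of opportunity vs. equalized odds.
# y_true: 1 = the user would accept the suggested connection;
# y_pred: 1 = the system recommended it. All data is invented.

def rates(y_true, y_pred):
    """Return (true positive rate, false positive rate) for one group."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

# Hypothetical outcomes for "frequent members" vs. newer entrants.
groups = {
    "frequent": ([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]),
    "newer":    ([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]),
}
tprs, fprs = {}, {}
for name, (y, p) in groups.items():
    tprs[name], fprs[name] = rates(y, p)

# Equality of opportunity: the TPR gap between groups should be ~0.
eo_gap = abs(tprs["frequent"] - tprs["newer"])
# Equalized odds: both the TPR and FPR gaps should be ~0.
eod_gap = max(eo_gap, abs(fprs["frequent"] - fprs["newer"]))
print(eo_gap, eod_gap)
```

In this toy data the recommender finds qualified frequent members more often than qualified newcomers (TPR 1.0 vs. 0.67), so both gaps are nonzero; a fairness-constrained re-ranker of the kind the article describes would aim to shrink them without hurting engagement.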
Let’s Keep the Vaccine Misinformation Problem in Perspective
What happened: An insightful article on the complexity of disentangling the myriad effects of misinformation from other confounding factors behind low vaccination rates among certain demographics in the United States. A study cited in the article finds a strong correlation between vaccine hesitancy and a general mistrust of mainstream institutions, which pushes people toward getting their news from social media rather than from more trusted and reputable sources. The recent clash between the White House and Facebook over the role Facebook has played in amplifying vaccine hesitancy, with the US falling behind its vaccination goals, demonstrates how collapsing the issue into easy-to-reason-with binaries makes it a tough problem to solve. In particular, the article points out that some advocate spending this political capital differently, perhaps to spark vaccine mandates by offices and schools, while treating misinformation’s role as a broader issue that needs more long-term solutions.
Why it matters: As with any socio-technical issue, a range of variables shapes the inputs and outputs of the problem. What we see here is that a lack of transparency on the part of Facebook and YouTube, for example, limits researchers’ ability to gather adequate data on how strongly vaccine hesitancy correlates with the content people see online, and on how effective the platforms’ measures to limit the spread of vaccine misinformation have been.
Between the lines: The recent denial of access to researchers studying the Facebook platform is yet another blow that will only deepen the chasm between positive public health outcomes and the role that a company like Facebook can play, and is playing, in them. The Klobuchar bill mentioned in the article, serving as a messaging bill, is a first step toward establishing baselines for tackling the issue, but it raises even more questions than it tries to solve: namely, transferring the arbitration of what is and is not misinformation from the platforms and their communities of moderators to the government, which would certainly raise First Amendment concerns.
From our Living Dictionary:
‘Automation bias’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
From elsewhere on the web:
A roadmap to more sustainable AI systems
AI has a sizeable carbon footprint, both during training and deployment phases. How do we build AI systems that are greener? The first thing we need to understand is how to account and calculate the carbon impact of all the resources that go into the AI lifecycle. So what is the current state of carbon accounting in AI? How effective has it been? And can we do better? This conversation will answer these questions and dive into what the future of carbon accounting in AI looks like and what role standards can play in this, especially if we want to utilize actionable insights to trigger meaningful behavior change.
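At its core, the carbon accounting the conversation above describes comes down to a simple product: energy consumed by the hardware, scaled up by datacenter overhead, times the carbon intensity of the grid that supplied it. A minimal back-of-the-envelope sketch (the GPU wattage, runtime, PUE, and grid-intensity figures below are illustrative assumptions, not measurements from any real training run):

```python
# Back-of-the-envelope CO2e estimate for a training run.
# Every input number here is an illustrative assumption.

def training_co2_kg(gpu_count, avg_gpu_watts, hours, pue, grid_kg_per_kwh):
    """Estimate kg CO2e: device energy, scaled by datacenter overhead
    (PUE covers cooling, networking, etc.), times grid carbon intensity."""
    device_kwh = gpu_count * avg_gpu_watts * hours / 1000.0
    total_kwh = device_kwh * pue
    return total_kwh * grid_kg_per_kwh

# e.g. 8 GPUs drawing ~300 W for 72 hours, PUE of 1.5,
# on a grid emitting 0.4 kg CO2e per kWh:
estimate = training_co2_kg(8, 300.0, 72.0, 1.5, 0.4)
print(round(estimate, 2))  # ≈ 103.68 kg CO2e
```

The hard part of carbon accounting in practice is not this arithmetic but obtaining honest values for each factor: measured rather than nameplate power draw, the real PUE of the facility, and the time-varying carbon intensity of the local grid, which is exactly where standards could help.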
In June 2021, four of us from the Salesforce AI Ethics and Conversational Design teams collaborated with the Montreal AI Ethics Institute (MAIEI) to facilitate a workshop on the responsible creation and implementation of chatbots and conversational assistants. Connor Wright has summarized 10 insights from the workshop and we’d like to go deeper on a few of the themes and questions raised by the conversation.
To delve deeper, read the full article here.
Take Action:
The State of AI Ethics Report (Volume 5) captures the most relevant developments in AI Ethics since the first quarter of 2021 — see our call-to-action above for the full overview of its themes and contents.