AI Ethics Brief #87: AI inequity and climate change, social dilemma in AI development, the role of standards in the EU AI regulation, and more ...
What are some issues in AI ethics in the process of recruitment?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~19-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
🔬 Research summaries:
Common but Different Futures: AI Inequity and Climate Change
The social dilemma in artificial intelligence development and why we have to solve it
Harmonizing Artificial Intelligence: The role of standards in the EU AI Regulation
Analysis and Issues of Artificial Intelligence Ethics in the Process of Recruitment
📰 Article summaries:
The true cost of Amazon’s low prices
The Humanities Can’t Save Big Tech From Itself
Climate change: Small army of volunteers keeping deniers off Wikipedia
📖 Living Dictionary:
Proxy Variables
🌐 From elsewhere on the web:
Seven AI ethics experts predict 2022’s opportunities and challenges for the field
💡 ICYMI
The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms
But first, our call-to-action this week:
Keeping in line with our mission to democratize AI ethics literacy, we want to create new opportunities to feature writing from our community members!
If you would like to have your work featured on our website and included in this newsletter to reach thousands of technical and policy leaders in AI ethics, reach out to us!
We are currently looking for research summary writers and will open up more writing opportunities in the coming months for regular contributors.
🔬 Research summaries:
Common but Different Futures: AI Inequity and Climate Change
AI, experts say, can help “solve” climate change. At the same time, the carbon footprint of emerging technologies like AI is increasingly under scrutiny, especially due to pressure from climate-conscious shareholders and consumers. The “Global South” faces a dual challenge: first, the social and economic benefits of AI are accruing to a privileged few countries, and second, most of the efforts and narratives on AI and climate impact are being driven by the developed West; by not engaging with these debates early on, Global South countries risk being locked into rules and terms set by a small group of powerful actors. This paper proposes, among other recommendations, a revival of the CBDR (common but differentiated responsibilities) principle for the AI and climate context.
To delve deeper, read the full summary here.
The social dilemma in artificial intelligence development and why we have to solve it
Despite the abundance of Artificial Intelligence (AI) ethics guidelines, the number of unethical use cases of AI has proliferated over the years. This paper argues that a possible underlying cause is that AI developers face a social dilemma in incorporating ethics into the AI systems they develop. It then defines what a social dilemma is and describes why the current crisis in the ethical development of AI cannot be solved without relieving AI developers of this dilemma.
To delve deeper, read the full summary here.
Harmonizing Artificial Intelligence: The role of standards in the EU AI Regulation
The EU’s AI Act has envisioned a strong role for technical standards in the governance of AI. Little research has been conducted on what this role looks like, however. This paper by Oxford Information Labs provides an in-depth analysis of the world of technical standards and gives a high-level overview of the EU AI Act’s expected reliance on standards, as well as their perceived strengths and weaknesses in the governance of AI.
To delve deeper, read the full summary here.
Analysis and Issues of Artificial Intelligence Ethics in the Process of Recruitment
AI is becoming a fixture in the hiring processes of many businesses. While its involvement in the process varies from case to case, the attention required to tackle the problem of bias does not.
To delve deeper, read the full summary here.
A message from our sponsor this week:
All datasets are biased.
This reduces your AI model's accuracy and introduces legal risks.
Fairgen's platform solves it all by augmenting your dataset, rebalancing its classes, and removing its discriminatory patterns.
Increase your revenue by improving model performance, avoid regulatory fines, and become a pro-fairness company.
Want to be featured in next week's edition of the AI Ethics Brief? Reach out to masa@montrealethics.ai to learn about the available options!
📰 Article summaries:
The true cost of Amazon’s low prices
What happened: Set against the backdrop of the American Innovation and Choice Online Act (a US Senate bill), the article examines the impact that the low prices we all love have on suppliers of goods, and the anti-competitive practices Amazon engages in to make those prices possible. These include giving preferential treatment to its own manufactured goods (based on non-public seller data); the article extends this analysis to other fields Amazon operates in, such as AWS, which now faces renewed scrutiny from the FTC. Amazon is known to use dominance in one field to expand into others, and it currently has plays in healthcare, logistics, entertainment, and advertising, among others. On the shopping platform, Amazon coerces sellers through “fair pricing policies”, in the interest of “serving their customers”, not to price their goods significantly higher on Amazon than on any other platform.
Why it matters: The pricing policy leaves sellers with a choice: lose profitability or take their business elsewhere, a costly move when it means losing access to a very large swathe of buyers who shop almost exclusively through Amazon. This affects the rest of the ecosystem as well, since everyday consumers will be unwilling to buy elsewhere if they see that prices for the same product are significantly lower on Amazon, even when the seller might have to take a loss to remain listed on the service. Referral fees of up to 15%, along with the push towards optional services like Fulfillment by Amazon (FBA), can eat into margins as well. Declining FBA means sellers aren’t Prime-eligible, so they lose out on a lot of sales from the power users of Amazon who drive a large share of purchases.
Between the lines: Other bills, like the House’s Ending Platform Monopolies Act (without a Senate equivalent at the moment), would bring more expansive regulation and scrutiny onto the market dynamics at play with very large firms like Amazon. Given its total control over the platform, it is quite easy for Amazon to engage in anti-competitive behaviour like preferential placement of its own products, “free” advertising, bundling into “Buy Boxes”, and copying successful products using non-public seller data. And this is just the beginning: Amazon has single-handedly reshaped broader consumer expectations around pricing and convenience, which makes it hard for anyone without its scale (and there is no one with its scale!) to compete effectively and draw consumer dollars.
The Humanities Can’t Save Big Tech From Itself
What happened: The article offers an insightful peek into the dynamics at play within Big Tech today and the surrounding AI ethics ecosystem, which has trumpeted the call to the humanities to help solve thorny questions about the societal impacts of technology. These fields have intra-domain issues, varying motives, and histories that are problems of their own. For example, the infamous PredPol system was founded by an anthropologist from UCLA. Research from these fields is also repurposed at times to serve other needs: work from SAFELab funded by DARPA was initially meant to automate the detection of gang aggression online but was then expanded to investigate ISIS recruitment online. Researchers and activists from marginalized communities are called upon to help “fix” issues, but their suggestions are often summarily ignored. The article makes the case for greater public funding and meaningful support for these individuals and organizations so that they can actually achieve these goals. Lastly, diversity in these domains can itself be dismal: the author recounts that their experience of diversity has sometimes been better in tech than in these fields outside it.
Why it matters: There is a tremendous risk today that Big Tech slips into complacency behind surface-level diversity metrics in hiring, both in discipline and other markers. These indicators, while useful, don’t paint a complete picture of the actual changes being worked on, or of whether they meaningfully advance the goals we’ve set out to achieve in mitigating the harmful impacts of deploying these technologies.
Between the lines: This is an extremely incisive analysis that unearths the dynamics at play within the current ecosystem and the subtle, slow-burning implications they will have for how we actually address the societal impacts of technology. The key takeaway is that the people being called on to help with issues in Big Tech must actually be empowered to enact the changes they were brought in for, rather than bearing the undue burden of making those changes alone. What we need is a systematic overhaul that creates the conditions that make these changes possible, rather than forcing people to constantly struggle against them.
Climate change: Small army of volunteers keeping deniers off Wikipedia
What happened: Wikipedia is a prominent source of information; for example, it appears among the top results for many Google (or your other favourite search engine) queries. This gives it tremendous influence over those who seek information, especially in politically charged areas of discourse like climate change. The article details the efforts of small groups of volunteers, distributed across the world, who spend countless unpaid hours making sure that the information on pages covering this subject remains accurate. Such efforts are not without struggle, however, and these editors and page maintainers face ire from those who try to spread disinformation through these prominent pages.
Why it matters: Since these pages tend to be cited in arguments, keeping them scientifically and empirically grounded helps to blunt at least some of the efficacy of the disinformation campaigns waged online. They also potentially serve as foundational knowledge for those doing unguided research online as they enter a new space, meaning these pages can play a pivotal role in shaping core ideas and beliefs in a subject area.
Between the lines: Disinformation analysis usually focuses heavily on how disinformation spreads on social media platforms, but the war for the health of our information ecosystem is also being fought in this other arena. That such a massive initiative runs on so few resources and on the goodwill of volunteers who donate their time to keep these pages up to date is a testament to the good that internet technologies, done well, can bring. By no means is this the best model, though: the harrowing experiences and burnout of these folks, who remain unpaid, suggest perhaps treating such internet properties as public utilities deserving of some public funding, akin to the most critical open-source software propping up the internet’s infrastructure.
📖 From our Living Dictionary:
“Proxy Variables”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Seven AI ethics experts predict 2022’s opportunities and challenges for the field
From developing more human-centric AI to overcoming fragmented approaches to ethical AI development, here’s what experts told us.
Our founder, Abhishek Gupta, shares his view that the biggest advancement will be a formalization of bias audits and the biggest challenge will be reconciling different AI regulations across the world.
💡 In case you missed it:
The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms
Bucher explores the spaces where humans and algorithms meet. Using Facebook as a case study, she examines the platform users’ thoughts and feelings about how the Facebook algorithm impacts them in their daily lives. She concludes that, despite not knowing exactly how the algorithm works, users imagine how it works. The algorithm, even if indirectly, not only produces emotions (often negative) but also alters online behaviour, thus exerting social power back onto the algorithm in a human-algorithm interaction feedback loop.
To delve deeper, read the full summary here.
Take Action:
Keeping in line with our mission to democratize AI ethics literacy, we want to create new opportunities to feature writing from our community members!
If you would like to have your work featured on our website and included in this newsletter to reach thousands of technical and policy leaders in AI ethics, reach out to us!
We are currently looking for research summary writers and will open up more writing opportunities in the coming months for regular contributors.