The AI Ethics Brief #44: Donkey internet, AI managing employees, algorithmic imaginaries, and more...
Do you know of a researcher or practitioner in the community whose work deserves more attention?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
What we are thinking:
From the Founder’s Desk: Introduction to ethics in the use of AI in war: Part 3
Opinion: The Artificiality of AI – Why are We Letting Machines Manage Employees?
The Sociology of AI Ethics: The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms (Research Summary)
Research summaries:
Algorithmic content moderation: Technical and political challenges in the automation of platform governance
Event summaries:
Can we engineer ethical AI?
Article summaries:
Where the internet was delivered by a donkey (Rest of World)
AI needs to face up to its invisible-worker problem (MIT Tech Review)
Fighting AI bias needs to be a key part of Biden’s civil rights agenda (Fast Company)
How AI could spot your weaknesses and influence your choices (The Next Web)
But first, our call-to-action this week:
Nominate underrecognized people in AI ethics to be featured in our next report!
We are inviting the AI ethics community to nominate researchers, practitioners, advocates, and community members in the domain of AI ethics to be featured in our upcoming State of AI Ethics report.
There is often great work being done in different parts of the world that does not get the attention it deserves due to the state of our information ecosystem and the manner in which platforms surface content. We would like to break that mold and shed some light on the valuable work being done by talented people.
✍️ What we’re thinking:
From the Founder’s Desk:
Introduction to ethics in the use of AI in war: Part 3 by Abhishek Gupta
Building on Part 1 and Part 2 of this article, let's dive into some more ideas in the discussion of ethics in the use of AI in war.
In Part 1, I covered:
the basics: autonomous weapons systems, semi-autonomy, full autonomy, lethal use, and non-lethal use
the potential advantages and costs
In Part 2, I covered:
Current limitations of ethics principles
Key issues - Part 1
If you haven't had a chance to read the first and second parts yet, I strongly encourage you to do so, as we will build on the definitions explained there to discuss the issues in this one.
Let's dive into:
Key issues - Part 2
Open Questions
To delve deeper, read the full article here.
Opinion:
The Artificiality of AI – Why are We Letting Machines Manage Employees? by Alexandrine Royer
Algorithms already heavily mediate several aspects of our daily lives, from where we decide to eat and how we get from point A to B, to what news we see and how we organize our day. As Peter Sondergaard, senior vice president at Gartner, observed: “Amazon’s algorithm keeps you buying. Netflix keeps you watching. And newer algorithmic applications like Waze keep you moving… I now have so many smart devices, that the only thing that is not smart, is me.”
Despite the enthusiasm for data-driven business development and management, we must not lose sight of the fact that algorithmic systems are human-made. Why, then, are we blindly trusting algorithmic management systems to be in charge?
To delve deeper, read the full article here.
The Sociology of AI Ethics:
The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms (Research Summary)
Bucher explores the spaces where humans and algorithms meet. Using Facebook as a case study, she examines users’ thoughts and feelings about how the Facebook algorithm affects their daily lives. She concludes that, despite not knowing exactly how the algorithm works, users imagine how it works. The algorithm, even if indirectly, not only produces emotions (often negative) but also alters online behaviour; users, in turn, exert social power back onto the algorithm in a human-algorithm interaction feedback loop.
To delve deeper, read the full summary here.
🔬 Research summaries:
Algorithmic content moderation: Technical and political challenges in the automation of platform governance by Robert Gorwa, Reuben Binns, and Christian Katzenbach
The paper provides a comprehensive overview of the existing content moderation practices and some of the basic terminology associated with this domain. It also goes into detail on the pros and cons of different approaches and the difficulties that continue to be present in the field despite the introduction of automated content moderation. Finally, it shares some of the future directions that are worthy of our attention to come up with even more effective content moderation approaches.
To delve deeper, read the full summary here.
Event summaries:
Can we engineer ethical AI?
This event recap was co-written by Connor Wright (our Partnerships Manager), Alexandrine Royer (our Educational Program Manager), and Muriam Fancy (our Network Engagement Manager), who co-hosted our “Can We Engineer Ethical AI” virtual meetup in partnership with NEOACM earlier in February.
To delve deeper, read the full summary here.
📰 Article summaries:
Where the internet was delivered by a donkey (Rest of World)
This article dives into an innovation called the Ilimbox: a small, tissue-box-sized device onto which a sliver of the internet is downloaded and then lugged to some of the most remote parts of the world where internet access is a challenge. The device stores articles from Wikipedia and educational content from YouTube and is then physically transported atop a donkey by an organization called the Internet Society to remote villages in Kyrgyzstan that don’t have access to the internet or electricity. The country is particularly challenging from a geographic standpoint because of its mountainous terrain.
COVID-19 made the need to reach remote parts of the world even more acute as traditional education faced headwinds. What caught my attention here is that the volunteers running this effort chose to download content in the local language, in addition to English and Russian, to make it more accessible to students in those places. But local-language material constitutes only a small amount of the content because many articles on websites like Wikipedia have never been translated. Hopefully, as NLP progresses, we’ll be able to open up even more content in local languages to accelerate educational efforts, harnessing AI in a positive way and overcoming the dominance of a single-language internet.
AI needs to face up to its invisible-worker problem (MIT Tech Review)
Articulating the lack of recognition and fair compensation for the gig workers behind the miracles of modern-day AI, the article provides insights into the pervasiveness of this kind of work and how many people depend on it for their livelihood. Supervised machine learning approaches require large amounts of labelled data, often sourced from platforms like Amazon Mechanical Turk, where workers toil for abysmal wages (~$2/hour) smoothing out the rough edges of AI systems so that we don’t see them fail. Yet these workers don’t receive many of the protections that standard workforce participants would get.
What is appalling is that some of the richest AI companies are the ones contracting these workers without paying them adequately; there is a mismatch between the effort the tasks require and the wages workers receive. The researcher interviewed in the article is building tools and raising awareness to help these workers voice their concerns and, at the same time, better understand what they are signing up for. Finally, something that needs a lot of emphasis is that such work doesn’t build skills that can be utilized elsewhere and is often a roadblock to workers moving on to more meaningful work that can lead them to better lives.
Fighting AI bias needs to be a key part of Biden’s civil rights agenda (Fast Company)
With the change of political landscape in the US, the Algorithmic Accountability Act, first brought forth by Senators Wyden and Booker, might be the first legislation to enshrine some protections for AI ethics issues in US law. While it isn’t without shortcomings, for example the lack of transparency requirements for algorithmic audits, the proposed bill still represents a great first step toward regulation at the federal level. Current legislation that penalizes discrimination in hiring, for example, is still weak in terms of actual teeth for regulating algorithmic hiring practices. One of the strongest calls to action for such emergent regulation is for it to regulate the appropriate thing and have sufficient teeth to hold companies accountable when they run afoul of it.
Agencies like the EEOC and FTC in the US are well-positioned to take some of the tools that would emerge from such a regulation and put them into practice against companies that violate these norms. With the Biden administration’s emphasis on science and technology, for example, the elevation of the OSTP office to a cabinet position and appointment of Alondra Nelson as the science and society officer, the atmosphere seems ripe to push through such an Act to lay the groundwork for future work in this domain.
How AI could spot your weaknesses and influence your choices (The Next Web)
Leveraging experiments conducted by Data61 in Australia, the article points to how AI systems can detect patterns in human behaviour and steer people in directions that maximize the achievement of the system’s own goals. In simulated games, where the AI acted as a trustee receiving money from a human participant, and where participants had to click when presented with particular patterns of shapes, the system was able to find patterns that caused the humans to make mistakes more frequently.
Dark design patterns do the same thing from a user interface and user experience standpoint when you have auto-scrolling and auto-play of videos that compel you to continue spending time on platforms beyond what might be in your own interest. The use of AI might just accelerate the use of such approaches to achieve goals that are in the interest of those who are building these systems as opposed to those of the users. It strengthens the case for higher levels of transparency and accountability in the design, development, and deployment of these systems.
From our Living Dictionary:
Definition of ‘Ethical Debt’
Ethical debt refers to the design, development, deployment, and use of an AI system by an agent or corporation without adequately considering the ethical issues surrounding said system. In this sense, as each decision is made within this process, the ethical considerations that are not taken into account accumulate as "debt" to be "paid back" by some other person, group, or entity once the system has been deployed.
👇 Learn more about the relevance of ‘ethical debt’ and more in our Living dictionary.
Explore the Living Dictionary!
From elsewhere on the web:
Seis formas de aprendizaje automático que amenazan la justicia social (Collateral Bits)
A Spanish translation of an article found on our blog, “6 Ways Machine Learning Threatens Social Justice”.
How can human-centered AI fight bias in machines and people? (MIT Sloan)
With human-centered AI, algorithms and humans can work together to compensate for blind spots and create clearer outcomes.
The challenges, successes, progression & failures of processing in AI (REWORK)
This white paper explores the challenges, limitations, and future of data use, discovery, and availability. Chapters include: “Processing Limitations on Enterprise AI - Is GPT-3 the Ultimate Solution?”, “Data Roadblocks in ML & AI”, “Data Limitations in Common Industry and Non-Profit Applications”, and more.
Montreal, Centre of the A.I. World (#4 The City of Ethics podcast)
Our founder Abhishek Gupta was featured in this podcast discussing why AI ethics are a unique part of Montreal's DNA.
Guest post:
Introduction To Ethical AI Principles by Cleber Ikeda
In this article, I will explore the following ethical AI principles and how they can be put into action:
Fairness
Accountability
Human agency
Transparency
Privacy
Respecting human rights
To delve deeper, read the full article here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
Beyond a Human Rights Based Approach To AI Governance: Promise, Pitfalls and Plea by Nathalie A. Smuha
This paper by Nathalie A. Smuha explores how human rights can form a solid foundation for AI governance frameworks, but it cautions against over-relying on them when deciding how to structure such a framework and what its actual components should be. The author highlights how the EC Trustworthy AI guidelines successfully utilized a human rights foundation to advocate for building legal, ethical, and robust AI systems. While moral objectivism might seem like a great idea for creating a universal framework, there remains value in a relativistic perspective, where nuances of culture and context in different places can be adequately represented so that the proposed framework is more in line with the expectations of the people living in that jurisdiction.
Arguments against using human rights center on them being too Western, individualistic, and abstract, but the author provides adequate justification for why these are weak arguments. In fact, the most often cited problem, that human rights are abstract, is a boon: they can be applied to novel circumstances without much modification, though they remain subject to interpretation. With sufficient exercise of those principles, they often get enshrined in law as rules, which, though they can be inflexible, still offer concrete guidance that can serve as constituent parts of an AI governance framework. The paper also posits that it will be important for people in both law and technology to know the specificities of each other’s domains better in order to build frameworks that are meaningful and practical.
To delve deeper, read the full summary here.
Take Action:
Nominate underrecognized people in AI ethics to be featured in our next report!
We are inviting the AI ethics community to nominate researchers, practitioners, advocates, and community members in the domain of AI ethics to be featured in our upcoming State of AI Ethics report.
There is often great work being done in different parts of the world that does not get the attention it deserves due to the state of our information ecosystem and the manner in which platforms surface content. We would like to break that mold and shed some light on the valuable work being done by talented people.
Events:
The state of AI ethics in Spain and Canada (El estado de la ética IA en España y Canadá)
We’re partnering with OdiseIA to discuss the state of AI ethics in Canada and in Spain. The discussion will span topics including country-specific regulations, commonalities across both countries, and the type of federal policies that will be needed to move the needle.
📅 February 26th (Friday)
🕛12 PM - 1:30 PM EST
🎫 Get tickets