The AI Ethics Brief #50: AI Junkyard, algorithmic pricing, code work, policing free speech, and more ...
What impacts do Algorithmic Impact Assessments actually have?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
What we’re thinking:
From the Founder’s Desk: Tradeoff determination for ethics, safety, and inclusivity in AI systems
Sociology of AI Ethics: Algorithmic Impact Assessments: What impact do they have?
Anthropology of AI Ethics: The AI Junkyard: Thinking Through the Lifecycle of AI Systems
Research summaries:
Code Work: Thinking with the System in Mexico
Mapping the ethicality of algorithmic pricing
Article summaries:
New Algorithms Could Reduce Racial Disparities in Health Care (Wired)
Why Is Big Tech Policing Speech? Because the Government Isn’t (NY Times)
Hundreds of sewage leaks detected thanks to AI (BBC)
2021 Emerging Trends: How Artificial Intelligence Will Affect Creative Decisions (Forbes)
But first, our call-to-action this week:
We’re partnering with Women in AI and the University of New South Wales (UNSW) in Australia to host a discussion about democratising AI, disinformation, content moderation, AI in the APAC region and the region's responsibility in the AI debate.
📅 April 14th (Wednesday)
🕛 4:30 PM – 6:00 PM EST
🎫 Get free tickets
✍️ What we’re thinking:
From the Founder’s Desk:
Tradeoff determination for ethics, safety, and inclusivity in AI systems by Abhishek Gupta
" ... it is not just sufficient to identify tradeoff determinations. Yes, acknowledging that there is a problem is always the first step but we need to move beyond. The way to do that is to associate actionable remediation measures with each of the tradeoffs that you list. This helps the stakeholders break inertia and meaningfully act on the recommendations to improve system outcomes."
To delve deeper, read the full article here.
Sociology of AI Ethics:
Algorithmic Impact Assessments – What Impact Do They Have?
Algorithmic Impact Assessments (AIAs) are a useful tool to help AI system designers, developers, and procurers analyze the benefits and potential pitfalls of algorithmic systems. To be effective in addressing issues of transparency, fairness, and accountability, the authors of this article argue, the impacts identified in AIAs need to represent harms as closely as possible. Second, there must be accountability forums that can compel algorithm developers to make appropriate changes to AI systems in accordance with AIA findings.
To delve deeper, read the full summary here.
Anthropology of AI Ethics:
The AI Junkyard: Thinking Through the Lifecycle of AI Systems
Streaming services also do not operate in the same way in a given environment as, say, a ceiling fan or a washing machine. For a more precise understanding of the environmental and social impacts of streaming, calculations ought to include the energy costs of charging our laptops to keep streaming, securing access to high-speed Internet, upgrading devices and discarding old hardware, and so on.
To delve deeper, read the full summary here.
🔬 Research summaries:
Code Work: Thinking with the System in Mexico
Hackathons are now a global phenomenon in which talented youth take part in gruelling technical innovation marathons aimed at the world’s most pressing problems. In his ethnographic study of hackathons in Mexico, Hector Beltran illustrates how code work offers youth a set of technical tools for social mobility and a way of thinking and working within the larger political-economic system that determines their social position.
To delve deeper, read the full summary here.
Mapping the Ethicality of Algorithmic Pricing
Pricing algorithms can predict an individual’s willingness to buy and adjust the price in real-time to maximize overall revenue. Both dynamic pricing (based on market factors like supply and demand) and personalized pricing (based on individual behaviour) pose significant ethical challenges, especially around consumer privacy.
To delve deeper, read the full summary here.
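The summary above doesn’t specify any particular pricing formula, but the mechanics it describes — predicting a buyer’s willingness to pay and nudging the price toward it — can be sketched in a few lines. This is a toy illustration, not the method from the paper; the function name, the linear blend, and the price band are all assumptions made for the example.

```python
# Toy personalized-pricing sketch (illustrative only, not from the paper):
# blend a base price with a predicted willingness-to-pay (WTP), then
# clamp the result so the quoted price stays within a set band.

def personalized_price(base_price: float,
                       predicted_wtp: float,
                       floor: float,
                       ceiling: float,
                       weight: float = 0.5) -> float:
    """Quote a price between base_price and the buyer's predicted WTP.

    weight=0 ignores the prediction (everyone pays base_price);
    weight=1 charges each buyer their full predicted WTP.
    """
    raw = (1 - weight) * base_price + weight * predicted_wtp
    return max(floor, min(ceiling, raw))

# A buyer predicted to tolerate higher prices is quoted more:
print(personalized_price(100.0, 140.0, 80.0, 130.0))  # 120.0
# The ceiling caps how far the algorithm can push the price:
print(personalized_price(100.0, 300.0, 80.0, 130.0))  # 130.0
```

Even this toy version makes the ethical issue concrete: the `predicted_wtp` input has to come from somewhere, and that somewhere is usually data about individual behaviour — exactly the privacy concern the summary raises.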
📰 Article summaries:
New Algorithms Could Reduce Racial Disparities in Health Care (Wired)
What happened: An AI system for analyzing knee-radiology images was found to produce better outcomes for patients when trained using patient feedback, rather than the doctor’s determination, as the ground-truth label. It was able to uncover blind spots doctors had in reading those images, which stem from historical bias in how the assessment is done and which factors are used in making it.
Why it matters: Healthcare automation is mostly seen today as a source of bias, not as a tool that can help improve patient outcomes. So flipping the script on how these systems are trained in the first place, leading to better outcomes for Black patients who were being under-diagnosed and under-treated, is a great step forward.
Between the lines: As we see more scrutiny of the way AI systems are deployed in the real world and unearth more failure scenarios, I suspect that innovative ways of addressing bias and thinking outside the box will lead to more successful outcomes than relying on purely technical measures alone.
Why Is Big Tech Policing Speech? Because the Government Isn’t (NY Times)
What happened: A reflection on the events of early 2021, when #stormthecapital led to real-world harm that originated in organized efforts on the Twitter alternative Parler. The subsequent moves by Google and Apple to ban the app from their app stores, and by Amazon to deny Parler its cloud services, stopped the social media site in its tracks. It was a decisive intervention by the tech giants to stop harm and violence from spreading further during a tumultuous time in US history.
Why it matters: While the action makes sense in the short run, such “de-platforming” has raised deeper questions about who should be the arbiter of such decisions. When Trump was removed from Twitter, German Chancellor Angela Merkel warned that it could set the wrong precedent: such actions should stem from laws (as would be the case in Germany), whereas here the decision was shaped by the political climate (though both Facebook and Twitter deny that it was).
Between the lines: There is discomfort in having the government regulate speech online, just as there is with large corporations doing so. There is also discomfort when nothing is done at all, making this a particularly hard challenge to solve. Balancing the public good against freedom of speech needs to be done in a way that serves democracy, not one that stands in isolation.
Hundreds of sewage leaks detected thanks to AI (BBC)
What happened: Highlighting an innovative use of AI, the article points to research that detected sewage spilling from treatment plants into nearby bodies of water. The system was trained on data capturing how water flowed through plants operating normally and through one operating erratically. Through this process, the system was able to pinpoint leaks that had previously gone undetected.
Why it matters: Government agencies charged with enforcing environmental policies often find themselves understaffed and forced to cut corners to do the best they can with the resources they have. Automating detection, so that leaks are flagged and human resources can be directed toward the most problematic cases, can help create stronger adherence to these policies.
Between the lines: I am particularly excited to see how such systems generalize beyond the use case in the UK and if there are adaptations required to work in countries where the treatment plants are different from the ones in the UK. There is immense potential for developing nations that face egregious violations of environmental policies to do better monitoring and enforcement of their regulations through the use of AI.
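The article doesn’t describe the model itself, but the core idea it reports — learning what normal flow looks like and flagging departures from it — can be sketched with a simple z-score detector. This is a stand-in for whatever the researchers actually used; the function names, the readings, and the three-standard-deviation threshold are all assumptions for illustration.

```python
# Minimal anomaly-detection sketch (a stand-in for the actual model):
# summarize flow readings from a normally operating plant, then flag
# readings that deviate by more than `threshold` standard deviations.
from statistics import mean, stdev

def fit_normal_flow(readings):
    """Summarize flow at a normally operating plant as (mean, std dev)."""
    return mean(readings), stdev(readings)

def flag_anomalies(readings, mu, sigma, threshold=3.0):
    """Return the indices of readings that look erratic."""
    return [i for i, r in enumerate(readings)
            if abs(r - mu) > threshold * sigma]

normal = [100, 102, 98, 101, 99, 100, 103, 97]  # hypothetical flow data
mu, sigma = fit_normal_flow(normal)
observed = [101, 99, 100, 60, 102]  # a sudden drop in flow
print(flag_anomalies(observed, mu, sigma))  # [3]
```

The appeal for understaffed agencies is visible even in this toy: once the “normal” profile is fit, screening new readings is cheap, so human inspectors only need to visit the plants the detector flags.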
2021 Emerging Trends: How Artificial Intelligence Will Affect Creative Decisions (Forbes)
What happened: The infusion of AI capabilities into software like Luminar and the Adobe Creative Suite has led to a flurry of discussion about what it means for artists and their work. The trends: the creative industry is far more data-driven than before, the barriers to producing high-quality content are lower, content creation itself has been commoditized, and, given that commoditization of common skills, the emphasis will shift to more unique creations.
Why it matters: The trends identified here are important because they indicate the complementarity in the use of AI alongside human creatives. This bucks the trend compared to conversations that center on alarmist portrayals of “robots are going to take our jobs” and present more realistic scenarios of what might actually come to pass.
Between the lines: These trends are important to identify correctly so that the skills being imparted in the education system align with the actual capabilities and limitations of the technology. Humans will continue to play an important role in the creative ecosystem, but the bundle of tasks within their jobs is going to be transformed by the introduction of AI into various tools.
From our Living Dictionary:
‘Algorithmic pricing’
Algorithmic pricing is the practice of automatically altering the listed price of a good or service as a function of available data.
👇 Learn more about why it matters in AI ethics through our Living dictionary.
Explore the Living Dictionary!
From elsewhere on the web:
AI Ethics AMA with our founder Abhishek Gupta
The purpose of this space is to host the kind of community insights that typically emerge in friendly exchanges, yielding meaningful solutions to some of the toughest challenges we face in the field today. The session will be a 25-minute conversation on Saturday at noon ET.
The goal of this event is to share the evolution of research ideas through specific examples of negative results, retrospectives, and project post-mortems.
Guest post:
Artificial Intelligence and Healthcare: From Sci-Fi to Reality by Marcel Hedman
This past year has been filled with huge disruption due to the onset of COVID-19. However, this has not detracted from major leaps in the world of artificial intelligence, including algorithms that determine the 3D structure of proteins, like DeepMind’s AlphaFold 2, and huge systems that can generate original text or even code. We are quickly entering an age where concepts once reserved for sci-fi movies seem slightly more possible. While it is very clear that we are far from realising all-knowing AI systems (AKA general AI), that should not detract from the scale of the technology being developed. The advances in systems that perform a single task extremely well have been great to see.
To delve deeper, read the full article here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Take Action:
Events:
We’re partnering with Women in AI and the University of New South Wales (UNSW) in Australia to host a discussion about democratising AI, disinformation, content moderation, AI in the APAC region and the region's responsibility in the AI debate.
📅 April 14th (Wednesday)
🕛 4:30 PM – 6:00 PM EST
🎫 Get free tickets
Self-disclosure for an AI product: A practical workshop and feedback session
We’re partnering with Open Ethics to host a discussion about self-disclosure for participants to learn about, and for companies to demonstrate their AI product self-disclosure process. This includes looking at how an AI product was built and the downstream effects that may have on people using it.
📅 May 13th (Thursday)
🕛 1:00 PM – 2:30 PM EST
🎫 Get free tickets
The AI Ethics Learning Community
We're halfway through our 8-week cohort-based learning community. Come watch & hang out at the next one — we meet on Wednesdays at 5 PM EST.
📅 April 7th (Wednesday)
🕛 5:00 PM – 6:30 PM EST
🎫 Get free tickets