The AI Ethics Brief #51: Teaching tech ethics, disaster city digital twin, creativity in the AI era, and more ...
How can AI help companies set prices more ethically?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you're not already signed in to Substack, you'll be asked to enter your email address again. Please do so, and you'll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Office Hours: We interviewed 3 experts who teach Tech Ethics. Here’s what we learned. by Marianna Ganapini
🔬 Research summaries:
Creativity in the Era of Artificial Intelligence
Disaster City Digital Twin: A Vision for Integrating Artificial and Human Intelligence for Disaster Management
AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection
📅 Event summaries:
The Abuse and Misogynoir Playbook, explained
📰 Article summaries:
The Little-Known Data Broker Industry Is Spending Big Bucks Lobbying Congress (The Markup)
How AI Can Help Companies Set Prices More Ethically (Harvard Business Review)
China Poised To Dominate The Artificial Intelligence (AI) Market (Forbes)
We finally know how bad for the environment your Netflix habit is (Wired)
But first, our call-to-action this week:
Help us improve this newsletter — give us feedback!
We just want to know 4 things:
Why do you read this newsletter?
What do you like most about it?
What would you like more of?
What should we do less of, or stop doing?
✍️ What we’re thinking:
Office Hours:
We interviewed 3 experts who teach Tech Ethics. Here’s what we learned. by Marianna Ganapini
Last time we asked you about best practices in teaching Tech Ethics. Now we bring to you the ideas, experiences and suggestions of 3 thought leaders with a long track record in developing Tech Ethics curricula:
Karina Alexanyan (15 years of experience at the intersection of social science/tech/media/education)
Philip Walsh (Teaches philosophy at Fordham University)
Daniel Castaño (Professor of Law & Founding Director at the Center for Digital Ethics at Universidad Externado de Colombia)
They will tell us about their teaching philosophies, their course content, and why they think teaching Tech Ethics is so important!
To delve deeper, read the full article here.
🔬 Research summaries:
Creativity in the Era of Artificial Intelligence
The current focus on mimicking human capabilities at the intersection of creativity and AI is counterproductive and an underutilization of the potential that AI has to offer. This paper details the various aspects of creativity from a value-creation and process perspective, highlighting where AI might be a useful mechanism to augment rather than replace our abilities.
To delve deeper, read the full summary here.
Disaster City Digital Twin: A Vision for Integrating Artificial and Human Intelligence for Disaster Management
With the threat of a climate crisis looming larger every day, the search for solutions is essential. One such solution is the construction of a disaster city digital twin: a digital replica of a physical city on which governments can run simulations of different disasters. Doing so can make the effects of a climate crisis more tangible, whether through flooding or unusual weather, and AI has a key part to play in making this a reality.
To delve deeper, read the full summary here.
AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection
While recent studies suggest that a global consensus about AI ethics is emerging, this paper finds meaningful differences across the public, private, and NGO sectors. Our team evaluated 112 AI ethics documents from 25 countries, scoring the presence or absence of 25 ethical concepts, the documents’ level of engagement with law and regulation, and how participatory the authorship process was. Overall, NGO documents reflect more ethical breadth, public sector documents prioritize economic growth and unemployment, and private sector documents emphasize client-facing issues and technical fixes.
To delve deeper, read the full summary here.
📰 Article summaries:
The Little-Known Data Broker Industry Is Spending Big Bucks Lobbying Congress (The Markup)
What happened: The Markup revealed that data brokers spend lobbying money that rivals the spending of large tech corporations. Perhaps unsurprisingly, a lot of those lobbying dollars went towards bills addressing privacy and AI. Many of these data brokers are not household names, but they play an outsized role in how the entire data economy operates.
Why it matters: People need greater visibility into how such companies operate and the impact their lobbying efforts have, so that our activism and other efforts are well-aligned. Given that these companies are required to self-declare as data brokers only in California and Vermont, there is a dire need to bring their activities to light.
Between the lines: As we start to see more privacy laws enacted in different parts of the world, we need supporting mechanisms to hold companies like data brokers accountable that form the fabric of the entire data economy in the first place. Public awareness and education are going to be key drivers towards creating an ecosystem that is tuned for public welfare rather than private profits.
How AI Can Help Companies Set Prices More Ethically (Harvard Business Review)
What happened: The article discusses the role AI is playing in hyper-personalized pricing, which can lead to discrimination, and provides a few concrete examples of how companies can keep products and services accessible during uncertain times. It offers a useful 3-step framework, covering what you are selling and how you are selling it, to help companies operate in a way that favours socially beneficial outcomes.
Why it matters: The recent electricity outages in Texas, and the COVID-19 shortages (think toilet paper and webcams) through 2020 and 2021, have made it clear that dynamic pricing can have horrific downstream impacts when systems are optimized only for profitability, nudging them towards preying on people's vulnerabilities during difficult situations.
Between the lines: We often talk here about how AI is causing harm but perhaps there are ways for it to act as a diagnostic tool in surfacing where inadvertent harms are being inflicted. Acting on those insights to improve the organization’s contribution to society can be a great step in the right direction.
China Poised To Dominate The Artificial Intelligence (AI) Market (Forbes)
What happened: ByteDance, the company behind TikTok, is said to be valued at more than $180bn, which firmly puts it at the top of the world's unicorns. Yet its renown in the Western world is fairly limited. The rise of China's technology ecosystem, including strong AI companies like Baidu, Alibaba, Tencent, and of course ByteDance, makes it a strong contender for AI supremacy alongside the US. That said, we should be careful about framing the development of AI as an adversarial undertaking.
Why it matters: Three key factors are driving this: a massive domestic market with an appetite for automation, a clear national strategy backed by the resources and willpower to realize it, and strong import and export controls that limit competition in the domestic market, allowing these companies to flourish. While blindly emulating these factors would be a bad strategy, decisive action at the national level will be key for other countries looking to harness the potential of AI for their own economies. The recent NSCAI report pushes in that direction for the US (MAIEI contributed to that report).
Between the lines: There are several factors like immigration policies and openness to exchanging with academic and industrial labs that will become essential cogs in the success of any national strategy to level up their AI capabilities.
We finally know how bad for the environment your Netflix habit is (Wired)
What happened: Recent attention to the environmental impact of cloud infrastructure and the services delivered through it has prompted companies like Netflix to seek a better understanding of their carbon footprint. Using a tool called DIMPACT, Netflix found that streaming an hour of its content emits CO2eq comparable to driving a car for a quarter of a mile. The tool's primary contribution is that it provides much more accurate estimates of the carbon footprint of these services.
Why it matters: This subfield is currently rife with misinformation about the actual impact of these services. This research sheds light on what is actually happening, which is essential for designing the right kind of interventions. That doesn't mean Netflix has no responsibility to optimize how it delivers its services, but it helps us get a better grasp on what needs to be done.
Between the lines: We need to consider the alternatives that people engage in if they are not watching Netflix. Replacing your Netflix habit with walking is different from replacing it with a late-night drive in your SUV. Context and accurate estimates will help us make more informed choices.
From our Living Dictionary:
‘Deepfake’
Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.
👇 Learn the problems they present via our Living Dictionary.
Explore the Living Dictionary!
📅 Event summaries:
The Abuse and Misogynoir Playbook, explained
This is an edited transcript of The State of AI Ethics Panel we hosted on March 24th, where we discussed The Abuse and Misogynoir Playbook (the opening piece of our latest State of AI Ethics Report) and the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google.
To delve deeper, read the full event summary here.
In case you missed it:
Roles for Computing in Social Change
This paper highlights the increasing dissonance between computational and policy approaches to addressing social change. Specifically, it calls out how computational approaches are viewed as exacerbating society's social ills. But the authors point out how computing might be used to focus and direct policymaking to better address social challenges. In particular, they point to technology as a medium for formalizing social challenges. This approach makes the inputs, outputs, and rules of a system explicit, which creates opportunities for intervention. It also has the benefit of translating high-level advocacy work into more concrete, on-the-ground action.
Computational approaches can also serve as a method for rebuttal, empowering stakeholders to question and contest design and development choices. They present an opportunity to shed new light on existing social issues, thus attracting more resources to redressal mechanisms. From a practitioner's standpoint, computational approaches provide diagnostic abilities useful for producing metrics and outputs that showcase the extent of social problems. While such methods don't absolve practitioners of their responsibilities, they provide them and other stakeholders with the requisite information to act on the levers that bring about change in the most efficacious manner possible.
To delve deeper, read the full summary here.
Take Action:
Events:
We’re partnering with Women in AI and the University of New South Wales (UNSW) in Australia to host a discussion about democratizing AI, disinformation, content moderation, AI in the APAC region and the region's responsibility in the AI debate.
📅 April 14th (Wednesday)
🕛 4:30 PM – 6:00 PM EST
🎫 Get free tickets
The AI Ethics Learning Community
Just 3 weeks left in our 8-week cohort-based learning community. Come watch & hang out at the next one — we meet on Wednesdays at 5 PM EST.
📅 April 14th (Wednesday)
🕛 5:00 PM – 6:30 PM EST
🎫 Get free tickets