AI Ethics Brief #65: Generative Art to explore AI ethics, logic of strategic assets, deepfake of Anthony Bourdain, and more ...
What is Congress doing to crack down on Big Tech?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~14-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Exploring the under-explored areas in teaching tech ethics today
🔬 Research summaries:
The Logic of Strategic Assets: From Oil to AI
Building Bridges: Generative Artworks to Explore AI Ethics
📰 Article summaries:
What Is Congress’s Plan to Crack Down on Big Tech?
Targeted ads isolate and divide us even when they’re not political – new research
The Ethics of a Deepfake Anthony Bourdain Voice
Huge data leak shatters the lie that the innocent need not fear surveillance
But first, our call-to-action this week:
At the Montreal AI Ethics Institute, my staff and I have been spending a lot of time thinking about what hinders the movement from principles to practice in AI ethics, and we'd love to hear from you: which do you think is the bigger blocker that requires more immediate attention?
We’ve had a very engaging conversation so far with folks chiming in on the following question:
What has been a bigger challenge in moving from principles to practices in AI ethics?
(A) Organizational challenges
(B) Technical challenges
Please do participate, and learn from what the rest of the community thinks, here!
✍️ What we’re thinking:
Office Hours:
Exploring the under-explored areas in teaching tech ethics today
Join us again for some exciting new ideas on how to shape curriculum design in the tech ethics space. This month, Chris McClean shares his experience as the global lead for digital ethics at Avanade, and we are excited to learn more about how Avanade trains tech and business professionals to recognize the most pressing ethical challenges. And as always, please get in touch if you want to share your opinions and insights on this fast-developing field.
To delve deeper, read the full article here.
🔬 Research summaries:
The Logic of Strategic Assets: From Oil to AI
Does AI qualify as a strategic good? What does a strategic good even look like? This paper aims to provide a framework for answering both of these questions. One thing’s for sure: AI is not as strategic as you may think.
To delve deeper, read the full summary here.
Building Bridges: Generative Artworks to Explore AI Ethics
The paper outlines ways in which generative artworks could help narrow the communication gaps between different stakeholders in the AI pipeline. In particular, the authors argue that generative artworks could help surface different ethical perspectives, highlight mismatches in the AI pipeline, and aid in visualizing counterfactual scenarios and non-Western ethical perspectives.
To delve deeper, read the full summary here.
📰 Article summaries:
What Is Congress’s Plan to Crack Down on Big Tech?
What happened: Six bills introduced in the US Congress stem from a 16-month House Judiciary Committee investigation into the antitrust behaviour of tech giants. Two of the proposed bills are relatively uncontroversial and are expected to pass without much furore: one increases merger filing fees, and another limits the moving of antitrust cases from one state to another, a tactic tech companies have misused in the past to obtain more favorable jurisdictions and judiciaries while delaying proceedings and increasing the cost of a case. The other four bills are expected to raise quite a bit of fuss, since they target the antitrust and anti-competitive behaviour of tech giants: more stringent limits on companies favoring their own products and services on their platforms, limits on using insights from competitors' behaviour on those platforms to develop and promote their own offerings, limits on mergers and acquisitions that reduce market competition, and a push for greater interoperability and data portability between different services in the market, thus increasing consumer choice.
Why it matters: Each of these bills presents a solid case for what can be achieved through the legislative process in reining in Big Tech and ensuring that consumer welfare is kept top of mind in a world where monopolies abound and unethical behaviour is hard to control, especially given the strong network effects and platform lock-in of almost all the products and services we use.
Between the lines: While there is bipartisan support for the bills, it will be interesting to see how successful they are in creating an ecosystem where ethical and competitive practices become the norm rather than the exception. Adequate enforcement mechanisms will also need to be devised if these bills are to succeed.
Targeted ads isolate and divide us even when they’re not political – new research
What happened: When we think about divisive ads, political ads come to mind first. This article argues that commercial ads pose an equally pernicious threat to the epistemic integrity of our online information ecosystem. It draws on an interesting example from the London Underground: passengers complained to the regulator that an ad promoted unhealthy body stereotypes, prompting the regulator to take the ad down. Yet, out of hundreds of thousands of passengers, only 387 filed such a complaint; presumably some were stirred to act by the graffiti others had scrawled on those ads.
Why it matters: In the online world, we are neatly segmented into various categories (whether or not they accurately reflect us), which makes it difficult to gain the collective knowledge needed to understand whether some ads might be harming us without our even realizing it. In the London Underground example, by contrast, the harm manifested in the physical world where everyone could see it. Commercial ads can cause harm both through targeted messaging aimed at vulnerable populations, such as showing casino ads to gambling addicts, and through the omission of ads, say showing job postings to only a certain gender.
Between the lines: Given that most of our focus remains on tackling the problem of political ads on platforms, this article presents a compelling case for thinking more deeply about the impact that commercial ads have on us. Some policies and regulations apply, for example US rules around disability, housing, and employment that can make certain ads, or their omission, illegal, but for the most part this remains an understudied area. The problem is exacerbated by the fact that it is incredibly difficult to obtain the necessary information across a broad swathe of users without enrolling them in a study, which can cost a lot of money, as is the case with the Citizen Browser project from The Markup.
The Ethics of a Deepfake Anthony Bourdain Voice
What happened: In the documentary “Roadrunner,” about the life of Anthony Bourdain, segments of audio were synthesized using recordings of his real voice as training data. The words uttered in this synthetic voice were words he had actually written down. The use of synthetic media is rife with ethical troubles, as became evident from the backlash the producers of the documentary have faced since the film's release. Notably, people have also expressed concerns about the lack of disclosure that a synthetic voice was used, and about the flippancy with which those involved in making the film dismissed these concerns when they were first raised.
Why it matters: Synthetic media, notably deepfakes, is notorious for the harm it can cause. In some cases there are positive uses for deepfakes, as we have outlined in a previous edition of this newsletter. But when consent and intent are not clear, ethical qualms arise, especially for someone who passed away recently and who heavily emphasized authenticity as something he valued in his work.
Between the lines: We will see a rise in the use of synthetic media over time, especially as the technology becomes easier to use and the data available to train these systems becomes more widespread, as is the case with our growing digital footprints. Building awareness of what constitutes appropriate and inappropriate use of synthetic media, grounded in a careful study of the underlying ideas of disclosure, consent, and context that are essential to discussing its ethics in the first place, will make our discussions more informed and nuanced, rather than simply lionizing or demonizing its use.
Huge data leak shatters the lie that the innocent need not fear surveillance
What happened: NSO Group, a firm known to have sold surveillance software to organizations including governments around the world, has come under fire for its Pegasus software, which is under investigation for its implication in the surveillance of not just the typical targets of spycraft but also everyday citizens. Over the coming weeks, The Guardian’s investigative team, in partnership with other news organizations around the world, will release the names of people who have been targets of the software and the compromises it facilitated. The case they seek to make is that anybody is susceptible to these intrusions given our over-reliance on our phones, and that privacy, as a core tenet of functioning in our digital society, needs much more of our attention.
Why it matters: While there is perhaps some regulation of surveillance technology when government agencies deploy it in their intelligence operations (though a lot of that was debunked by the Snowden leaks in 2013), the operations of a player like NSO, and its ability to sell its tools and services to anyone on the market, change the equation significantly in terms of what privacy guarantees we can hope to have as individuals spending a chunk of our lives in the digital realm.
Between the lines: The seriousness of the matter is underscored by phrases in the article highlighting the precautions the staff behind this story took, including keeping their phones away during meetings with sources. They position this as potentially as monumental as their investigation and publication of the Snowden leaks, which moved the needle of public understanding of what spycraft capabilities exist and how they are used. Now that conversation will expand to include anyone in the world, innocent or not.
From our Living Dictionary:
‘Deepfakes’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
In case you missed it:
Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics
This paper outlines the role of “ethics owners,” a new occupational group in the tech industry whose job is to examine the ethical consequences of technological innovations. The authors highlight the competing logics these ethics owners have to navigate, and two ethical pitfalls that might result from those different imperatives.
To delve deeper, read the full summary here.
Take Action:
At the Montreal AI Ethics Institute, my staff and I have been spending a lot of time thinking about what hinders the movement from principles to practice in AI ethics, and we'd love to hear from you: which do you think is the bigger blocker that requires more immediate attention?
We’ve had a very engaging conversation so far with folks chiming in on the following question:
What has been a bigger challenge in moving from principles to practices in AI ethics?
(A) Organizational challenges
(B) Technical challenges
Please do participate, and learn from what the rest of the community thinks, here!