Discover more from The AI Ethics Brief
AI Ethics Brief #121: Ideological alignment, three eras of ML, counter narratives against hate speech, and more.
Should we reimagine copyright law in the era of Generative AI?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
Should we reimagine copyright law in the era of Generative AI?
✍️ What we’re thinking:
Designed for us: FOSSY Day 1
Knowledge, Workflow, Oversight: A framework for implementing AI ethics
🤔 One question we’re pondering:
What can we as the AI ethics community do to better support artists as they embark on securing their rights against copyright and IP violations from Generative AI systems?
🔬 Research summaries:
Exploiting The Right: Inferring Ideological Alignment in Online Influence Campaigns Using Shared Images
Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study
Compute Trends Across Three Eras of Machine Learning
📰 Article summaries:
How New York Is Regulating A.I.
Ethics Teams in Tech Are Stymied by Lack of Support
The US Senate Wants to Rein In AI. Good Luck With That
📖 Living Dictionary:
What is the relevance of hallucinations in AI ethics?
🌐 From elsewhere on the web:
Hybrid Human-AI Conference: Day 1 of Summer in Munich
💡 ICYMI
Assessing the Fairness of AI Systems: AI Practitioners’ Processes, Challenges, and Needs for Support
🤝 You can now refer your friends to The AI Ethics Brief!
Thank you for reading The AI Ethics Brief — your support allows us to keep doing this work. If you enjoy The AI Ethics Brief, it would mean the world to us if you invited friends to subscribe and read with us. If you refer friends, you will receive benefits that give you special access to The AI Ethics Brief.
How to participate
1. Share The AI Ethics Brief. When you use the referral link below, or the “Share” button on any post, you'll get credit for any new subscribers. Simply send the link in a text, email, or share it on social media with friends.
2. Earn benefits. When more friends use your referral link to subscribe (free or paid), you’ll receive special benefits.
Get a 3-month comp for 25 referrals
Get a 6-month comp for 75 referrals
Get a 12-month comp for 150 referrals
🤗 Thank you for helping get the word out about The AI Ethics Brief!
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in an upcoming edition.
One of our community members raised the question “Should we reimagine copyright law in the era of Generative AI?” It sparked an interesting discussion with close collaborators of MAIEI who have backgrounds in technology law about the underlying value of that question.
Fundamentally, we think copyright law exists to address the theft of IP and the unfair appropriation of someone else's work, which robs creators of their due credit and rights. As is the case with any law, it can also be misused. For current Generative AI systems, one issue lies in the assembly and use of the training datasets behind their foundation models: the lack of consent and attribution, and most importantly, the lack of monetary compensation for the benefits derived. The other issue concerns the outputs these systems produce, which can be directly monetized, and their secondary use as training data for future systems. Should the same laws apply, or do we really need tweaks? For the former, the case is a lot clearer.
What do you think the approach should be to tackle the issue of copyright for generated data and its downstream use? Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Designed for us: FOSSY Day 1
Within computer science, the free and open-source software (FOSS) community has long provided an oasis for developers looking to protect the public good. The software that powers significant areas of our lives might be designed for us, understood by us, and tailored to our needs. Or… it might not. Open sourcing allows users to create something useful, then reap the benefits of collective intelligence (CI) as a decentralized community comes together to improve it. Bugs are found, features are added, and UIs are smoothed. Win-win-win.
At least two generations of developers, entrepreneurs, and other industry pros have been raised in the FOSS spirit. Emily is in Portland with many of them at FOSSY, the first annual conference of its kind.
To delve deeper, read the full article series here.
Knowledge, Workflow, Oversight: A framework for implementing AI ethics
In this article, the first in the series, the author describes the framework they are creating to help organizations develop AI more responsibly. In a nutshell, they recommend that organizations design policies and actions to increase their knowledge of AI ethics, integrate this knowledge into their workflows, and create oversight structures to keep themselves accountable.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
With the Senate Judiciary Committee's recent hearing on AI and copyright, we were happy to see artist Karla Ortiz bring first-hand experience and domain expertise to the hearing, grounding the discussion in the reality of the negative impacts artists face as a consequence of the proliferation of Generative AI systems and their associated copyright and IP issues. What can we as the AI ethics community do to better support artists as they embark on securing these rights?
We’d love to hear from you and share your thoughts back with everyone in the next edition:
🔬 Research summaries:
Exploiting The Right: Inferring Ideological Alignment in Online Influence Campaigns Using Shared Images
The US Department of Justice and others have alleged foreign interference in US politics through campaigns of malevolent online influence. Our recent work assesses the visual media shared by disinformation accounts from four of these online influence campaigns: Iran, Venezuela, Russia, and China. Our models demonstrate consistencies in the types of images these campaigns share, especially as they pertain to political ideologies, with each campaign tending toward conservative imagery.
To delve deeper, read the full summary here.
Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study
Many international institutions and countries are taking action to monitor, restrict, and remove online hate content. Still, results are not always satisfactory, and such efforts are often charged with censorship. An alternative approach that has emerged in recent years is based on the use of so-called Counter Narratives: de-escalating, fact-bound textual responses that refute hateful messages in a non-aggressive way. In this scenario, automation is needed to effectively address the sheer amount of hate produced daily. We therefore conducted a comparative study investigating the use of several AI-based neural language models for automatic counter-narrative generation, intended as a companion to NGO operators tackling online hate.
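To make the task concrete, here is a minimal sketch of how a pre-trained sequence-to-sequence model might draft a counter narrative for operator review. The model choice and prompt wording are illustrative assumptions on our part, not the setup evaluated in the paper:

```python
# Minimal, illustrative counter-narrative generation with a pre-trained
# language model. Model and prompt are assumptions for demonstration only.
from transformers import pipeline

# Any instruction-tuned seq2seq model could stand in here.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

hate_message = "Group X is ruining this country."
prompt = (
    "Write a polite, fact-based response that refutes the following "
    f"hateful message without being aggressive: {hate_message}"
)

# The output is a draft for an NGO operator to review and edit,
# not a reply to be posted automatically.
draft = generator(prompt, max_new_tokens=80)[0]["generated_text"]
print(draft)
```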
To delve deeper, read the full summary here.
Compute Trends Across Three Eras of Machine Learning
Compute required for training notable machine learning systems has been doubling every six months, growing by a factor of 55 million over the last 12 years. This paper curates a dataset of 123 ML models and analyzes their compute requirements. The authors find three distinct trends and explain them through three eras: the Pre-Deep-Learning Era, the Deep Learning Era starting in 2012, and a new Large-Scale trend emerging in 2016.
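As a back-of-the-envelope check on those figures, growth under a constant doubling time T follows a factor of 2^(months/T). The short sketch below, using only the numbers quoted above, shows that a 55-million-fold increase over 12 years corresponds to a doubling time of roughly 5.6 months, so the two quoted figures are rounded versions of the same trend:

```python
import math

# Growth under a constant doubling time T (in months): 2 ** (months / T).
months = 12 * 12  # 12 years

# A strict 6-month doubling time over 12 years yields roughly 16.8 million:
factor_6mo = 2 ** (months / 6)

# The doubling time implied by a 55-million-fold increase is ~5.6 months:
implied_doubling = months / math.log2(55e6)

print(f"factor at 6-month doubling: {factor_6mo:,.0f}")
print(f"implied doubling time: {implied_doubling:.1f} months")
```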
To delve deeper, read the full summary here.
📰 Article summaries:
How New York Is Regulating A.I.
What happened: New York City has implemented rules on the use of AI in hiring and promotions, with potentially life-changing consequences for job candidates and employees aiming for career advancement. These rules build upon a 2021 law that initially covered only city residents but is expected to influence national practices. Starting July 5, the city will enforce the law, requiring companies to inform job seekers in advance about their use of automated systems and mandating annual bias audits.
Why it matters: The law introduces an "impact ratio" to assess the effect of AI software on protected groups of job candidates rather than focusing on the explainability of algorithmic decision-making. This shift aims to address the algorithm's output rather than its internal workings.
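For readers curious about what a bias audit computes, here is a minimal sketch of a selection-rate impact ratio: each group's selection rate divided by the rate of the most-selected group. The applicant counts are hypothetical, and the exact metrics and categories are defined in the city's rules, not here:

```python
# Illustrative impact-ratio calculation with made-up numbers.
selected = {"group_a": 50, "group_b": 30}     # hypothetical counts
applicants = {"group_a": 100, "group_b": 100}

# Selection rate per group, then each rate relative to the highest rate.
rates = {g: selected[g] / applicants[g] for g in selected}
highest = max(rates.values())
impact_ratios = {g: rate / highest for g, rate in rates.items()}

print(impact_ratios)  # {'group_a': 1.0, 'group_b': 0.6}
```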
Between the lines: While the law has been seen as a regulatory success for ethical AI use, public interest advocates believe it has been weakened, and business groups consider it impractical. Critics argue that the law fails to push back against dangerous and discriminatory technology. Supporters acknowledge its flaws but see it as a starting point for regulation and a platform for learning and improvement.
Ethics Teams in Tech Are Stymied by Lack of Support
What happened: AI companies have faced criticism for their algorithms' discriminatory outcomes, leading to pledges of fairness and accountability. However, these promises are often viewed as mere "ethics washing." To investigate the implementation of ethics initiatives, Sanna Ali interviewed AI ethics workers from major tech companies, finding that these efforts face challenges in the industry's institutional environment. The study suggests that leadership should incentivize ethics integration in product development, support ethics teams, and establish bureaucratic structures for ethics reviews.
Why it matters: The research reveals inconsistencies in implementing responsible AI policies within the tech industry. Products are sometimes released without ethics team input for various reasons, including lack of collaboration, limited authority to mandate ethics reviews, and conflicts with other goals. Ali suggests that granting ethics teams formal authority, establishing bureaucratic structures for ethics reviews at the start of product development, and incentivizing collaboration between product and ethics teams can address these challenges.
Between the lines: Ethics workers face limitations in their interactions with product teams and need formal authority to address problems effectively. In a largely nonhierarchical industry, bureaucratic structures that require ethics reviews from the outset of product development, along with incentives such as bonuses for product teams that collaborate with ethics teams, can foster a culture of responsible technology development.
The US Senate Wants to Rein In AI. Good Luck With That
What happened: Congress is facing pressure to regulate AI despite many lawmakers not clearly understanding what they need to regulate. While there is bipartisan optimism and concern about AI's impact, the election-focused Congress may struggle to address AI before it transforms various aspects of society. The closed-door AI briefings and the proposal for a national AI commission demonstrate attempts to approach AI regulation in a bipartisan manner. Still, critics from the industry are raising concerns about hasty regulation.
Why it matters: Congress has historically hesitated to regulate the tech industry, but the current pressure and influence of tech giants are prompting lawmakers like Schumer to act. The closed-door briefings and the proposal for an AI commission aim to address AI regulation in a more comprehensive and bipartisan manner. However, there are concerns that rushing into regulation without fully understanding the implications and intricacies of AI could have unintended consequences and stifle innovation.
Between the lines: The bipartisan AI working group acknowledges the potential for comprehensive AI legislation, similar to the CHIPS and Science Act of 2022. The all-encompassing nature of AI presents a unique challenge for lawmakers, and the relevant committees are expected to engage in extensive deliberations to determine the appropriate regulations. The hope is to reach a consensus on various aspects of AI regulation and package them together in a comprehensive bill. However, the magnitude and complexity of AI make it a daunting task.
📖 From our Living Dictionary:
What is the relevance of hallucinations in AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Hybrid Human-AI Conference: Day 1 of Summer in Munich
Summer in Munich is hot and full of energy. Emily Dardaman is there attending the second International Conference on Hybrid Human-Artificial Intelligence (HHAI), where scholars from around the world gather to examine the latest research on how humans and AIs can work together safely, effectively, and ethically. Given all the hype about AI in the past few months, it might be tempting to write off human-AI interaction as a trend, but it is better thought of as bedrock for organizational strategy and individual career planning for years to come. The further AI is integrated into our lives, the trickier the questions we must answer about how our values are reflected in these systems!
To delve deeper, read the full article here.
💡 In case you missed it:
Assessing the Fairness of AI Systems: AI Practitioners’ Processes, Challenges, and Needs for Support
Various tools and processes have been developed to support AI practitioners in identifying, assessing, and mitigating fairness-related harms caused by AI systems. However, prior research has highlighted gaps between the intended design of such resources and their use within particular social contexts, including the role that organizational factors play in shaping fairness work. This paper explores how AI teams use one such process—disaggregated evaluations—to assess fairness-related harms in their own AI systems. We identify AI practitioners’ processes, challenges, and needs for support when designing disaggregated evaluations to uncover performance disparities between demographic groups.
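As a concrete illustration of the core computation, a disaggregated evaluation reports a performance metric separately for each demographic group rather than as a single aggregate score. This sketch uses hypothetical predictions and group labels purely for demonstration:

```python
from collections import defaultdict

# Hypothetical labels, predictions, and demographic group tags.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

correct, total = defaultdict(int), defaultdict(int)
for t, p, g in zip(y_true, y_pred, groups):
    total[g] += 1
    correct[g] += int(t == p)

# Per-group accuracy can reveal disparities an aggregate score would hide.
for g in sorted(total):
    print(g, correct[g] / total[g])  # a: 1.0, b: 0.5
```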
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as part of conference proceedings.