AI Ethics Brief #126: Shadow AI, beyond bias and discrimination, national registry for AI, and more.
What are some approaches that can increase transparency in AI systems that are being developed or used within an organization?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
Can Generative AI really be creative?
✍️ What we’re thinking:
Beware the Emergence of Shadow AI
The Evolution of the Draft European Union AI Act after the European Parliament’s Amendments
🤔 One question we’re pondering:
What are some approaches that can increase transparency in AI systems that are being developed or used within an organization?
🔬 Research summaries:
Equal Improvability: A New Fairness Notion Considering the Long-term Impact
On the sui generis value capture of new digital technologies: The case of AI
Beyond Bias and Discrimination: Redefining the AI Ethics Principle of Fairness in Healthcare Machine-Learning Algorithms
📰 Article summaries:
It’s Time to Create a National Registry for Large AI Models
Copyright Fair Use Regulatory Approaches in AI Content Generation
The AI Crackdown Is Coming
📖 Living Dictionary:
What is open source?
🌐 From elsewhere on the web:
How to Make Generative AI Greener
💡 ICYMI
Attacking Fake News Detectors via Manipulating News Social Engagement
🤝 You can now refer your friends to The AI Ethics Brief!
Thank you for reading The AI Ethics Brief — your support allows us to keep doing this work. If you enjoy The AI Ethics Brief, it would mean the world to us if you invited friends to subscribe and read with us. If you refer friends, you will receive benefits that give you special access to The AI Ethics Brief.
How to participate
1. Share The AI Ethics Brief. When you use the referral link below, or the “Share” button on any post, you'll get credit for any new subscribers. Simply send the link in a text, email, or share it on social media with friends.
2. Earn benefits. When more friends use your referral link to subscribe (free or paid), you’ll receive special benefits.
Get a 3-month comp for 25 referrals
Get a 6-month comp for 75 referrals
Get a 12-month comp for 150 referrals
🤗 Thank you for helping get the word out about The AI Ethics Brief!
🚨 The Responsible AI Bulletin
We’ve restarted our sister publication, The Responsible AI Bulletin, as a fast digest every Sunday for those who want even more content beyond The AI Ethics Brief (our lovely power readers 🏋🏽, thank you for writing in and requesting it!).
The focus of the Bulletin is to give you a quick dose of the latest research papers that caught our attention in addition to the ones covered here.
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in an upcoming edition.
Here are the results from the previous edition for this segment:
An equal split down the line for last week’s question, which highlights the difficulty the field faces today in arriving at a consensus on how we should govern AI systems, particularly when they are not beholden to any geographical constraints (beyond jurisdictional requirements like the GDPR in the EU).
Moving on to a question that a couple of readers asked us this past week: “Can Generative AI really be creative?”
Systems like DALL-E 2 can output novel visual art based on different input prompts, which can be said to demonstrate a certain level of creativity. Yet that novelty comes from recombining bucketloads of training data drawn from large-scale datasets scraped from the Internet, often without consent from the artists.
Similarly, text-generation systems like ChatGPT, now extended with plug-ins, demonstrate a huge range of possible outputs, and that unpredictability can look like a spark of creativity. Yet, ultimately, they are still using statistical techniques to predict the most likely (and most “useful”) next token so that the output meets the requirements posed by the user. They also suffer from the same problem of having ingested huge swathes of public and private information, which raises many ethical and copyright issues.
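To make the “most likely next token” point concrete, here is a deliberately tiny, hypothetical Python sketch; the probability table and tokens are invented for illustration and are not taken from any real model. Sampling from a learned distribution can produce surprising outputs, but only ones already latent in the patterns extracted from the training data.

```python
import random

# Toy illustration (not any real model): a hand-written table of
# next-token probabilities, standing in for what a large language
# model learns from its training data.
NEXT_TOKEN_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "sang": 0.1},
    "cat sat": {"on": 0.7, "quietly": 0.2, "because": 0.1},
}

def sample_next_token(context, temperature=1.0):
    """Pick the next token by sampling from the learned distribution;
    higher temperature makes rarer tokens more likely, which is where
    much of the perceived 'creativity' comes from."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens = list(probs)
    # Apply temperature by rescaling and renormalizing probabilities.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Low temperature: almost always the most likely token ("sat").
    # High temperature: more surprising picks ("sang"), but still drawn
    # from patterns already present in the table.
    for t in (0.2, 1.0, 2.0):
        print(t, [sample_next_token("the cat", temperature=t) for _ in range(5)])
```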
Even though these systems can produce new recipes, melodies, jokes, stories, and other creative content, they lack the intention, emotion, life experiences, and other human qualities that incubate creativity. So it is hard to say decisively whether these systems exhibit sparks of creativity or whether we assign them too much credit for what amounts to an (amazing) engineering feat.
Given the above discussion points, what kinds of evidence would you want to see before making an argument for or against the creativity of Generative AI systems? Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
Beware the Emergence of Shadow AI
The enthusiasm for generative AI systems has taken the world by storm. Organizations of all sorts, including businesses, governments, and nonprofits, are excited about its applications, while regulators and policymakers show varying levels of desire to regulate and govern it.
Old hands in cybersecurity and in governance, risk & compliance (GRC) functions see a much more practical challenge as organizations move to deploy ChatGPT, DALL-E 2, Midjourney, Stable Diffusion, and dozens of other products and services to accelerate their workflows and gain productivity. An upsurge of unreported and unsanctioned generative AI use has brought forth the next iteration of the classic “Shadow IT” problem: Shadow AI.
To delve deeper, read the full article here.
The Evolution of the Draft European Union AI Act after the European Parliament’s Amendments
On 14 June 2023, the European Parliament adopted its proposed amendments (in legalese: its negotiating position) to the draft European Union AI Act (the “Draft EU AI Act”). The proposed amendments significantly reshape the European Commission’s proposal by fine-tuning and expanding the scope of the Draft EU AI Act, its risk mitigation requirements, and its governance mechanism. The amendments also firmly establish universal ethical principles and human rights as the ultimate benchmarks for assessing the social acceptability of AI systems developed and deployed inside and outside the European Union.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
In diving deep into our research work on governance, risk, and compliance (GRC) functions (see “Beware the Emergence of Shadow AI” above), we’ve found a huge gap in how the development and use of AI systems are tracked within organizations. This gap is what gives rise to Shadow AI and poses a grave challenge for GRC. What are some approaches that can increase transparency in AI systems that are being developed or used within an organization?
We’d love to hear from you and share your thoughts back with everyone in the next edition:
🔬 Research summaries:
Equal Improvability: A New Fairness Notion Considering the Long-term Impact
Devising a fair classifier that does not discriminate across groups defined by attributes such as race, gender, and age is an important problem in machine learning. However, most existing fairness notions focus on immediate fairness and ignore their potential long-term impact. In contrast, we propose a new fairness notion called Equal Improvability (EI), which aims to equalize the acceptance rate across different groups once each rejected sample tries to improve its features.
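As a rough sketch of the idea in our own notation (paraphrasing the notion rather than quoting the paper’s exact definition): a classifier f satisfies Equal Improvability when, among currently rejected samples, the probability of crossing the acceptance threshold after a bounded amount of effort is the same for every group.

```latex
% Illustrative formalization: \tau is the acceptance threshold and \delta
% bounds how much a rejected individual can improve their features.
\Pr\Big( \max_{\|\Delta x\| \le \delta} f(x + \Delta x) \ge \tau \;\Big|\; f(x) < \tau,\ \mathrm{group} = a \Big)
= \Pr\Big( \max_{\|\Delta x\| \le \delta} f(x + \Delta x) \ge \tau \;\Big|\; f(x) < \tau,\ \mathrm{group} = b \Big)
\quad \text{for all groups } a, b.
```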
To delve deeper, read the full summary here.
On the sui generis value capture of new digital technologies: The case of AI
This experimental paper invites the reader to consider artificial intelligence as a new form of value capture: where traditional automation concerns physical labor, AI relates (perhaps for the first time) to the capture of intellectual labor. Our main takeaway is that AI represents a sui generis form of value capture, and that this may mark a step change in the capture of value vis-à-vis earlier digital technologies.
To delve deeper, read the full summary here.
Beyond Bias and Discrimination: Redefining the AI Ethics Principle of Fairness in Healthcare Machine-Learning Algorithms
The increasing implementation of ML algorithms in healthcare has made the need for fairness in healthcare ML algorithms (HMLA) increasingly urgent. However, while the debate on fairness in the ethics of AI has grown significantly in the last decade, the concept of fairness as an ethical value has not been sufficiently explored. This paper draws on moral philosophy to fill this gap. It shows how an ethical inquiry into the concept of fairness helps highlight shortcomings in the current conceptualization of fairness in HMLA and better redefine the AI ethics principle of fairness to design fairer HMLA.
To delve deeper, read the full summary here.
📰 Article summaries:
It’s Time to Create a National Registry for Large AI Models
What happened: Generative AI models like ChatGPT have entered the global stage, triggering a mix of curiosity and confusion. The White House held urgent discussions with the tech CEOs behind these advancements, while the U.S. Congress debated options and G7 nations consulted about the way forward. Perspectives on the models range from concerns about their immense power and potential risks to claims that they divert attention from pressing issues like inequality. Some view the technology as a solution to societal problems but worry about its potential misuse by oppressive regimes.
Why it matters: The lack of transparency surrounding generative AI models raises critical questions about their development, benefits, and risks. Public and government leaders lack essential information to assess this pivotal moment in history. Existing knowledge is shaped solely by what companies reveal, leaving policymakers unaware of the full scope and safety measures of these models. Establishing a registration process is proposed as the first step for countries to gain insights into model development, mitigate risks of violating laws, and ensure compliance with regulations.
Between the lines: While technology companies hold exclusive information about the capabilities and methodologies of large language models, democratic legitimacy requires broader oversight. Critical decisions about the deployment, societal impacts, and ethical considerations of transformative technologies should not rest solely with corporate governance. To facilitate responsible governance, registration is suggested as a fundamental starting point. By shedding light on technology development, such schemes can pave the way for effective regulatory policies, ensuring a balance between innovation and societal well-being.
Copyright Fair Use Regulatory Approaches in AI Content Generation
What happened: The emergence of generative AI has prompted the attention of global technologists and policymakers. Policymakers in Washington are urgently addressing how intellectual property (IP) laws relate to AI. The Senate Subcommittee on Intellectual Property recently held its second hearing on AI and copyright law, paralleling growing public interest in understanding the impact of AI on authorship and idea ownership. This article delves into the intersection of generative AI, copyright law, and the fair use doctrine, examining four perspectives that have arisen to navigate copyright issues in the context of generative AI.
Why it matters: Generative AI learns by training on Input Works, extracting patterns that it uses to predict responses. Language models, for example, are trained on a wide range of documents to discern the nuances of human expression. The same process applies to image diffusion GAIs, which analyze correlations between text queries and image results. This exposes gaps in copyright law: copyright covers "expression," not ideas, but the complexity of generative AI challenges that distinction. Positions range from the extremes of fair use minimalism, which treats all Output Works as derivative, to fair use maximalism, which asserts that Output Works are unique creations. The fair use debate extends to the role of AI as a transformative tool.
Between the lines: Different countries grapple with copyright and AI in diverse ways. Japan follows a broad fair use maximalist approach, China takes an inquiry-based, conditional fair use maximalist approach, and Singapore and the UK are exploring more lenient regulations. The EU is considering stringent regulations that would allow authors to opt out of GAI training. Uniform global intellectual property laws may be unlikely, yet the US can shape a balanced approach. Consistency in interpreting the fair use defense is vital for the global AI industry's success. Establishing a guiding philosophy now could set the stage for international AI regulation.
The AI Crackdown Is Coming
What happened: The Biden administration recently announced that seven leading AI tech companies, including OpenAI, Microsoft, Google, and Meta, had voluntarily committed to ensuring their AI products are safe, secure, and trustworthy. This follows a series of AI-focused White House summits, congressional testimonies, and government agency declarations indicating a serious approach to AI. The commitments involve third-party testing, bias reduction, and transparency. However, the announcement lacks enforcement mechanisms and detailed action plans, raising questions about its effectiveness.
Why it matters: Experts agree that self-declared safety by tech companies isn't sufficient. Concerns about "audit washing," where unsafe products gain approval, have arisen. Many propose that safety assessments should precede product releases. Existing laws and federal agencies can play a role in regulating AI applications, such as therapy bots and financial assistants. Legislation for AI regulation could be lengthy and contentious. Still, a more agile approach could involve the government setting AI standards for the models it uses and the research it funds. Such measures could drive industry-wide adoption of responsible AI practices.
Between the lines: Despite potential hurdles, including lobbying efforts, some form of AI regulation is inevitable. The tech industry has faced issues like privacy violations and poor worker treatment. While the Biden administration is working on legislation and guidelines, the absence of strict regulations might lead to unchecked product releases by tech companies. The effectiveness of these actions remains to be seen, and concerns about tech industry practices persist in the absence of comprehensive regulation.
📖 From our Living Dictionary:
What is open source?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
How to Make Generative AI Greener
Use a large model only when it offers significant value. Data scientists and developers need to know where the model actually provides that value: if a 3x more power-hungry system increases a model's accuracy by just 1–3%, then it is not worth the extra energy consumption. More broadly, machine learning and artificial intelligence are not always required to solve a problem. Developers should first research and analyze multiple alternative solutions and select an approach based on those findings. The Montreal AI Ethics Institute, for example, is actively working on this problem.
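As a back-of-the-envelope illustration of that screening question, here is a tiny Python sketch; the accuracy and energy figures are invented for the example and are not measurements of any real system.

```python
# Hypothetical figures for illustration only (not real measurements).
small_model = {"accuracy": 0.90, "energy_kwh_per_1k_queries": 1.0}
large_model = {"accuracy": 0.92, "energy_kwh_per_1k_queries": 3.0}

# How much accuracy do we gain, and at what energy cost?
accuracy_gain = large_model["accuracy"] - small_model["accuracy"]
energy_multiplier = (
    large_model["energy_kwh_per_1k_queries"]
    / small_model["energy_kwh_per_1k_queries"]
)

# A simple screening question before defaulting to the bigger model:
# is a ~2-point accuracy gain worth roughly tripling the energy bill?
print(f"Accuracy gain: {accuracy_gain:.1%}, energy cost: {energy_multiplier:.0f}x")
```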
To delve deeper, read the full article here.
💡 In case you missed it:
Attacking Fake News Detectors via Manipulating News Social Engagement
Although recent works have exploited the vulnerabilities of text-based misinformation detectors, the robustness of social-context-based detectors has not yet been extensively studied. In light of this, we propose a multi-agent reinforcement learning framework to probe the robustness of existing social-context-based detectors. By evaluating our method on two real-world misinformation datasets, we offer valuable insights for enhancing the reliability and trustworthiness of misinformation detectors.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.