AI Ethics Brief #141: Copyrights+IPR in GenAI era, ethical ambiguity in data enrichment, robotics+AI in the Global South, and more.
To what extent will we see Big Tech companies work with news organizations and other stakeholders in the ecosystem to meet the demands being posed on issues like copyrights, IPR, etc.?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week, we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for the price of a couple of ☕.
If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support our mission of Democratizing AI Ethics Literacy and to ensure we can continue to serve our community.
This week’s overview:
🙋 Ask an AI Ethicist:
What are some strategies one can adopt to bring Responsible AI into discussions within one’s organization, especially when doing so is not part of one’s job description?
✍️ What we’re thinking:
ChatGPT Will Change Many Things—But It Won’t Change Everything
🤔 One question we’re pondering:
To what extent will we see Big Tech companies work with news organizations and other stakeholders in the ecosystem to meet the demands being posed on issues like copyrights, IPR, etc.?
🪛 AI Ethics Praxis: From Rhetoric to Reality
What might be a few fundamental steps that we can take towards gearing up to address the copyright and IPR issues that arise from using internet data to train AI models?
🔬 Research summaries:
Enough With “Human-AI Collaboration”
The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices
Creative Agents: Rethinking Agency and Creativity in Human and Artificial Systems
📰 Article summaries:
Why the EU’s landmark AI Act was so hard to pass - The Verge
A slew of deepfake video adverts of Sunak on Facebook raises the alarm over AI risk to election
Robotics and AI in the Global South
📖 Living Dictionary:
What is the relevance of frontier models to AI ethics?
🌐 From elsewhere on the web:
The Phenomenology of Facial Recognition Technology
💡 ICYMI
A hunt for the Snark: Annotator Diversity in Data Practices
🚨 Some regulatory and legal developments taking place around the world - here’s our quick take on what happened last week.
Here is what caught our attention in terms of the regulatory and legal developments taking place related to responsible AI:
Australia's AI Content Labeling Proposal: The Australian government is considering asking tech companies to label content generated by AI platforms, such as ChatGPT. This is part of a broader initiative to implement stricter regulations for 'high risk' AI products, with the aim of increasing transparency and public trust in AI technologies. The government plans to set up an expert advisory group, develop a voluntary 'AI safety standard', and consult with the industry on new transparency measures. Mandatory safeguards and pre-deployment risk assessments for AI are also on the table for further consideration.
EU's Digital Markets Act (DMA) Compliance: A group of companies, including Ecosia, Qwant, and Schibsted, have signed an open letter accusing major tech firms like Google, Microsoft, and Apple of not doing enough to comply with the EU's Digital Markets Act. The letter calls on the European Commission and the European Parliament to ensure compliance ahead of a key deadline on March 7, 2024. The DMA requires these 'gatekeeper' platforms to engage in fair practices and not to implement practices that lead to self-preferencing. The signatories of the letter express concerns that businesses and consumers are being kept in the dark about compliance efforts and that the tech giants have failed to engage in meaningful dialogue with third parties.
Did we miss anything?
🙋 Ask an AI Ethicist:
Every week, we’ll feature a question from the MAIEI community and share our thinking here. We invite you to ask yours, and we’ll answer it in the upcoming editions.
Here are the results from the previous edition for this segment:
Given the strong focus on regulatory compliance in the healthcare sector, it is unsurprising that we see a strong push from the industry to gear up to meet the ethical challenges. In particular, the healthcare sector benefits from a history of grappling with parallel issues, such as informed consent in clinical settings, that can inform the work of bringing GenAI into the healthcare environment. Perhaps as other industries adopt more GenAI into their operations, they’ll either adapt solutions from adjacent industries or come up with novel solutions to the unique problems they face. Either way, if GenAI adoption is to be successful in 2024, it can’t happen without organizations investing in Responsible AI.
Moving on to the reader question for this week, A.M. writes to us asking what strategies they can adopt to bring Responsible AI into discussions within their organization, especially when they are not in a role that has it as part of their job description.
This is a great question: with the increasing adoption of AI, it is something that many conscientious staff will have to find good solutions to. We’ve guided many budding practitioners in the ecosystem through this exact question, and feedback has shown that the following three strategies work really well:
Educational Approach: Start by informally educating your team about AI ethics. This can be initiated by sharing articles, case studies, or recent news related to ethical issues in AI during team meetings or through internal communication channels.
Action Steps:
Curate a list of accessible and relevant resources (articles, podcasts, videos) about AI ethics.
Share one resource per week (this newsletter can be a source for that!) with a brief summary of why it's relevant to your team's work.
Suggest a short, informal discussion during team meetings to reflect on these topics.
Integration in Existing Processes: Propose the integration of ethical considerations into your team's existing workflows or processes. This approach ensures that ethics becomes a natural part of decision-making rather than an external imposition.
Action Steps:
Identify a regular process (like project planning, design reviews, or sprint retrospectives) where ethical considerations can be relevant.
Draft a simple checklist or set of questions that guide ethical considerations (e.g., "Does this feature potentially impact user privacy?"); a minimal sketch of what such a checklist might look like in practice follows these strategies.
Propose this integration in your next team meeting, highlighting how it adds value without significantly disrupting existing workflows.
Leveraging External Expertise: Advocate for consultation with external experts in AI ethics. This can be through workshops, seminars, or consulting sessions. It brings in outside expertise while underscoring the importance of the subject.
Action Steps:
Research and compile a list of potential experts or organizations specializing in AI ethics.
Present a proposal to your manager or team leader, outlining the benefits of such an engagement.
Suggest specific formats, like a one-time workshop or a series of short seminars, that fit into your team's schedule.
Each of these strategies should be approached with a collaborative mindset, emphasizing the shared goal of responsible and ethical AI development. Remember to frame these suggestions in a way that highlights their relevance to your team's objectives and the broader organizational goals.
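To make the checklist idea from the second strategy concrete, here is a minimal sketch in Python of what a design-review ethics checklist could look like. The questions, names, and flow are illustrative assumptions only, not a prescribed standard; adapt them to your team’s context and existing review processes.

```python
# A hypothetical ethics checklist a team might walk through during design reviews.
# The questions and structure are illustrative only; adapt them to your context.

ETHICS_CHECKLIST = [
    "Does this feature potentially impact user privacy?",
    "Could the data or model outputs disadvantage any group of users?",
    "Is it clear to users when they are interacting with an AI system?",
    "Can users report harms or contest automated decisions?",
]

def run_ethics_review(feature_name: str) -> dict:
    """Prompt the reviewer through each question and record the answers."""
    answers = {}
    print(f"Ethics review for: {feature_name}")
    for question in ETHICS_CHECKLIST:
        answers[question] = input(f"{question} (yes/no/unsure): ").strip().lower()
    # Anything not answered with a clear "no" gets surfaced for team discussion.
    flagged = [q for q, a in answers.items() if a != "no"]
    if flagged:
        print("Items to discuss with the team before shipping:")
        for q in flagged:
            print(f"  - {q}")
    return answers

if __name__ == "__main__":
    run_ethics_review("personalized recommendations v2")
```

A lightweight script like this can sit alongside a project planning template or sprint retrospective agenda so that the questions are asked routinely rather than ad hoc.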
Have you been, or are you currently, in such a position in your organization? Please let us know! Share your thoughts with the MAIEI community:
✍️ What we’re thinking:
ChatGPT Will Change Many Things—But It Won’t Change Everything
The release of ChatGPT and other generative AI systems is a transformational moment in the democratization of AI, enabling more people to use the technology and lowering organizations’ barriers to entry. It’s easy to get swept up in the excitement: ChatGPT set the internet abuzz with its astounding capabilities and amusing failures. Observers quickly predicted the end of high-school essays, lamented the death of creative writing, and raised alarms about the future of work.
Such reactions are to be expected. We’re witnessing the classic Gartner Hype Cycle, and summiting the peak of expectations with the current wave of Generative AI systems. But despite how fast we’re moving, business leaders can’t ignore the fact that, when the hype is stripped away, these systems are just technology. And, like any technology, what ultimately matters most is not just whether businesses use it, but how its use will affect business models and how it can change businesses’ relationships with customers. Organizations that consider the pace of change but stay focused on the fundamentals will reap the biggest rewards from adopting Generative AI.
To delve deeper, read the full article here.
🤔 One question we’re pondering:
Looking at the regulatory and legal developments taking place around the world, to what extent will we see Big Tech companies work with news organizations and other stakeholders in the ecosystem to meet the demands being posed on issues like copyrights, IPR, etc.? Government intervention may be required, but it isn’t the only avenue for reaching workable solutions, and hopefully a multi-pronged approach will emerge to address these challenges.
We’d love to hear from you and share your thoughts with everyone in the next edition:
🪛 AI Ethics Praxis: From Rhetoric to Reality
Building on the above, as practitioners, what are a few fundamental steps we can take towards addressing the copyright and IPR issues that arise from using internet data, especially when provenance is unclear and we rely on the commonly used large-scale datasets that typically train the current generation of LLMs and other AI models?
The way we like to think about it is to break the problem down into (1) compliance with legal requirements, (2) ethical data sourcing, and (3) continuous monitoring to address emerging issues:
Compliance with Copyright Law and Fair Use Principles:
Legal Assessment: Establish a legal team to continuously assess and interpret copyright laws as they pertain to AI training. This includes understanding different jurisdictions and international copyright treaties.
Fair Use Analysis: Develop criteria for evaluating whether the use of data in AI training constitutes fair use, considering factors like the nature of the copyrighted work, the amount and substantiality of the portion used, the effect on the work’s value, and the purpose of use (e.g., commercial vs. educational).
Automated Filtering Systems: Implement automated systems to filter out copyrighted content that doesn't meet fair use criteria or where permission has not been granted; a minimal sketch of such a filter follows this framework.
Ethical Data Sourcing and Transparency:
Consent and Attribution: Whenever feasible, seek consent from content creators for the use of their work in AI training. Provide attribution and recognition, respecting the moral rights of creators.
Transparency in Data Usage: Maintain transparency about the sources of data used in AI training. This could involve publicly listing the types of data and general sources, without compromising proprietary information or individual privacy.
Partnerships with Content Providers: Form partnerships with content providers and platforms to ensure a mutual understanding and agreement on the use of data.
Continuous Monitoring and Adaptation:
Feedback Mechanisms: Establish mechanisms for content creators to report misuse of their work and request removal from the training dataset.
Regular Audits: Conduct regular audits of AI training processes to ensure compliance with copyright and IPR guidelines.
Adaptive Policies: Be prepared to adapt policies and practices in response to changes in copyright laws, technological advancements, and societal expectations.
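To illustrate the "Automated Filtering Systems" step above, here is a minimal Python sketch of license- and consent-based filtering over a training corpus. The metadata field names (e.g., "license", "creator_opt_out") and the license allowlist are assumptions for illustration only; a real pipeline would need provenance tracking and legal review before any exclusion rules are treated as sufficient.

```python
# Hypothetical filter over a training corpus where each record carries
# license/consent metadata. Field names and the allowlist are illustrative,
# not legal guidance.

from typing import Iterable, Iterator

# Licenses assumed (for this sketch) to permit use in training.
PERMITTED_LICENSES = {"cc0", "cc-by", "cc-by-sa", "public-domain"}

def filter_training_records(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records whose metadata indicates permitted use."""
    for record in records:
        if record.get("creator_opt_out"):
            continue  # honor removal requests (see Feedback Mechanisms above)
        license_tag = (record.get("license") or "unknown").lower()
        if license_tag not in PERMITTED_LICENSES:
            continue  # exclude content without a clearly permissive license
        yield record

if __name__ == "__main__":
    corpus = [
        {"id": 1, "text": "...", "license": "CC-BY", "creator_opt_out": False},
        {"id": 2, "text": "...", "license": "all-rights-reserved", "creator_opt_out": False},
        {"id": 3, "text": "...", "license": "CC0", "creator_opt_out": True},
    ]
    kept = list(filter_training_records(corpus))
    print(f"Kept {len(kept)} of {len(corpus)} records")  # Kept 1 of 3 records
```

The design choice worth noting is the allowlist: defaulting to exclusion when license metadata is missing or ambiguous keeps the burden of proof on inclusion, which aligns with the consent and attribution principles above.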
A word of caution to practitioners who want to adopt the above framework: nothing comes without a cost! The appropriate allocation of resources is essential if the implementation of this framework is to succeed. A common cause of failure we’ve observed is enthusiasm for adopting the framework without assigning personnel and resources for execution, which inevitably leads to failure within a few months.
You can either click the “Leave a comment” button below or send us an email! We’ll feature the best response next week in this section.
🔬 Research summaries:
Enough With “Human-AI Collaboration”
The term “human-AI collaboration” is misleading and inaccurate. It erases the labor of AI producers and obscures the often exploitative relationship between AI producers and consumers. Instead of viewing AI as a collaborator, we should view it as a tool or an instrument. This is more accurate and ultimately fairer to the humans who create and use AI.
To delve deeper, read the full summary here.
The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices
Breakthrough AI research has increasingly relied upon engagement with online crowdsourced workers to collect data to ‘enrich’ models through processes such as “Reinforcement Learning from Human Feedback” (RLHF). This paper assesses how AI research papers involving data enrichment work consider ethics issues, such as consent, payment terms, and worker maltreatment, highlighting inconsistency and ambiguity in existing AI ethics disclosure norms.
To delve deeper, read the full summary here.
Creative Agents: Rethinking Agency and Creativity in Human and Artificial Systems
The impact of the technological revolution of the last decades has a reach beyond information transmission, the automation of production lines, and data analysis and modeling. It also influences the way in which we use and understand terms such as “agency,” “creativity,” “authorship,” and “responsibility.” Should we safeguard the fundamental human aspects of these concepts or, rather, rethink and redefine them? This research paper aims to answer this question by examining the results of a study designed to test the interaction between the attribution of agency and creativity to human and artificial subjects.
To delve deeper, read the full summary here.
📰 Article summaries:
Why the EU’s landmark AI Act was so hard to pass - The Verge
What happened: European lawmakers reached a provisional deal on the EU AI Act after a lengthy and intense negotiation, culminating in a three-day marathon debate. Initially proposed in April 2021, the AI Act aimed to address the risks and negative consequences of AI deployment, focusing on applications in policing, job recruitment, and education. However, as AI technology rapidly evolved, especially with the emergence of powerful "foundation" models like OpenAI's ChatGPT, the legislation faced challenges in adapting to these new developments.
Why it matters: The significance lies in the attempt to regulate AI technology to mitigate potential risks and negative impacts on individuals and society. The AI Act introduced a tier system categorizing AI applications by risk, with higher-risk systems facing more stringent regulations. The contentious issues of facial recognition and general-purpose AI systems prompted intense debates among European lawmakers. The compromise includes exceptions permitting limited use of automated facial recognition and addressing concerns raised by some EU member states like France, which sought AI-assisted surveillance for security purposes. However, human rights organizations like Amnesty International criticized the concessions, advocating for a complete ban on facial recognition due to its potential human rights harms.
Between the lines: The text emphasizes the complexity of the negotiation process and the challenges in crafting precise wording for compromises, as the full AI Act text is not immediately available. The provisional agreement is subject to change, and the final legislation is expected to become law by mid-2024, with provisions gradually coming into force over the following two years. The delay gives policymakers and AI companies time to refine enforcement mechanisms and ensure compliance.
A slew of deepfake video adverts of Sunak on Facebook raises the alarm over AI risk to election
What happened: Over 100 deepfake video advertisements impersonating Rishi Sunak were promoted on Facebook in the past month, with more than £12,929 spent and funding originating from 23 countries, raising concerns about the risk of AI-driven manipulation ahead of the general election. The deepfakes falsely depict Sunak as being embroiled in a scandal and claim that Elon Musk launched an application to collect stock market transactions, directing viewers to a spoofed BBC News page promoting a scam investment. This marks the first systematic and widespread use of deepfakes to manipulate the prime minister’s image.
Why it matters: Research by Fenimore Harper, a communications company founded by a former Downing Street official, warns that the quality of these deepfakes indicates a heightened risk of AI-generated falsehoods manipulating elections. The ease of creating convincing deepfakes and lax moderation policies on paid advertising poses a significant threat. With elections approaching, there is a pressing need for robust measures to counter AI-driven misinformation, especially considering the potential impact on democratic processes.
Between the lines: The deepfake incident highlights the urgency for comprehensive changes in the electoral system to address the challenges posed by AI, as regulators express concerns about the timing before the next general election. The UK government emphasizes its efforts to swiftly respond to threats through a defending democracy taskforce and the Online Safety Act, which imposes new requirements on social platforms to remove illegal misinformation, including AI-generated content. However, the incident emphasizes the need for more proactive measures to tackle the evolving landscape of AI-driven disinformation, deepfakes, and their potential influence on public opinion during election campaigns.
Robotics and AI in the Global South
What happened: The traditional distinction between the Global North and Global South, primarily rooted in socioeconomic and political factors, has often overlooked the significant talent and contributions emerging from the Global South in the fields of robotics and AI. Despite facing challenges, several centers of excellence in the Global South are making influential contributions to engineering and innovation, particularly in the areas of robotics and AI.
Why it matters: Much of the robotics and AI research in the Global South is focused on addressing crucial challenges related to healthcare, food security, environmental monitoring, and disaster response. Examples include AI tools for diabetic retinopathy screening in regions with a shortage of ophthalmologists and drones for wildlife monitoring and disease control. Medical robotics is expanding, with African training institutes and efforts to improve access to surgical care. These technologies' ethical and responsible deployment in the Global South is crucial, and international regulations are being formulated to ensure their responsible use.
Between the lines: The text emphasizes the importance of deploying innovative technologies in the Global South with ethical considerations and safety in mind. It also highlights the need to avoid "helicopter research" or scientific colonialism and emphasizes fair and equitable research partnerships. The recent Cape Town Statement on Research Integrity provides guiding recommendations for ensuring fairness and equity in research collaborations, aiming to establish a global code of conduct for researchers involved in collaborative efforts.
📖 From our Living Dictionary:
What is the relevance of frontier models to AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
The Phenomenology of Facial Recognition Technology
Our partnerships manager, Connor Wright, presented his work at the Southern African Conference for Artificial Intelligence Research (SACAIR2021).
His paper, titled “The Phenomenology of Facial Recognition Technology,” was accepted for presentation; in it, he shared his thoughts on how adopting a phenomenological lens can help better shape the governance of facial recognition technology.
The main takeaway is that acknowledging how the technology is experienced by people of different backgrounds allows for a more holistic evaluation of whether or not to use it.
To delve deeper, read the full article here.
💡 In case you missed it:
A hunt for the Snark: Annotator Diversity in Data Practices
Diversity in datasets is a key component of building responsible AI/ML. Despite this recognition, we know little about the diversity among the annotators involved in data production. This paper takes a justice-oriented approach to investigate how AI/ML practitioners envision the diversity of data annotators both conceptually and practically. Drawing upon the feminist critique of objectivity, we explore alternative ways of accounting for annotator subjectivity and diversity in data practices.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.