AI Ethics Brief #92: AI ethics as critical theory, conformity assessments, system cards, and more ...
What do we need to know about personal data security?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~27-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
More Trust, Less Eavesdropping in Conversational AI
Who’s watching? What you need to know about personal data security
🔬 Research summaries:
It’s COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
Ethics as a service: a pragmatic operationalisation of AI Ethics
Why AI Ethics Is a Critical Theory
📰 Article summaries:
System Cards, a new resource for understanding how AI systems work
AI is on the front lines of the war in Ukraine
The New Rules of Data Privacy
📖 Living Dictionary:
Open Source
💡 ICYMI
Post-Mortem Privacy 2.0: Theory, Law and Technology
But first, our call-to-action this week:
What can we do as part of the AI Ethics community to help those who are suffering in the current war in Ukraine? We’re looking for suggestions: for example, tips (actions that we can all take) to prevent the spread of misinformation, ways to better understand the deployment of LAWS on the battlefield, etc.
✍️ What we’re thinking:
More Trust, Less Eavesdropping in Conversational AI
Voice technology poses unique privacy challenges that make it difficult to earn consumer trust. Lack of user trust is a massive hurdle to the growth and ethical use of conversational AI. As Big Tech companies and governments attempt to define data privacy rules, conversational AI must continuously become more transparent, compliant and accountable.
Data fuels machine learning. In conversational AI, that data includes what we say to our devices and to each other, even in the privacy of our homes. Data dignity advocates have called for more control over our personal data, and regulations have grown alongside new technologies.
To delve deeper, read the full article here.
Who’s watching? What you need to know about personal data security
As the world becomes more digital and reliant on advanced technologies in everyday life, concern about the security of our personal data is growing. Individual citizens are at higher risk of being tracked and analyzed on the Internet than ever before, with uses of personal data ranging from mundane matters, like targeted ads on Facebook, to more dangerous threats, such as credit card theft. Personal data is collected in many ways, including through third-party operators or public sources such as Google, and often without the explicit consent of the user. Many websites present Terms of Service or ‘cookie’ notices to accept before entering the site, yet according to a 2017 Deloitte survey of 2,000 American consumers, over 90% of people consent to terms and conditions without reading them. Internet users thus often unknowingly consent to their personal information being collected by ‘Big Data’ companies, but there are also instances of websites embedding undetectable trackers to collect data non-consensually. The lack of public knowledge about what is collected, and how, raises significant questions about the ethics of personal data collection. As the number of ‘Big Data’ corporations increases and the threat to data security becomes more apparent, the general public must understand how their personal data is collected and used to ensure the right to privacy is not violated.
To delve deeper, read the full article here.
🔬 Research summaries:
It’s COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks
Criminal justice (CJ) data is not neutral or objective: it emerges from a messy process of noisy measurements, individual judgements, and location-dependent context. By ignoring the context around risk assessment instrument (RAI) datasets, computer science researchers risk both reinforcing upstream value judgements about what the data should say and overlooking the downstream effects of their models on the justice system. The authors argue that responsibly and meaningfully engaging with this data requires computer scientists to explicitly consider the context of, and values within, these datasets.
To delve deeper, read the full summary here.
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation
The proposed European Artificial Intelligence Act (AIA) is likely to become an important reference point that sets a precedent for how AI systems can be regulated. However, the two primary enforcement mechanisms proposed in the AIA have received little study. These are the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. This summary provides a brief overview of both mechanisms.
To delve deeper, read the full summary here.
Ethics as a service: a pragmatic operationalisation of AI Ethics
With increasingly ubiquitous AI comes a greater need for awareness of ethical issues. However, current regulations are inadequate to protect people from potential AI harms, hence the growing stream of guidelines, frameworks, and ethics codes. This paper discusses how to effectively operationalize AI ethics in algorithm design.
To delve deeper, read the full summary here.
Why AI Ethics Is a Critical Theory
How can we solve the problems associated with the principled approach to AI Ethics? One way is to treat AI Ethics as a critical theory. This begins with exploring how AI principles could bear a common thread in the form of power, emancipation, and empowerment.
To delve deeper, read the full summary here.
📰 Article summaries:
System Cards, a new resource for understanding how AI systems work
What happened: Building on mechanisms in the spirit of model cards and datasheets, Meta has announced System Cards as a way of understanding how AI systems work. They are meant for both technical and non-technical audiences and go a step beyond model cards, which are limited to a particular model. System Cards look at how AI subsystems and non-AI subsystems come together to accomplish a task, and hence have the potential to provide more insight into how the system actually works. Meta has demonstrated this through a pilot on Instagram feed ranking, while a short technical paper dives into more detail.
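To make the idea concrete, here is a purely hypothetical sketch of the kind of information a System Card might capture. The class names, fields, and pipeline steps below are illustrative assumptions, not Meta's actual format; the point is that a system card documents a whole pipeline of components rather than a single model:

```python
from dataclasses import dataclass, field

# Illustrative sketch (not Meta's format): unlike a model card, which
# documents one model, a "system card" describes the pipeline of AI and
# non-AI components that together produce an outcome (e.g., a ranked feed).

@dataclass
class ComponentCard:
    name: str
    kind: str      # e.g., "model", "heuristic", "filter"
    purpose: str

@dataclass
class SystemCard:
    system_name: str
    intended_use: str
    components: list = field(default_factory=list)

    def describe(self) -> str:
        steps = "\n".join(
            f"  {i + 1}. [{c.kind}] {c.name}: {c.purpose}"
            for i, c in enumerate(self.components)
        )
        return f"{self.system_name}: {self.intended_use}\n{steps}"

# Hypothetical pipeline loosely inspired by a feed-ranking system.
feed_card = SystemCard(
    system_name="Feed ranking (illustrative)",
    intended_use="order posts from followed accounts by predicted interest",
    components=[
        ComponentCard("candidate fetcher", "heuristic", "gather unseen posts"),
        ComponentCard("engagement predictor", "model", "score each candidate"),
        ComponentCard("integrity filter", "filter", "demote violating posts"),
    ],
)
print(feed_card.describe())
```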
Why it matters: The pilot actually does a good job (!) of explaining in an accessible fashion how Instagram’s feed ranking system works. The announcement highlights where Meta will continue to invest to bridge gaps, including the need to consider unintended outcomes, to address the tradeoff between transparency and security, and to walk the line between providing adequate technical detail and remaining accessible to a non-technical audience. What stands out in this effort is the systems-level approach, which is important because no AI system exists in a vacuum. AI systems are, of course, socio-technical in their interactions with humans and the society around them, but even at a technical level, many other pieces come into play beyond just the trained model.
Between the lines: I really appreciate the more holistic take that System Cards offer compared to the narrower scope of other mechanisms that seek to imbue explicability, though that stems from my own technical background and desire to understand in more depth how a system is functioning. Each of these approaches comes with a different audience in mind, and the important idea for anyone incorporating them into a Responsible AI practice is to articulate their needs and goals before picking one or more of them.
AI is on the front lines of the war in Ukraine
What happened: Given that Russian president Vladimir Putin said in 2017 that whoever controls AI will become the ruler of the world, it is hardly surprising that the war on Ukraine has become a testing venue for AI-enabled capabilities. This has manifested in three forms: lethal autonomous weapons systems (LAWS) on the battlefield; information warfare off the battlefield through the use of things like deepfakes; and the collection of open-source intelligence (OSINT) by analyzing videos and exchanges on social media at scale to understand on-the-ground troop movements and other aspects of the theatre of war.
Why it matters: While war is horrible, the testing of new technologies during a war is not without precedent. Yet, given the scale that AI offers, it creates an asymmetric advantage for the side that is able to wield it well. According to work from the Center for Naval Analyses, Russia trails behind China and the USA in AI capabilities, but with signs that China may help bolster Russia’s AI capabilities in the war, the risks become even graver. There has been a consistent effort at the UN to issue a ban on LAWS, but so far there is neither consensus nor action.
Between the lines: As LAWS become more prominent (hopefully we act before that becomes the case), we must explore more deeply what steps we can take as the AI community, as policymakers, and as concerned citizens so that advanced AI capabilities are not misused to exacerbate harms in an already troubling situation where people are facing aggression against their homes and communities. The entire lifecycle, from research and design all the way to procurement and deployment, has a role to play in how these systems ultimately shape the world around us.
The New Rules of Data Privacy
What happened: Several emerging trends are moving data privacy from a nice-to-have to a must-have: government actions such as the proposed Algorithmic Accountability Act of 2022, the EU AI Act, and others around the world; rising consumer savviness sparked by the GDPR; and changes to the market landscape such as Apple’s App Tracking Transparency. The article provides a few “rules for data” as guiding actions that allow organizations to harness the benefits of the data they hold on their consumers without sacrificing privacy: (1) trust over transactions, (2) insights over identity, and (3) flows over silos.
Why it matters: Organizations are scrambling to meet the requirements set by emerging regulations while still needing to maintain market competitiveness by leveraging the data they hold on their consumers. For example, building trust as a first-order item and then de-emphasizing individuals’ identifying information, focusing instead on insights derived from aggregated data, presents a healthy alternative to the more intrusive forms of tracking that happen online today. Techniques such as federated learning allow data to remain on users’ devices while still providing analytical insights that can be used in decision-making, as sketched below.
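For readers curious about the mechanics, here is a minimal sketch of federated averaging (FedAvg) on a toy linear model. The simulated client datasets, model, learning rate, and round counts are all illustrative assumptions, not drawn from the article:

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch: a linear model trained
# across clients whose raw data never leaves their "device".

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training pass; only updated weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Simulated per-client datasets (in practice these live on user devices).
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

global_w = np.zeros(3)
for _ in range(10):
    # Each client trains locally; the server then averages the weights,
    # weighted by how much data each client holds.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated model weights:", global_w)
```

The design point is that only model weights cross the network; the raw observations stay with each client, which is what makes this family of techniques attractive for the privacy posture described above.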
Between the lines: An interesting point raised in the article concerns alternative data governance models such as data cooperatives, in which data is shared with an organization responsible for its stewardship. The custodians of the data then have a duty to protect the interests of the users who shared it, while still providing organizations with controlled access to the stored data for analytical purposes. There is a rich literature on alternative data governance models worth diving into, along with work on how institutions might be designed to take on the various governance roles required to control how data flows into and is used by AI systems.
📖 From our Living Dictionary:
“Open Source”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
💡 In case you missed it:
Post-Mortem Privacy 2.0: Theory, Law and Technology
Debates surrounding internet privacy have focused mainly on the living, but what happens to our digital lives after we have passed? In this paper, Edina Harbinja offers a theoretical and doctrinal discussion of post-mortem privacy and makes a case for its legal recognition.
To delve deeper, read the full summary here.
Take Action:
What can we do as part of the AI Ethics community to help those who are suffering in the current war in Ukraine? We’re looking for suggestions: for example, tips (actions that we can all take) to prevent the spread of misinformation, ways to better understand the deployment of LAWS on the battlefield, etc.