AI Ethics Brief #81: Data utility and confidentiality, China's AI ethics code, missing geofence warrants, and more ...

Why should sustainability be a first-class consideration for AI systems?

Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.

⏰ This week’s Brief is a ~23-minute read.


Support our work through Substack

💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.

*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.


This week’s overview:

✍️ What we’re thinking:

  • Why should sustainability be a first-class consideration for AI systems?

  • AI Ethics: Enter the Dragon!

  • Analysis of the “Artificial Intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector”

🔬 Research summaries:

  • Trustworthiness of Artificial Intelligence

  • Balancing Data Utility and Confidentiality in the 2020 US Census

📰 Article summaries:

  • The Metaverse Is Mark Zuckerberg’s Mobile Do-Over

  • Thousands of Geofence Warrants Appear to Be Missing from a California DOJ Transparency Database

  • Current AI Practices Could Be Enabling a New Generation of Copyright Trolls

📅 Event

  • AI and Space

📖 Living Dictionary:

  • Ethics Washing

🌐 From elsewhere on the web:

  • Your hardest questions in Green AI answered: A conversation series with experts

  • Digital Rights, Data, and Technology

  • Alternative AI Futures?

  • The Montreal Integrity Network: Workshop on the Ethics of Artificial Intelligence

  • McGill AI Society: Panel discussion on AI Ethics

💡 ICYMI

  • Examining the Black Box: Tools for Assessing Algorithmic Systems


But first, our call-to-action this week:

In collaboration with the Institute for Future Research in Turkey, and with support from the Canadian Embassy in Istanbul, MAIEI presents a discussion on AI and Space.

The session will include two presentations from experts at the Institute for Future Research. We will then split into two breakout rooms, one on “AI and Space Ethics for Human Life” and the other on “AI and Space Ethics for Technology,” before concluding with some closing remarks.

The background readings for the event will be emailed to participants shortly!

We look forward to seeing you there!

Register here!


✍️ What we’re thinking:

Why should sustainability be a first-class consideration for AI systems?

Data scientists, machine learning engineers, and other technical stakeholders involved in the AI lifecycle are very familiar with business and functional considerations to guide the design, development, and deployment of AI systems. But, should sustainability be made an equal first-class citizen in that list of considerations? 

Yes! Particularly because it bears on both the environmental and the societal implications of AI systems.

Incorporating sustainability into AI can (1) help us achieve social justice, (2) especially when these systems operate in an inherently socio-technical context. Indeed, a harmonized approach that accounts for both societal and environmental considerations in the design, development, and deployment of AI systems can yield gains that support the triple bottom line: profits, people, and planet.

To delve deeper, read the full article here.

AI Ethics: Enter the Dragon!

On September 25, 2021, the National New Generation Artificial Intelligence Governance Professional Committee issued the “New Generation of Artificial Intelligence Code of Ethics” (hereinafter “the Code”). According to the Code, its aim is to “integrate ethics into the entire life cycle of artificial intelligence, and to engage in artificial intelligence related activities”.

To delve deeper, read the full article here.

Analysis of the “Artificial Intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector”

Following the European Commission’s 2020 White Paper on Artificial Intelligence and its Proposal for a new regulation on AI of 21 April 2021, the European Insurance and Occupational Pensions Authority (“EIOPA”) published, on 18 June 2021, a report towards ethical and trustworthy artificial intelligence in the European insurance sector. This is the first EU AI governance document specific to insurance. The report is the result of intensive work by EIOPA’s Consultative Expert Group on Digital Ethics in Insurance. The document aims in particular to help insurance companies as they implement AI applications and systems. The measures it proposes are risk-based and cover the entire lifecycle of an AI application.

To delve deeper, read the full article here.


🔬 Research summaries:

Trustworthiness of Artificial Intelligence

If you are new to the space of AI Ethics, this is the paper for you. The authors offer wide coverage of the issues that enter into the debate, exploring AI governance and how we can build trustworthy AI.

To delve deeper, read the full summary here.

Balancing Data Utility and Confidentiality in the 2020 US Census

Due to advancements in computational power and the increased availability of commercial data, the traditional privacy protections used by the U.S. Census Bureau are no longer effective in preventing the mass reconstruction and reidentification of confidential data. In this paper, danah boyd explores the bureau’s response for the 2020 Census: a new disclosure avoidance system called “differential privacy,” which creates a mathematical trade-off between data utility and privacy. But the opaque manner in which the bureau has rolled out the changes has risked undermining trust between the bureau and the diverse stakeholders who use Census data in policymaking, research, and advocacy.
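The utility–confidentiality trade-off that differential privacy formalizes can be illustrated with a minimal sketch of the classic Laplace mechanism. All function names below are hypothetical illustrations; the Census Bureau’s actual disclosure avoidance system (the TopDown Algorithm) is far more elaborate:

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two
    # independent exponential samples with the same scale.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one
    # person changes the count by at most 1), so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy.
    return true_count + laplace_noise(1.0 / epsilon)

# Smaller epsilon = stronger privacy guarantee but noisier counts;
# larger epsilon = more accurate counts but a weaker guarantee.
block_population = 47
noisy_strong_privacy = private_count(block_population, epsilon=0.1)
noisy_weak_privacy = private_count(block_population, epsilon=10.0)
```

Tuning epsilon is exactly the utility–confidentiality knob the summary describes: small-area counts, such as a single census block, can end up so noisy that the stakeholders who rely on them question their usefulness.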

To delve deeper, read the full summary here.


📰 Article summaries:

The Metaverse Is Mark Zuckerberg’s Mobile Do-Over

  1. What happened: The metaverse has taken tech discussions by storm ever since Facebook announced its rebranding as Meta. The article dives into the company’s previous attempts to establish itself among the companies that control the underlying infrastructure and plumbing (OSes, devices, and platforms) on which the apps and services we enjoy are built, a domain Meta does not yet own. It examines past endeavors such as a mobile OS, a Facebook phone, and Facebook Home, which was meant to become the centerpiece of your phone, before turning to what Meta hopes to achieve with its vision for the metaverse and how it is approaching it.

  2. Why it matters: If the metaverse does take hold (though some argue that, depending on how we define it, we are already in a metaverse given all the online activities we engage in), Meta argues that it will only succeed if it involves open standards and other companies providing services and solutions that plug into a single ecosystem. Of course, there is an undercurrent to this approach: Meta would be delighted if that ecosystem were based on its own vision, platform, and infrastructure, making it a central player in this future if it comes to pass.

  3. Between the lines: With all the scrutiny the company has faced in the US, the rebranding and the pivot from a social media platform to something more nebulous like the metaverse might seem like a mechanism for deflecting attention. But as the technology becomes more ubiquitous, and the metaverse, at least in the form Meta imagines it, becomes more likely, this is a good prompt for our community to start thinking about the ethical consequences so that we are prepared to respond proactively.

Thousands of Geofence Warrants Appear to Be Missing from a California DOJ Transparency Database

  1. What happened: An investigation by The Markup found discrepancies between the number of geofence warrants reported in the California DOJ’s public database and the number of geofence requests Google received. The article reports that such discrepancies might arise from requests being revised during the warrant-granting process, a lack of filing standards, information simply not being entered into the database, sealed warrants whose details are not filed publicly, and how information is captured and reported when requests come from out-of-state agencies.

  2. Why it matters: All of these pose significant challenges for those who might want to contest unlawful warrants, especially civil society organizations that track them through public databases. These gaps also undermine the efficacy of transparency requirements and laws like the California Electronic Communications Privacy Act. Geofence warrants by their very nature have no specific target individual, and hence can be quite invasive, sweeping up data about many unrelated individuals who happen to be in the targeted area. This stands in contrast to wiretaps, where warrants are highly targeted.

  3. Between the lines: This is a great demonstration of how, even when laws and transparency requirements exist, enforcement practices and reporting standards can make or break whether they achieve what they set out to do. Standardized reporting and more stringent requirements on the agencies seeking geofence warrants could help alleviate some of the challenges identified in the article.

Current AI Practices Could Be Enabling a New Generation of Copyright Trolls

  1. What happened: Researchers from Huawei studied the six datasets they most commonly use to train their AI models and found that most posed significant legal challenges for commercial use. Specifically, the challenges included which licenses the resulting models would need to be released under, since they constitute derived works; whether commercialization was even possible; and the legal liabilities in case claims were made by anyone affected by adverse outcomes from the use of those models. For several of the datasets, it was difficult to trace the lineage of the applicable licenses, given that they were curated and scraped rather than gathered as original data. In addition, most also come with auto-indemnification for the datasets’ original authors, placing liability on those who use them to build models.

  2. Why it matters: Given the push toward large models, which in the current paradigm of supervised learning means consuming large datasets for training, the use of such datasets poses legal challenges if the landscape evolves toward something stricter in which violations are pursued more aggressively. The article and paper attribute the fact that such violations currently pass unaddressed to a laissez-faire, caveat-emptor approach, at least in the US. But should that change, or in other jurisdictions where such activities are firmly disallowed, these violations will have to be tackled head-on rather than left nebulously unaddressed.

  3. Between the lines: Large public datasets have been the bedrock on which the powerful models of the modern era of AI have been built. But, as the past 18-24 months have shown, many of these datasets come with challenges not only of bias but also of how the data was collected, often without consent. A deep dive into licensing lineage, as done in the paper cited in this article, shows that numerous challenges are yet to be solved, especially as the regulatory regime stiffens with respect to the use of data in AI systems. This might also have implications for how AI systems are imported and exported if regulatory requirements differ across jurisdictions.


📖 From our Living Dictionary:

“Ethics washing”

👇 Learn more about why it matters in AI Ethics via our Living Dictionary.

Explore the Living Dictionary!

🌐 From elsewhere on the web:

Your hardest questions in Green AI answered: A conversation series with experts

Our founder, Abhishek Gupta, is hosting 3 sessions at the Microsoft Machine Learning and Data Science Conference on “Your hardest questions in Green AI answered: A conversation series with experts” with the goal of elevating environmental considerations alongside business and functional requirements in the design, development, and deployment of AI systems.

Digital Rights, Data, and Technology

Our founder, Abhishek Gupta, is speaking at this conference hosted by Mission Capital to answer “What does your organization need to be thinking about as it navigates digital rights, data privacy, and the expansion of online services?”

Register here!

Alternative AI Futures?

Our founder, Abhishek Gupta, is hosting a conversation exploring the idea of alternative AI futures.

In view of the dominant role played by large corporate groups in the development and distribution of AI processes and reports about increasingly efficient facial recognition systems used by political power holders for surveillance purposes, you might jump to the conclusion that artificial intelligence is not much more than an instrument of control for political and economic purposes. But what can AI be apart from that? This panel looks at some alternative application examples using AI in community and creative projects. What opportunities might be offered by AI collaborations that aren’t focused on profit and power interests? What new ideas – involving AI as well as the people who interact with it – might emerge as a result, and what new worlds could become conceivable?

Register here!

The Montreal Integrity Network: Workshop on the Ethics of Artificial Intelligence

Our Partnerships Manager, Connor Wright, is speaking at a session hosted by the Montreal Integrity Network in an enriching discussion about AI applications and their ethical considerations in a corporate environment. They’ll cover the following topics: what AI is (and what it is not), the current state of AI and future trends, and the ethical challenges of AI applications.

McGill AI Society: Panel discussion on AI Ethics

Our Business Development Manager, Masa Sweidan, is speaking to undergraduate and graduate students at a session hosted by the McGill AI Society at McGill University. The idea is to give the students multiple perspectives on the importance of ethical AI, present current real-world challenges, and prompt them to reflect on the ethical issues that arise within AI systems.

💡 In case you missed it:

Examining the Black Box: Tools for Assessing Algorithmic Systems

The paper clarifies what assessment in algorithmic systems can look like, including when assessment activities are carried out, who needs to be involved, the pieces being evaluated, and the maturity of the techniques. It also explains key terms used in the field and identifies the gaps in the current methods as they relate to the factors mentioned above.

To delve deeper, read the full summary here.


Take Action:

AI and Space

In collaboration with the Institute for Future Research in Turkey, and with support from the Canadian Embassy in Istanbul, MAIEI presents a discussion on AI and Space.

The session will include two presentations from experts at the Institute for Future Research. We will then split into two breakout rooms, one on “AI and Space Ethics for Human Life” and the other on “AI and Space Ethics for Technology,” before concluding with some closing remarks.

The background readings for the event will be emailed to participants shortly!

We look forward to seeing you there!

Register here!