The AI Ethics Brief #60: Principles to Practices Gap, AI and fashion piracy, creepy fake humans, and more ...
Why has censorship become the new crisis for social media networks?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
🔬 Research summaries:
Explaining the Principles to Practices Gap in AI
Fashion piracy and artificial intelligence—does the new creative environment come with new copyright issues?
📰 Article summaries:
How censorship became the new crisis for social networks (Platformer)
10 steps to educate your company on AI fairness (World Economic Forum)
These creepy fake humans herald a new age in AI (MIT Tech Review)
Across China, AI ‘city brains’ are changing how the government runs (South China Morning Post)
But first, our call-to-action this week:
Meetup: The Challenges of AI Ethics in Different National Contexts (June 30th)
The Montreal AI Ethics Institute is partnering with I2AI to host a discussion about the different challenges that national contexts can present when trying to implement AI ethics: for example, different cultural views, differing levels of AI literacy, and differing levels of resources available to dedicate to such implementation.
📅 June 30th (Wednesday)
🕛 11:00 AM – 12:30 PM ET
🎫 Get free tickets
🔬 Research summaries:
Explaining the Principles to Practices Gap in AI
As principles proliferate to guide AI development toward ethical, safe, and inclusive outcomes, we face a challenge: there is a significant gap between those principles and their implementation in practice. This paper outlines some potential causes of this gap in corporations: misaligned incentives, the complexity of AI’s impacts, disciplinary divides, the organizational distribution of responsibilities, the governance of knowledge, and the difficulty of identifying best practices. It concludes with a set of recommendations on how we can address these challenges.
To delve deeper, read the full summary here.
Fashion piracy and artificial intelligence—does the new creative environment come with new copyright issues?
As AI becomes more and more independent of human involvement, many questions are being asked about whether AI can be a designer, how copyright protection applies, and how both relate to creativity. Within the fashion context, an industry in which a legal copyright framework is notoriously difficult to introduce, these questions only get more interesting. In this sense, the paper fittingly asks: can AI actually be original?
To delve deeper, read the full summary here.
📰 Article summaries:
How censorship became the new crisis for social networks (Platformer)
What happened: While the biggest concern facing social media platforms used to be that they were leaving up problematic content, the pendulum has now swung the other way: they are taking down too much content, sometimes under pressure from local governments in the places where they operate. Trust between users and corporations has deteriorated to the point that even seemingly business-motivated moves, like how Stories are ranked relative to native posts on Instagram, are seen as political because they disproportionately affect activists who use these formats to draw attention to their causes.
Why it matters: This tussle between social media platforms, users, and governments will continue to oscillate unless we get clearer guidance on how to operate in a way that respects people’s rights while maintaining a healthy information ecosystem. The article makes some interesting arguments about the role the platforms played in the rise of authoritarianism, and how that rise was subsequently used by the very same people to raise even more concerns.
Between the lines: In the Montreal AI Ethics Institute’s work on online content moderation, we lay out guidance that expands on the Santa Clara Principles as a way to tackle some of these thorny issues. At the heart of those recommendations is an emphasis on taking a data-driven approach to assessing the impact of each of these decisions and sharing the results transparently with a larger audience.
10 steps to educate your company on AI fairness (World Economic Forum)
What happened: Given the large gap between principles and practice, the WEF convened a group of experts to propose a set of practical interventions to make Responsible AI a reality. The recommendations notably include assigning responsibility for Responsible AI education in the organization to a Chief AI Ethics Officer, a role our team covered a few weeks ago here. Clear communication of the AI ethics approach is another recommendation that resonated with us, something we covered at the start of 2021. Finally, the inclusion of a “learn-by-doing” approach was a welcome change from the recommendations made in numerous other documents and guidelines: we are still not sure which, if any, of these approaches will work well, and finding out requires experimentation and documentation of the results.
Why it matters: It is a welcome shift to see a high-level organization change the pace and direction of the current discourse in AI ethics toward something more solution-oriented, rather than the identification of problems that has been the dominant focus of conversations over the last couple of years.
Between the lines: While the recommendations do give some actionable advice, for the most part they are still quite abstract and require much more empirical evidence and consultation with people who have actually implemented these and similar ideas in their organizations. Reading them as presented still leaves much to be desired, partially undercutting their stated emphasis on being practical interventions.
These creepy fake humans herald a new age in AI (MIT Tech Review)
What happened: As data-use regulations tighten around the world, some organizations are understandably choosing synthetic data as an alternative to “messy, real-world data”. The article documents the work of some of the companies that provide synthetic data to other organizations based on tailored needs. While such curated data has its benefits, the article makes the case that it is not all roses: there are shortcomings that hinder its efficacy and compromise its promises of mitigating bias and protecting privacy.
Why it matters: The use of synthetic data has long been explored when one finds oneself short of the data needed to train ML systems that require large amounts of it to function well. But recent conversations, including this one, add a socio-technical lens, which is much needed if (and we suspect this will be the case) synthetic data becomes more commonplace in the development of modern AI systems.
Between the lines: Something that needs a bit more analysis, and is perhaps worthy of a short research endeavour (feel free to reach out to Abhishek, who is working on this!), is how guidelines and standards can be established for building AI systems on synthetic data in a way that meets our needs of mitigating bias and protecting privacy, among other ethical concerns. The toy sketch below shows how even a very simple generator surfaces these concerns.
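To make these concerns tangible, here is a minimal sketch of a naive synthetic-data generator. The approach (independent per-column Gaussians), the function name, and all numbers are illustrative assumptions on our part, not the methods used by the vendors in the article.

```python
# A deliberately naive synthetic-data generator: fit per-column Gaussians
# to a real table and resample. Illustrative only; real vendors use far
# richer generative models.
import numpy as np

def naive_synthetic(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample each column independently from a fitted normal distribution.

    Even this toy version shows the pitfalls: correlations between columns
    are destroyed, rare subgroups get smoothed away (undermining fairness
    checks), and samples can still land close to real records (privacy risk).
    """
    rng = np.random.default_rng(seed)
    mu, sigma = real.mean(axis=0), real.std(axis=0)
    return rng.normal(mu, sigma, size=(n_samples, real.shape[1]))

real = np.random.default_rng(1).normal(size=(1000, 3))  # stand-in for real data
fake = naive_synthetic(real, n_samples=500)
print(fake.shape)  # (500, 3)
```

Any guidelines or standards in this space would need to specify, at minimum, how correlation fidelity, subgroup coverage, and nearness to real records are measured before such data is used downstream.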
Across China, AI ‘city brains’ are changing how the government runs (South China Morning Post)
What happened: The idea of the smart city has been around for a long time, but now, with access to large-scale data storage, easily deployed sensors, and of course AI to analyze the reams of data, cities are experimenting with the idea of a “city brain”. The article gives examples of a few cities in China that have attempted this approach. In particular, Hangzhou has a city brain, put in place by Alibaba, that has led to benefits like more efficient traffic routing. In other cities, the approach has been deployed with the intention of improving the efficiency of bureaucratic processes like power-consumption management.
Why it matters: These approaches are not without their costs. In particular, there are significant financial costs to deploying the sensor infrastructure, data storage, and compute capabilities, which came to approximately 1 billion yuan in Hangzhou. There are also significant costs in terms of privacy and surveillance that are unwelcome to everyday citizens and some city officials alike. This has the potential to reshape the urban landscape as people start to alter their behaviour to preserve their privacy and freedom.
Between the lines: The success of such programs needs to be squared against the costs that they impose on the people inhabiting these cities. Unfortunately, some of these changes are macro and slow-moving and will only become evident in the long term, by which time the city administration may well have changed. Being more proactive and deliberate about how these technologies are deployed will be important. In particular, working with a city’s residents is crucial to getting this right, in a way that brings them benefits where they feel the city is currently lacking.
From elsewhere on the web:
The current state of affairs and a roadmap for effective carbon-accounting tooling in AI (Microsoft Devblogs)
Digital services consume a lot of energy, and in a world of accelerating climate change, we must be conscious of our carbon footprint in all parts of life. For the software we write, and specifically the AI systems we build, these considerations become even more important because of the large upfront computational resources that training large AI models consumes, and the carbon emissions that result from it. Effective carbon accounting for artificial intelligence systems is therefore critical! (A minimal sketch of the underlying arithmetic follows the link below.)
To delve deeper, read the full article here.
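To make the accounting concrete, here is a minimal sketch of the basic arithmetic such tooling performs: energy consumed, multiplied by datacenter overhead, multiplied by grid carbon intensity. The function name and all constants are illustrative assumptions, not values or APIs from the article.

```python
# Back-of-the-envelope carbon accounting for a training run.
# All constants are illustrative assumptions, not measured values.

def estimate_emissions_kg(
    gpu_count: int,
    avg_power_draw_watts: float,  # average draw per GPU under load (assumed)
    training_hours: float,
    pue: float = 1.5,             # datacenter Power Usage Effectiveness (assumed)
    grid_kg_co2_per_kwh: float = 0.4,  # regional grid carbon intensity (assumed)
) -> float:
    """Estimate CO2-equivalent emissions of a training run, in kilograms."""
    energy_kwh = gpu_count * avg_power_draw_watts * training_hours / 1000
    return energy_kwh * pue * grid_kg_co2_per_kwh

# Example: 8 GPUs drawing ~300 W each for 72 hours.
print(f"{estimate_emissions_kg(8, 300.0, 72.0):.1f} kg CO2e")  # ~103.7
```

Real tooling refines each of these factors (measured rather than assumed power draw, time-varying grid intensity, embodied hardware emissions), which is exactly the kind of gap a roadmap like the article’s aims to address.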
In case you missed it:
Mapping the Ethicality of Algorithmic Pricing
Pricing algorithms can predict an individual’s willingness to buy and adjust prices in real time to maximize overall revenue. Both dynamic pricing (based on market factors like supply and demand) and personalized pricing (based on individual behaviour) pose significant ethical challenges, especially around consumer privacy; a toy sketch after the link below contrasts the two.
To delve deeper, read the full report here.
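To illustrate the distinction drawn above, here is a toy sketch contrasting the two pricing modes. The functions, signals, and multipliers are illustrative assumptions, not the mechanisms analyzed in the report.

```python
# Toy contrast between dynamic and personalized pricing.
# Signals and multipliers are illustrative assumptions only.

def dynamic_price(base: float, demand: float, supply: float) -> float:
    """Adjust price from market-wide signals (e.g., surge pricing)."""
    return base * (demand / max(supply, 1.0))

def personalized_price(base: float, predicted_willingness: float) -> float:
    """Adjust price from an individual's predicted willingness to buy,
    which is where the consumer-privacy concerns arise."""
    return base * predicted_willingness

print(dynamic_price(10.0, demand=120.0, supply=100.0))      # 12.0 (market-driven)
print(personalized_price(10.0, predicted_willingness=1.3))  # 13.0 (individual-driven)
```

The ethical difference lies in the inputs: the first uses aggregate market data, while the second requires profiling an individual, which is where the privacy challenge arises.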
Take Action:
Events:
The Challenges of AI Ethics in Different National Contexts (June 30th)
The Montreal AI Ethics Institute is partnering with I2AI to host a discussion about the different challenges that national contexts can present when trying to implement AI ethics: for example, different cultural views, differing levels of AI literacy, and differing levels of resources available to dedicate to such implementation.
📅 June 30th (Wednesday)
🕛 11:00 AM – 12:30 PM ET
🎫 Get free tickets
Sustainable AI Conference (June 15-17, 2021)
The aim of this Sustainable AI conference is to get researchers talking about the environmental, social and economic costs of designing, developing, and using AI. The discussion will be directed at exploring: the normative grounding of the value of sustainability; the strength of the concept of sustainability; how to measure environmental costs of AI; understanding the intergenerational impacts of AI; and, informing public policy guidelines for the green, proportionate and sustainable development and use of AI.
Among the speakers is our founder Abhishek Gupta, whose talk is titled “How do we get to more sustainable AI systems? An analysis and roadmap for the community”.
📅 June 15-17, 2021
🎫 Learn more & register here