The AI Ethics Brief #47: Governing AI, AI ethics in Canada and Spain, Arts and AI ethics, and more ...

Will computers ever write good novels?

Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.

This week’s Brief is a ~12-minute read.


Support our work through Substack

💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.

*NOTE: When you hit the subscribe button, you may end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.


This week’s overview:

✍️ What we’re thinking:

  • From the Founder’s Desk: Simple prompts to make the right decisions in AI ethics

  • Governance of AI: Who Is Governing AI Matters Just as Much as How It’s Designed

  • Sociology of AI Ethics: Sociological Perspectives on Artificial Intelligence: A Typological Reading

🔬Research summaries:

  • The Role of Arts in Shaping AI Ethics

  • Post-Mortem Privacy 2.0: Theory, Law and Technology

📅 Event summaries:

  • Abhishek Gupta on AI Ethics at the HBS Tech Conference (Keynote Summary)

  • 10 Takeaways from the State of AI Ethics in Canada & Spain

📰 Article summaries:

  • 6 ways AI can help save the planet (Raconteur)

  • Study shows that federated learning can lead to reduced carbon emissions (VentureBeat)

  • AI Editing will NOT Ruin Photography (PetaPixel)

  • Why computers will never write good novels (Nautilus)


But first, our call-to-action this week:

Register to watch our Learning Community seminars!

Want to audit our learning community seminars and be a fly on the wall? Here’s your chance: we’re live-streaming all of them via Zoom every Wednesday at 5 PM, beginning this week. Come hang out and contribute to the discussion via chat.

📅 March 17th – April 28th (every Wednesday)
🕛 5 PM – 6:30 PM EST
🎫 Get free tickets

Register now


✍️ What we’re thinking:

From the Founder’s Desk:

Simple prompts to make the right decisions in AI ethics by Abhishek Gupta

I was constantly nagged by the question: "What if we could make taking the right decisions in AI ethics easier than not?"

Speaking with some HCI experts and practitioners, I came up with the following 3-step methodology that I think will help us address the core concern raised by the above question. My goal with this is to make doing the right thing from an AI ethics perspective the default and easier option.

To delve deeper, read the full article here.

Governance of AI:

Who Is Governing AI Matters Just as Much as How It’s Designed by Muriam Fancy

Technology is evolving at a pace that is as exciting as it is alarming. We have seen various cases demonstrating the implications of deploying technology without foundational governance mechanisms to mould these technologies’ behaviour and consequences. Just as importantly, we do not have adequate governance mechanisms to protect communities, especially marginalized communities, from the harms that arise in the absence of AI governance and regulation.

To delve deeper, read the full summary here.

The Sociology of AI Ethics:

Sociological Perspectives on Artificial Intelligence: A Typological Reading

This paper surveys the existing sociological literature on AI. The author provides researchers new to the field with a typology of three analytical categories: scientific AI, technical AI, and cultural AI. The paper argues that these perspectives reflect the development of AI from a scientific field in the 20th century to its widespread commercial applications, and then to a socio-cultural phenomenon in the first two decades of the 21st century.

To delve deeper, read the full article here.


🔬 Research summaries:

The Role of Arts in Shaping AI Ethics

Art is an important tool for educating society about our cultural and natural histories. It’s also useful for identifying and solving our present challenges. In this paper, researchers Ramya Srinivasan and Kanji Uchino explain why we should tap into the arts to not only educate society about AI but to also create more ethical systems.

To delve deeper, read the full summary here.

Post-Mortem Privacy 2.0: Theory, Law and Technology

Debates surrounding internet privacy have focused mainly on the living, but what happens to our digital lives after we have passed? In this paper, Edina Harbinja offers a theoretical and doctrinal discussion of post-mortem privacy and makes a case for its legal recognition.

To delve deeper, read the full summary here.


📰 Article summaries:

6 ways AI can help save the planet (Raconteur)

  1. What happened: The UN’s Sustainable Development Goals (SDGs) set lofty targets for nations around the world, and AI might be able to help achieve them. The article documents some innovative cases where AI has helped in areas like forest conservation, species tracking, better recycling, and sewage monitoring. What was really interesting was the ability of AI systems to replace hitherto manual processes with always-on monitoring and alerting, which can lead to more effective outcomes and hence faster progress toward the SDGs.

  2. Why it matters: Environmental endeavours are typically resource-constrained. If AI systems can boost the efficacy of efforts already underway and open new avenues for improving the return on investment from these programs, we will see greater interest in and better outcomes from them.

  3. Between the lines: None of this is without potential concerns, however: the lack of involvement of domain experts can lead to inadvertent harms through second-order effects, and shifts in the allocation of funding can cause long-term damage to already poorly-resourced programs.

Study shows that federated learning can lead to reduced carbon emissions (VentureBeat)

  1. What happened: Researchers analyzed federated learning, in which models are trained locally on-device using the data present there and only the learned weights are shared back with a central server, as a means of reducing the carbon impact of large-scale machine learning models. They found that in certain cases, when training occurred in regions with lower carbon intensities for electricity generation, the technique yielded lower carbon emissions.

  2. Why it matters: As machine learning models grow in size, accounting for their environmental impacts is an important consideration. This was also one of the issues raised in the paper authored by Dr. Timnit Gebru that got her fired from Google.

  3. Between the lines: We have covered this in our research work that was presented at several conferences last year. There are limits to the impacts that techniques like federated learning can have, especially when we account for the limited information that we have available on the energy mix used by different regions.
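The mechanism the study examines can be sketched in a few lines. Below is a minimal, illustrative federated averaging loop: each client trains on its own data and only weight vectors travel to the server. This is not the researchers’ actual setup; the linear-regression task and all names here are assumptions made for the sake of a runnable example.

```python
# Minimal sketch of federated averaging (FedAvg): clients train locally
# and share only their learned weights; raw data never leaves the device.
# Illustrative only -- not the setup from the study covered above.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Server aggregates client weights, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Simulate three clients holding disjoint shards of a shared linear task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # should approach true_w = [2., -1.]
```

The carbon angle in the article comes from where each `local_update` physically runs: shifting that compute to devices in regions with a cleaner electricity mix is what can lower overall emissions.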

AI Editing will NOT Ruin Photography (PetaPixel)

  1. What happened: Adobe Photoshop announced the inclusion of AI features that offer single-click capabilities for tasks that previously required many different steps and processes on the part of artists to produce completed works. The creative community has been up in arms debating the ethical implications of what this means and how it will impact the artistic community writ large.

  2. Why it matters: The creative domain has long been positioned as the final bastion that will hold its own in the onslaught of AI capabilities. Artists’ perception and inclusion of tools that have an impact on their workflow will be an essential consideration as the field changes over the next few years. 

  3. Between the lines: Nuanced views that take into account the actual capabilities and limitations of the tooling, and how they impact workflows, will be critical in the discussions surrounding how AI will affect the field of art. In particular, listening to practitioners will help us better understand their concerns.

Why computers will never write good novels (Nautilus)

  1. What happened: Drawing on a peer-reviewed argument that computers cannot read literature, the author of this article lays out in no uncertain terms how the current crop of AI, centred on arithmetic logic units (ALUs) and with limited capacity for the kind of causal reasoning humans perform, will never be able to produce literature the way humans do. In fact, through well-laid-out arguments, the author makes the case that the outputs of such systems are merely word soups that don’t yet (and most likely never will) have the power to move our neurons the way human-written literature does.

  2. Why it matters: With the rising capabilities exhibited by systems like GPT-3, we have seen numerous articles declaring the end of the human creative endeavour, pointing to things like AI-generated art, music, and articles. But those outputs seem to remain firmly in the realm of rehashing existing content in mostly non-creative ways.

  3. Between the lines: As we automate away some of the drudgery involved in creating written content, I would argue that AI will enhance creators’ ability to bring even more creative work into the world by supercharging artists’ workflows.


From our Living Dictionary:

Definition of ‘Open-source’

Open-source software has code that anyone can inspect, modify, and enhance.

👇 Learn more about why it matters in AI ethics via our Living dictionary.

Explore the Living Dictionary!

From elsewhere on the web:

Abhishek Gupta on AI Ethics at the HBS Tech Conference (Keynote Summary)

The AI space has been dominated by controversy in recent months, especially surrounding moves made by Google. In his talk at the Harvard Business School Tech Conference, our founder Abhishek Gupta explains how the space can go about effecting the necessary change. From business executives to rank-and-file employees, there are things everyone can do to take part in this change, but they must be done sooner rather than later.

To delve deeper, read the full summary here.

Event summaries:

10 Takeaways from the State of AI Ethics in Canada & Spain

This event recap was co-written by Muriam Fancy (our Network Engagement Manager) and Connor Wright (our Partnerships Manager), who co-hosted our “The State of AI Ethics in Spain and Canada” virtual meetup in partnership with OdiseIA earlier in February.

To delve deeper, read the full summary here.

Guest post:

If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.


Take Action:

Events:

Register for The State of AI Ethics Panel!

What's next for AI Ethics in 2021? And what is the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google? Hear from a world-class panel, including:

  • Danielle Wood — Assistant Professor in the Program in Media Arts & Sciences, MIT (@space_enabled)

  • Katlyn M Turner — Research Scientist, MIT Media Lab (@katlynmturner)

  • Catherine D’Ignazio — Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning, MIT (@kanarinka)

  • Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)

  • Abhishek Gupta — Founder, Montreal AI Ethics Institute (@atg_abhishek)

📅 March 24th (Wednesday)
🕛12 PM - 1:30 PM EST
🎫 Get free tickets