The AI Ethics Brief #46: State of AI Ethics Panel, Cold Hard Data, Responsible AI in India, and more ...

Why getting transparent about your AI ethics methodology is a good thing

Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.

This week’s Brief is a ~14-minute read.


Support our work through Substack

💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.

*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in to Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.



This week’s overview:

✍️ What we’re thinking:

  • From the Founder’s Desk: Get transparent about your AI ethics methodology

  • The Sociology of AI Ethics: “Cold Hard Data” – Nothing Cold or Hard About It (Research Summary)

  • Business column: If It’s Free, You’re the Product: The New Normal in a Surveillance Economy

🔬 Research summaries:

  • Responsible AI #AIForAll: Approach Document for India - Part 1: Principles for Responsible AI

  • Technology on the Margins: AI and Global Migration Management From a Human Rights Perspective

📰 Article summaries:

  • Biden should double down on Trump’s policy of promoting AI within government (VentureBeat)

  • The AI Research Paper Was Real. The ‘Coauthor’ Wasn't (Wired)

  • Looking For An AI Ethicist? Good Luck (Datanami)

  • On Clubhouse, Saudis are speaking freely. For now. (Rest of World)


But first, our call-to-action this week:

Register for The State of AI Ethics Panel!

What's next for AI Ethics in 2021? And what is the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google? Hear from a world-class panel, including:

  • Danielle Wood — Assistant Professor in the Program in Media Arts & Sciences, MIT (@space_enabled)

  • Katlyn M Turner — Research Scientist, MIT Media Lab (@katlynmturner)

  • Catherine D’Ignazio — Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning, MIT (@kanarinka)

  • Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)

  • Abhishek Gupta — Founder and Principal Researcher, Montreal AI Ethics Institute (@atg_abhishek)

📅 March 24th (Wednesday)
🕛12 PM - 1:30 PM EST
🎫 Get free tickets



✍️ What we’re thinking:

From the Founder’s Desk:

Get transparent about your AI ethics methodology by Abhishek Gupta

So you've heard about AI ethics and its importance in building AI systems that are aligned with societal values. There might be many reasons why you choose to embark on this journey of incorporating responsible AI principles into your design, development, and deployment phases. But we've talked about those before here, and you'll find plenty of literature elsewhere that articulates why you should be doing it.

I want to take a few moments to talk about how you should be doing it, and to zero in on one aspect of that: transparency about your AI ethics methodology.

To delve deeper, read the full piece here.

The Sociology of AI Ethics:

“Cold Hard Data” – Nothing Cold or Hard About It

Neff and co-authors, through participant observation, examine how academic data science teams grapple with the ethical and social challenges of their work. The authors address data science critiques common in critical data studies and show how these concerns are manifest in the day-to-day practices of data science practitioners. They conclude by encouraging social science and humanities scholars and data scientists to work more closely together for the benefit of each discipline and to produce more ethical ways of knowing.

To delve deeper, read the full piece here.

Business column:

If It’s Free, You’re the Product: The New Normal in a Surveillance Economy by Masa Sweidan

The emergence of new financial models in the digital space, ranging from subscriptions to sponsored content, continues to shape much of our experience with major tech companies and platforms. In particular, advertising technology now fuels many of the free services we use and has fundamentally disrupted the way in which companies develop their financial sustainability.

To delve deeper, read the full piece here.


🔬 Research summaries:

Responsible AI #AIForAll: Approach Document for India - Part 1: Principles for Responsible AI by Anna Roy, Rohit Satish, and Tanay Mahindru

Given the release of the national strategy for AI from the Indian Government a couple of years ago, this paper articulates principles for building Responsible AI systems. It takes an approach that is rooted in the Indian Constitution to link the principles to concrete segments of the law that provide a firm mandate for the adoption of these principles. It draws from similar efforts from around the globe while ensuring that the principles are somewhat tailored to the Indian context.

To delve deeper, read the full summary here.

Technology on the Margins: AI and Global Migration Management From a Human Rights Perspective by Petra Molnar

The deployment of technologies like AI on migrant communities is unregulated and undocumented, effectively turning these communities into testing grounds. This paper argues that the lack of regulation protecting migrants from such deployments enables human rights violations and the exercise of control over these communities and their ability to migrate.

To delve deeper, read the full summary here.


📰 Article summaries:

Biden should double down on Trump’s policy of promoting AI within government (VentureBeat)

  • What happened: As the Biden administration gets off the ground, many Trump-era policies have been overturned, but the authors of this article call for maintaining the AI policy created under the previous administration.

  • Why it matters: It presents a unique opportunity to continue some of the good work that was done: making policies machine-readable, opening up datasets, and governing AI with dedicated policies rather than lumping it in with other technologies that don’t share the peculiar harms arising from the misuse of AI.

  • Between the lines: AI has become a hot-button issue, and clarity from the top down will help create precedents that move the entire ecosystem towards a healthier stance and set the foundation for initiatives like the Algorithmic Accountability Act to succeed.

The AI Research Paper Was Real. The ‘Coauthor’ Wasn't (Wired)

  • What happened: An MIT professor’s name and likeness were used to publish a couple of papers, which the professor discovered by accident while browsing research work. The journals have since retracted the papers, but Wired also found another paper in which the offending researchers included a second fake persona. That paper, too, was retracted after being reported.

  • Why it matters: In a field where things move very fast and with the rising use of preprint servers like arXiv, it is becoming harder to distinguish between real and faux research work. In a system that relies heavily on trust and integrity, such incidents can undermine legitimate work as well, especially in cases where publishers and authors are unfamiliar with best practices. 

  • Between the lines: As we have more scholars entering the fast-moving field of AI, awareness of such pitfalls and learning the implicit and explicit code of conduct in research is going to be essential. In addition, being able to distinguish meaningful from pseudo-research will also be essential if such trends continue.

Looking For An AI Ethicist? Good Luck (Datanami)

  • What happened: With the rising prominence of AI ethics concerns, organizations are rushing to hire for AI ethics roles, but the required skill set remains hard to find in a single individual. The article suggests treating this as a team sport: a group of individuals collectively provides the requisite skills of an AI ethicist, with someone in the C-suite overseeing that responsibility for the organization.

  • Why it matters: The paucity of people with the right skills might slow the adoption of responsible AI measures, and finding novel approaches to bridge the gap while the industry upskills is a good way to meet these challenges head-on.

On Clubhouse, Saudis are speaking freely. For now. (Rest of World)

  • What happened: Conversations that were typically taboo to discuss in the open, or even among friends, in Saudi Arabia are now happening openly on Clubhouse. The author points out that this is highly unusual, since people are not anonymous on the platform.

  • Why it matters: While the platform offers the freedom to discuss taboo topics in a fresh format, that empowerment comes with the risk of persecution if participants are exposed. The platform prohibits recording without written consent, but after some participants in rooms discussing politics and censorship were threatened with screenshots being posted to Twitter, the rooms were promptly shut down.

  • Between the lines: Unlike Iran and China, where apps can be banned, Saudi Arabia chooses to surveil rather than outright remove an app from the marketplace, and people are already practicing self-censorship and prudence to avoid persecution. This outcome may well be worse than an outright ban, since it limits expression on the platform and might misrepresent the Overton window for the range of issues Saudi Arabians care about.


From elsewhere on the web:

Final Report by The National Security Commission on Artificial Intelligence (NSCAI)

This Final Report presents the NSCAI’s strategy for winning the artificial intelligence era. The 16 chapters in the Main Report provide topline conclusions and recommendations. The accompanying Blueprints for Action outline more detailed steps that the U.S. Government should take to implement the recommendations.

Guest post:

If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai.


Take Action:

Events:

Register for The State of AI Ethics Panel!

What's next for AI Ethics in 2021? Hear from our world-class panel (full lineup in the call-to-action above).

📅 March 24th (Wednesday)
🕛12 PM - 1:30 PM EST
🎫 Get free tickets

Register to watch our Learning Community seminars (Live Series)

We're livestreaming our weekly AI Ethics Learning Community seminars! Our goal is to bring together a cohort of motivated individuals to discuss important research while allowing our wider community to learn by watching them and participating in the Zoom chat room.

📅 First session on March 17th, repeating on Wednesdays
🕔 5 PM – 6:30 PM EST
🎫 Get free tickets (and meet the cohort!)


Nominate underrecognized people in AI ethics to be featured in our next report!

We are inviting the AI ethics community to nominate researchers, practitioners, advocates, and community members in the domain of AI ethics to be featured in our upcoming State of AI Ethics report.

There is often great work being done in different parts of the world that does not get the attention it deserves due to the state of our information ecosystem and the manner in which platforms surface content. We would like to break that mold and shed some light on the valuable work being done by talented people.

Nominate now