AI Ethics #31: Political power of platforms, hazard contribution modes, and breaking neural networks.

Did you know about the potential of using synthetic data for building more ethical AI systems?

Welcome to another edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at


This week’s overview:

Research summaries:

  • Hazard Contribution Modes of Machine Learning Components

  • The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power

Guest contribution:

  • Breaking Your Neural Network with Adversarial Examples

Article summaries:

  • The real promise of synthetic data

  • A Buyer’s Guide to AI in Health and Care

  • India's personal data privacy law triggers surveillance fears

  • Data audit of UK political parties finds laundry list of failings

  • India’s internet shutdowns function like ‘invisibility cloaks’

  • Selfies and Sharia police

But first, our call-to-action of the week: The State of AI Ethics (panel discussion), hosted by us!

Following up on our last report, we decided to invite some of the contributors back for a virtual panel discussion. There will be time for audience Q&A!

Sign up here.

  • Topic: 2020 in review, from an AI Ethics perspective

  • Speakers: Rumman Chowdhury (Accenture), Danit Gal (United Nations), Katya Klinova (Partnership on AI), Amba Kak (NYU’s AI Now Institute), Abhishek Gupta (Montreal AI Ethics Institute). Moderated by Victoria Heath (Montreal AI Ethics Institute).

  • Date: Wednesday, December 2nd from 12:30 PM EST – 2:00 PM EST

  • Free tickets via Eventbrite: here!

Research summaries:

Hazard Contribution Modes of Machine Learning Components by Colin Smith, Ewen Denney, Ganesh Pai

This paper provides a categorization framework for assessing the safety posture of a system that contains embedded machine learning components, and ties it to a safety assurance reasoning scheme that offers justifiable, demonstrable mechanisms for arguing the system’s safety.

To delve deeper, read our full summary here.

The Political Power of Platforms: How Current Attempts to Regulate Misinformation Amplify Opinion Power by Natali Helberger

Tackling the latest European regulations on online misinformation, Helberger challenges the current approach of viewing digital media platforms as informational ‘intermediaries’ and offers the concept of opinion power to demonstrate how social media platforms are becoming “governors of public opinion”.

To delve deeper, read our full summary here.

Article summaries:

The real promise of synthetic data (MIT News)

Synthetic data has been discussed on and off for the last few years as a way to protect privacy and to enable machine learning research and model development without sacrificing people’s rights, and as a way to work with datasets that remain deeply siloed for legislative or technical reasons. An apt comparison in this article captures what makes a good synthetic dataset: it should be like diet soda, retaining all the qualities of the original (appropriate correlations, structure, richness, diversity, etc.) with none of the calories (for example, resemblance to real people that could compromise their privacy). For a long time there were many disparate approaches to synthetic data generation; the work featured in this article, the Synthetic Data Vault, provides a handy toolkit that brings several of those techniques together under one roof.

An additional contribution of this research is support for constraints that govern the creation of the synthetic data: relationships that might not be captured explicitly in the statistical structure but are important nonetheless, for example, that the landing time of a flight must occur after its takeoff time. This leads to datasets that represent the real-world data more realistically and are therefore more usable, which will hopefully enhance the uptake of synthetic data over time.
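As a minimal sketch of the constraint idea (our own toy code, not the Synthetic Data Vault API), one way to guarantee that a synthetic landing time always follows its takeoff time is to model the flight *duration* rather than the landing time itself, so the constraint holds by construction:

```python
import random
import statistics

def fit_and_sample_flights(real_flights, n, seed=0):
    """Generate n synthetic (takeoff, landing) pairs from real records.

    Sampling takeoff and landing independently could produce a landing
    earlier than the takeoff. Instead, we fit simple Gaussian marginals
    to the takeoff times and the durations (landing - takeoff), then add
    a strictly positive sampled duration back onto each sampled takeoff,
    so landing > takeoff is guaranteed for every synthetic row.
    """
    rng = random.Random(seed)
    takeoffs = [t for t, _ in real_flights]
    durations = [l - t for t, l in real_flights]
    t_mu, t_sd = statistics.mean(takeoffs), statistics.stdev(takeoffs)
    d_mu, d_sd = statistics.mean(durations), statistics.stdev(durations)
    synthetic = []
    for _ in range(n):
        takeoff = rng.gauss(t_mu, t_sd)
        duration = abs(rng.gauss(d_mu, d_sd)) + 1e-6  # keep strictly positive
        synthetic.append((takeoff, takeoff + duration))
    return synthetic
```

Modeling the derived quantity enforces the constraint without reject-sampling; toolkits like the Synthetic Data Vault expose similar constraints declaratively and handle the transformation for you.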

A Buyer’s Guide to AI in Health and Care (NHSX)

The public sector gets quite a bit of flak when it comes to their procurement practices of AI solutions. This handy guide from the NHSX provides a few action items that officers in government and other public sector entities can utilize to make the integration of AI into their organizations better. A key consideration is to think about what problem one is trying to solve with the use of AI, and if it is really necessary or if non-AI automation methods could work instead. Related to this point is the availability of data, without which training an AI system would not be possible. This can be a problem in cases where the data is fragmented across different departments or units and poses a challenge for implementation.

Compliance with standards like those from NICE, and the ability to certify the software, especially when it is embedded inside medical devices in critical scenarios, is also essential. This needs to be an iterative process that recognizes that, in the case of online learning systems, the behaviour of the system will evolve over time and will need recalibration.

One of the things that really stood out in these recommendations was the adoption of a “no-surprises” mindset: being fully transparent about which data is being used, how the AI system is being utilized, and the limitations of the capabilities of the system. Utilizing techniques like data flow diagrams can assist with this. Completing a stakeholder impact assessment can also help to engender trust from those who are going to be responsible for using the system on an everyday basis. 

In speaking with vendors during the procurement process, due consideration ought to be given to the level of ongoing support one can expect to receive. If data collected from the use of the system is sent back to the vendor for training or calibration, this may raise IP and data governance issues as well. And finally, paying attention to how the system might be decommissioned and disentangled from the rest of the software infrastructure is also a key consideration in making procurement decisions.

India's personal data privacy law triggers surveillance fears (DW)

More and more nations are drafting their own privacy legislation, and India, home to more than a billion people, has thrown its hat into the ring with the Personal Data Protection (PDP) Act, due to be enacted in 2021. According to researchers, it is broadly modelled on the EU’s GDPR, as is the case with many other privacy laws around the world. Key highlights of the PDP include rules on storing consumer data, requirements to ask for consent before using private information, periodic audits of companies storing personal data, and protocols for reporting any breaches they experience.

Local data storage seems to be one of the most significant implications of the PDP; it will affect the business models of multinational companies that serve Indian users. Personal data is categorized into three tiers, each requiring a different degree of protection. As one of the researchers quoted in this article points out, requiring companies to store data locally raises concerns, since it opens that data up to potential state surveillance.

The intelligence community in India wasn’t created through an Act of Parliament, and hence its scope is nebulous. But for a country where progress across the nation has been uneven, rising digitalization has the potential to bridge the gaps between those that have and those that don’t.

Data audit of UK political parties finds laundry list of failings (TechCrunch)

Misuse of personal data for political targeting is one of the premier concerns in the application of privacy legislation within any nation. The Information Commissioner’s Office (ICO) in the UK published a list of failings on the part of British political parties in how they handled citizens’ personal data. The parties need this data to campaign effectively and to reach those they feel are most likely to vote for them, but such practices can quickly conflict with privacy legislation.

While the ICO provides an exhaustive list of failures, the recommended actions are soft, and the language around follow-up actions is also weak in its severity, as pointed out by researchers quoted in the article. The concerns raised by the ICO include: a lack of transparency and clarity in privacy notices; a lack of appropriate legal bases for the collection of personal data; a lack of lawful consent for that collection; a lack of clarity in how this data is combined with other data for voter profiling; and a failure to audit whether data obtained from third parties was obtained legally, along with an absence of continual checks on whether the vendors of this data are meeting their data protection obligations.

After the Cambridge Analytica incident, the ICO called for an “ethical pause” on the collection of personal data for political microtargeting. One implication of the GDPR concerns joint controllership: parties that use social media platforms for campaign targeting need to clearly define their roles and responsibilities, so that it is clear among the various actors which protections must be undertaken and who is responsible for what.

India’s internet shutdowns function like ‘invisibility cloaks’ (DW)

What happens when the internet is shut down in a region? This article looks at the intended and unintended consequences of such shutdowns and some of the motivations behind them. A quote in the article that particularly caught our attention compared an internet shutdown to using an axe to do brain surgery: there is an element of collective punishment, with blanket harms on people that are disproportionate to the actual crisis such a shutdown could ostensibly help to fix.

One concern with the way the shutdowns were conducted in India was that people were not given adequate notice, nor enough information on when service would be restored. In addition, given how many activities are conducted online, including the livelihoods of some small business owners, an internet shutdown also halts important activity in the interest of vaguely defined public safety considerations. A particularly striking comment in the article looks at the disparate impact on women during a shutdown. In parts of the country where women spend most of their time indoors at home, social media and the internet offer an avenue to connect with the outside world; during a shutdown, they lose that ability and become dependent on the male members of the household for contact with the outside world.

The courts in India have asked the government to reconsider the legislation in terms of its scope and transparency. Otherwise, when shutdowns are used to curb violence in the interest of public safety, we may end up in situations where, without an outlet to organize peaceful protests, people turn to violence, exacerbating the very problem the shutdowns were initiated to manage.

Selfies and Sharia police (Rest of World)

As one of the last bastions of freely accessible social media in Iran, Instagram has become a home for people to express themselves freely, away from prying eyes. In a country that restricts how women express themselves on the streets, many young women have taken to Instagram to showcase their style and unique talents. But a platform once seen as frivolous is increasingly being policed: a recent law in the country forbids women from appearing without a hijab. One of the teenagers interviewed in the article laments that since the lockdowns began, with women staying home and out of sight, the authorities have taken to social media to continue exercising the same influence they used to.

Instagram has also become politicized and is being used to spread awareness and messages around some of the political injustices and perhaps this is one of the reasons for increased scrutiny of people’s activity on the platform. Some of the more famous accounts have put up notices declaring that they conform with the latest religious diktats in an effort to evade any coercive actions against their accounts. 

Some expats use the platform to stay in touch with the goings-on of their local communities, and this self-censorship has limited their ability to get a true glimpse of what is happening in their own countries. The effects have resonated across the world in another sense as well: people residing outside Iran have also begun recalibrating their online presence to conform with the latest laws, so as not to run afoul of them.

From elsewhere on the web:

Commissioner issues proposals on regulating artificial intelligence (Office of the Privacy Commissioner of Canada)

The Office of the Privacy Commissioner of Canada (OPC) has released key recommendations for regulating AI. The recommendations are the result of a public consultation launched earlier this year, which we contributed to.

“Artificial intelligence has immense promise, but it must be implemented in ways that respect privacy, equality and other human rights,” said Commissioner Daniel Therrien. “A rights-based approach will support innovation and the responsible development of artificial intelligence.”

Guest post:

Breaking Your Neural Network with Adversarial Examples by Kenny Song (@helloksong). Co-founder of Citadel AI.

Fundamentally, a machine learning model is just a software program: it takes an input, steps through a series of computations, and produces an output. All software has bugs and vulnerabilities, and machine learning is no exception.
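The classic attack in this space is the Fast Gradient Sign Method (FGSM): nudge every input feature by ±ε in the direction that increases the model’s loss. As a hedged sketch (our own toy code, not taken from the guest post), here it is against the simplest possible “network”, a logistic-regression classifier:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a tiny logistic-regression model.

    Each feature is perturbed by +/-eps in the direction that increases
    the loss: x_adv_i = x_i + eps * sign(dL/dx_i). For logistic regression
    with prediction p = sigmoid(w.x + b) and cross-entropy loss against
    label y in {0, 1}, the input gradient is dL/dx = (p - y) * w.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

Even a small perturbation in exactly the right direction can flip the model’s prediction, which is what makes adversarial examples so unsettling.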

To delve deeper, read the full piece here.

If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to You can pitch us an idea before you write, or a completed draft.

In case you missed it:

The State of AI Ethics Report (Oct 2020)

Here's our 158-page report on The State of AI Ethics (October 2020), distilling the most important research & reporting in AI Ethics since our June report. This time, we've included exclusive content written by world-class AI Ethics experts from organizations including the United Nations, AI Now Institute, MIT, Partnership on AI, Accenture, and CIFAR.

To delve deeper, read the full report here.

Take Action:

Community Nomination Form - State of AI Ethics January 2021 Report

We’re inviting the AI ethics community to nominate researchers, practitioners, advocates, and community members in the domain of AI ethics to be featured in our upcoming State of AI Ethics report.

Nominate them here!

Help us understand privacy preferences on social media in India

Privacy on social media has increasingly become an important issue for participation on the Internet. This survey aims to collect information on Indian social media users’ privacy attitudes and behaviours. Our four platforms of focus are Facebook, YouTube, LinkedIn, and Twitter. You can find out more about our research here.

Take the survey here!

MAIEI Learning Community

Interested in tackling some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?

Join our learning community!

Our AI Ethics consulting services

In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.

We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.

Book a consultation with us today!


As part of our public competence building efforts, we frequently host events spanning different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.

The State of AI Ethics (Panel), hosted by us!

  • Topic: 2020 in review, from an AI Ethics perspective

  • Speakers: Rumman Chowdhury (Accenture), Danit Gal (United Nations), Katya Klinova (Partnership on AI), Amba Kak (NYU’s AI Now Institute), Abhishek Gupta (Montreal AI Ethics Institute). Moderated by Victoria Heath (Montreal AI Ethics Institute).

  • Date: Wednesday, December 2nd from 12:30 PM EST – 2:00 PM EST

  • Free tickets via Eventbrite: here!

Signing off for this week; we look forward to seeing you again next week! If you enjoyed this newsletter and know someone else who could benefit from it, please share it with them!

Share Montreal AI Ethics Institute

If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:

If you have feedback for this newsletter or think there’s something interesting we missed, email us at