AI Ethics #14: Good Deepfakes, Digital New Deal, Robustness of Neural Networks, Misinformation on WhatsApp, Geo-indistinguishability and more ...
Trust and Transparency in Contact-tracing apps, harnessing adversarial examples for AI systems, and other highlights from the world of AI Ethics!
Welcome to the fourteenth edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing summaries with you and presenting our thoughts on how they link with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us on: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
State of AI Ethics June 2020 report:
We released our State of AI Ethics June 2020 report, which captures the most impactful and meaningful research and development from across the world and compiles it into a single source. This is meant to serve as a quick reference and as a Marauder’s Map to help you navigate a field that is evolving so rapidly. If you find it useful and know others who can benefit from a handy reference to help them navigate the changes in the field, please feel free to share this with them!
Santa Clara Principles:
In April 2020, the Electronic Frontier Foundation (EFF) publicly called for comments on expanding and improving the Santa Clara Principles on Transparency and Accountability (SCP), originally published in May 2018. The Montreal AI Ethics Institute (MAIEI) responded to this call by drafting a set of recommendations based on insights and analysis by the MAIEI staff and supplemented by workshop contributions from the AI Ethics community convened during two online public consultation meetups.
Summary:
In research summaries this week, we cover trust and transparency in contact-tracing applications, detecting misinformation on WhatsApp without breaking encryption, evaluating the robustness of neural networks, explaining and harnessing adversarial examples, and geo-indistinguishability as a differential privacy framework for location data.
In article summaries this week, we cover how there are bigger issues than data with the Apple-Google contact-tracing kit, how public goods are lost to Big Tech, how compute and labor are important pieces in the flywheel of AI apart from data, a digital new deal as a significant overhaul of the existing regulations in technology, nuanced conversations on AI and China, and how deepfakes can actually be used for doing some good.
Our learning communities and the Co-Create program continue to receive an overwhelming response! Thank you everyone!
We operate on the open learning concept where we have a collaborative syllabus on each of the focus areas and meet every two weeks to learn from our peers. You can fill out this form to receive an invite!
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
Photo by Siora Photography on Unsplash
Trust and Transparency in Contact Tracing Applications by Stacy Hobson, Michael Hind, Aleksandra Mojsilović and Kush R. Varshney
In a matter of days, a contact tracing application will be deployed in Ontario. It is estimated that 50-60% of the population must use the app for it to work as intended, warning individuals of exposure to COVID-19. But how much do we really know about this technology? Of course, automatic contact tracing can be more accurate, efficient, and comprehensive than manual contact tracing at identifying and notifying individuals who have been exposed to the virus; but what are the trade-offs of this solution? To guide our thinking, the authors of “Trust and Transparency in Contact Tracing Applications” have developed FactSheets, a list of questions users should consider before downloading a contact tracing application.
To delve deeper, read our full summary here.
Detecting Misinformation on WhatsApp without Breaking Encryption by Reis, J. C. S., Melo, P., Garimella, K., & Benevenuto, F.
Facebook may own WhatsApp, but the platform differs from typical social media sites such as Facebook and Twitter: its end-to-end encryption makes it unique as a communication channel. WhatsApp has over 1.5 billion users and has become a source for sharing news in countries like Brazil and India, where smartphones are used for news access more than other devices (Reis et al., 2020). This research study focuses on these two countries and how misinformation has affected democratic discussion in them. Over 55 billion messages are sent a day, of which about 4.5 billion are images (Reis et al., 2020). Because of the encryption, WhatsApp has no way to monitor or flag inappropriate, potentially dangerous, or fake images the way Facebook can. The researchers propose a machine learning approach in which WhatsApp can automatically detect when a user shares images and videos that have previously been labeled as misinformation in the Facebook database.
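To make the proposed approach concrete, here is a minimal sketch (not the authors' implementation) of how a client could hash an outgoing image on the device and check it against a locally stored set of hashes for previously fact-checked content, so that nothing needs to be decrypted server-side. The hash entries, threshold, and function names below are illustrative only.

```python
# Sketch only: on-device matching of an image against known-misinformation hashes,
# so message contents never leave the device unencrypted.
import imagehash
from PIL import Image

KNOWN_MISINFO_HASHES = {
    imagehash.hex_to_hash("d1c4f0f0e0c8c0c0"),  # placeholder entry
}
HAMMING_THRESHOLD = 6  # small distances tolerate re-compression and resizing

def flag_if_known_misinformation(image_path: str) -> bool:
    """Return True if the image perceptually matches a known-misinformation hash."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= HAMMING_THRESHOLD
               for known in KNOWN_MISINFO_HASHES)

if flag_if_known_misinformation("outgoing.jpg"):
    print("Warning: this image matches previously fact-checked misinformation.")
```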
To delve deeper, read our full summary here.
Towards Evaluating the Robustness of Neural Networks by Nicholas Carlini and David Wagner
Defensive distillation is a defense proposed for hardening neural networks against adversarial examples; it was reported to defeat existing attack algorithms, reducing their success probability from 95% to 0.5%.
The paper is framed around the broad question of how robust a neural network is to adversarial attacks. It lays out two complementary approaches: (a) constructing proofs that lower-bound robustness, and (b) demonstrating attacks that upper-bound it. The paper pursues the second while exposing the gaps in the first (essentially, the weakness of distilled networks).
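For readers who want a feel for what such an attack looks like, below is a heavily simplified sketch of an optimization-based L2 attack in the spirit of Carlini and Wagner's formulation. It omits details from the paper such as the tanh change of variables and the binary search over the trade-off constant c; the function and parameter names are ours, not the paper's.

```python
# Sketch: jointly minimize the size of the perturbation and a margin loss that
# pushes the classifier towards a chosen target class.
import torch

def l2_attack(model, x, target, c=1.0, kappa=0.0, steps=200, lr=1e-2):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)          # keep the image in a valid range
        logits = model(adv)
        target_logit = logits[0, target]
        other_logit = logits[0, torch.arange(logits.size(1)) != target].max()
        # Margin loss: make the target class beat all others by at least kappa.
        f = torch.clamp(other_logit - target_logit, min=-kappa)
        loss = (delta ** 2).sum() + c * f      # distortion + misclassification term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)
```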
To delve deeper, read our full summary here.
From our learning communities:
Research that we covered in the learning communities at MAIEI this week, summarized for ease of reading:
Explaining and Harnessing Adversarial Examples by Ian J. Goodfellow, Jonathan Shlens and Christian Szegedy
A bemusing weakness of many supervised machine learning (ML) models, including neural networks (NNs), is adversarial examples (AEs). AEs are inputs generated by adding a small perturbation to a correctly classified input, causing the model to misclassify the resulting AE with high confidence. Goodfellow et al. propose a linear explanation of AEs, in which the vulnerability of ML models to AEs is considered a by-product of their linear behaviour and high-dimensional feature space. In other words, small perturbations on an input can alter its classification because the change in NN activation (as a result of the perturbation) scales with the size of the input vector.
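The paper's fast gradient sign method (FGSM) is compact enough to sketch directly; the snippet below is a minimal PyTorch rendering of the idea, with the epsilon value and helper name being illustrative rather than from the paper.

```python
# FGSM sketch: perturb the input in the direction of the sign of the loss
# gradient, scaled by a small epsilon.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x J(theta, x, y))
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```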
To delve deeper, read our full summary here.
Geo-indistinguishability: Differential privacy for location-based systems by Miguel Andrés, Nicolás Bordenabe, Konstantinos Chatzikokolakis and Catuscia Palamidessi
The authors discuss how the onslaught of location-based systems (LBS) has resulted in considerable challenges to locational privacy. Add to this the fact that much of this individual location data is stored on unknown and arguably insecure servers, and there is a clear need to safeguard an individual’s exact location while she uses an LBS. Geo-indistinguishability is the novel mechanism this paper proposes to strike that balance, letting a user of an LBS disclose just enough of her approximate location to benefit efficiently from these services while not divulging her precise location.
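As a rough illustration, here is a sketch of the planar Laplace mechanism the paper builds on: an angle is drawn uniformly and a radius is drawn via the inverse CDF, which the authors express using the Lambert W function. The snippet treats coordinates as points on a flat plane (ignoring Earth curvature), and the parameter values and function name are illustrative only.

```python
# Sketch of the planar Laplace mechanism for geo-indistinguishability.
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(true_location, epsilon, rng=np.random.default_rng()):
    """Return a noisy 2-D location providing epsilon-geo-indistinguishability."""
    theta = rng.uniform(0, 2 * np.pi)            # direction of the noise
    p = rng.uniform(0, 1)                        # probability mass for the radius
    # Inverse CDF of the radius, using the -1 branch of the Lambert W function.
    r = -(lambertw((p - 1) / np.e, k=-1).real + 1) / epsilon
    x, y = true_location
    return x + r * np.cos(theta), y + r * np.sin(theta)

# e.g. report a location (in metres) perturbed at privacy level epsilon = 0.01 per metre
noisy = planar_laplace_noise((300.0, 120.0), epsilon=0.01)
```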
To delve deeper, read our full summary here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
Privacy is not the problem with the Apple-Google contact-tracing toolkit (The Guardian)
From the creator of the DP3T framework for doing decentralized privacy-preserving proximity tracing, this article offers a fresh take on some of the problems that we face when debating contact- and proximity-tracing apps that have the potential to reshape our social fabric if they are deployed widely. It starts by pointing out the history of passports that were introduced temporarily during WW1 but were retained during the time of the Spanish flu as a means of curbing the spread of that pandemic. Measures introduced during times of emergencies have the potential to persist beyond their original purpose and they can morph into mechanisms that have the power to rethread societal fabric as we know it.
In the push for having decentralized contact- and proximity-tracing, the Apple-Google protocol offers an apparent win that is hard to replace, especially given its ubiquity in terms of covering almost everyone with a smartphone. The UK and France have advocated for centralized protocols citing reasons of reducing fraud and lowering risks of snooping behaviour. But, in the midst of all this, an inherently adversarial framing has emerged which pits large corporations against nation states, each viewing the other as a sovereign entity that doesn’t have any other recourse towards arriving at meaningful solutions.
While data privacy can be a strong reason for the push towards decentralized apps, this doesn’t detract from the problem of centralized control over the compute infrastructure, whose operators will still be able to assert significant control over society. The article concludes by pointing out that deflating digital power isn’t just about the governance surrounding data but, more so, about understanding the systemic forces at play in the underlying infrastructure.
The Loss Of Public Goods To Big Tech (Noema Magazine)
A timely article that elucidates the massive asymmetry between private and public goods: it highlights how private corporations are able to benefit from the vast public infrastructure that supports their activities while they eschew their responsibility to contribute back into the pool from which they draw. Pulling our attention to “charity theatre”, the author asks us to examine how philanthropic efforts from these corporations are minimal drops in the bucket compared to the benefits they extract from publicly funded facilities. Pointing to the tax benefits and offshoring practiced by these firms, and to their expenditures on technology that enables malpractice, such as the suppressive use of facial recognition technology on people in the BLM movement, the author advocates rerouting that money towards initiatives that have the potential to bring prosperity to people.
The platforms are also able to polarize society through the passive role that they play in the propagation of misinformation and other harmful content while simultaneously profiting from it. The incentives from the public welfare and business standpoint are so misaligned that they are poised in an almost zero-sum game where they are encouraged to take from one to benefit the other.
Instead of helping existing public infrastructure, the firms actively profit from its erosion as it drives more users and customers to them looking for products and services that might have been provided elsewhere by the public sector. There is also the asymmetry in how crises like the current pandemic exacerbate the impact faced by those who have too little while those who are powerful have to give up close to nothing, often even gaining in the times of such crises.
Finally, making a call for addressing problems like the current pandemic, at least in part, the author appeals to us to make a collective effort to help each other through these times, arguing that it would be unjust and unethical not to do so. The pandemic has the potential to reshape society significantly; now is the time to grab the opportunity to shape it into something that benefits us all.
Data, Compute, Labour (Ada Lovelace Institute)
A timely piece given the recent announcement of a push for a National Research Cloud in the US, this article moves the conversation beyond the familiar trope of data powering the flywheel of building monopolies in the technology domain. Zooming out a bit, we see that other factors play an equally important role in spinning that flywheel: namely, compute capacity and labor. In particular, as highlighted in research work from the Montreal AI Ethics Institute here, compute capacity plays a polarizing role in how research and development is done in the domain of AI. As we move towards using larger models that are ever more computationally expensive, this has a strongly prohibitive effect on those who don’t have access to large clusters of computing power (i.e. typically people outside of universities and large corporations), which chills the scientific discovery process. By no means are we advocating that large models shouldn’t be explored; instead, we ask that people investigate how we can move towards more compute-efficient architectures and approaches. In the same vein, having more public goods-styled compute facilities and data stores would also allow more people outside of these traditional venues to participate in AI research.
The other factor is labor: highly skilled workers who are able to work with this hardware are often scooped up from various places and concentrated in corporate hubs where they are paid exorbitant amounts of money. This has notably affected academia, which has been losing star researchers to corporations (even when they hold dual roles, they are able to dedicate less time to nurturing the next generation of talent; some might argue that dual appointments bring a valuable mix of industry and research experience, yet only time will tell if that effect is actually realized in practice).
How deepfakes could actually do some good (Vox)
Breaking away from the oft-cited examples of how “deepfakes” can cause harm (and there is every reason to cite them, given the abuses that have resulted from this technology), this article sheds new light on how the technology could be used for good. Such dual use, something the AI domain is quite familiar with, poses new challenges in assessing the regulatory stance around the public use of this technology.
The article mentions the upcoming HBO documentary Welcome to Chechnya, in which members of the heavily persecuted LGBTQ community there share their experiences without having to give up their identities. They are anonymized by digitally grafting on the faces of volunteers whom the documentary makers call “activists”, in an attempt to humanize them and better convey their experiences, rather than using the typical technique of simply blotting out a person’s face. The creators of the documentary consulted researchers experienced in neuroscience and psychology to minimize the “uncanny valley” effect and prevent the experience from being too jarring for audiences. The results, while admittedly slightly off, do a great job of keeping it realistic.
There are ethical concerns even when this sort of technology is used explicitly for good, as it might backfire against the “activists”; nonetheless, it has great potential to empower people to share their experiences without the trauma of having to reveal their identities. There are similar instances of this technology being used in Snapchat filters so that victims of sexual abuse can share their experiences more freely. Ultimately, what separates the good uses of “deepfakes” from the bad is clear consent from those whose likeness is being used and from those on whom it is used, along with a clear articulation of the purpose for which this is being done.
Is it time for a ‘Digital New Deal’ to rein in Big Tech? (Protocol)
Advocating for a significant overhaul in regulation and legislation around the power that Big Tech holds, this article talks about the “Digital New Deal” to evoke in people the desire to move towards broad changes rather than incrementalist reforms that currently plague the system. The four pillars of this deal are more robust antitrust enforcement, nondiscrimination principles, transparency, and public utility regulation.
Going into some details on each, the article mentions how we need a higher degree of scrutiny and reflection in how the power dynamics are framed when thinking about the required regulations. It especially discusses the current ecosystem in the US whereby the calls for regulating Big Tech are mired in political motivations as well, which have the potential of weakening some of the calls to action because of people’s specific political alignments.
As we’ve mentioned in many past editions of this newsletter, a bold move towards action is required and that needs steps to be outlined in a clear and concise fashion so that there is impetus to move and act on them rather than be paralyzed by over-analysis. This proposal is a positive move in that direction.
China and AI: what the world can learn and what it should be wary of (The Conversation)
A brief but much required article that offers a balanced view on the developments in AI coming out of China. Specifically, we require a critical look into the trope of pitting the US against China in an “AI arms race” which already sets the stage in an adversarial manner. Instead, we need to weigh the pros and cons from a more holistic standpoint rather than taking a reductionist view of the whole debate.
The article highlights some of the positive developments coming out of China in the use of AI; in keeping with the current pandemic, the spotlighted solutions include the use of AI in medicine geared towards combating COVID-19. There is emerging evidence of how this can benefit those who don’t live in regions with sufficient healthcare resources.
In the negative developments segment, most readers of this newsletter are already familiar with a lot of the problems that arise from the unmitigated use of facial recognition technology applied indiscriminately even in the face of overwhelming evidence that there are many false positives and errors. A key requirement in making better use of this technology is transparency and accountability as fundamental tenets incorporated into how these systems are deployed.
Lastly, the article clarifies another outsider perception: many perceive the national AI strategy in China as the key driver shaping the ecosystem, yet there are ample grassroots and municipally driven initiatives that reimagine and reinterpret the national strategy, tailoring it to the local context and economy. Ultimately, the article calls for more informed discussions on the subject to evade the crutch of reductionism, which will ultimately harm the quality of debate on this subject.
From the archives:
Here’s an article from our blogs that we think is worth another look:
The Social Contract for AI by Mirka Snyder Caron and Abhishek Gupta
Like any technology, AI systems come with inherent risks and potential benefits. They bring potential disruption of established norms and methods of work, along with societal impacts and externalities. One may think of the adoption of technology as a form of social contract, which may evolve or fluctuate in time, scale, and impact. It is important to keep in mind that for AI, meeting the expectations of this social contract is critical, because recklessly driving the adoption and implementation of unsafe, irresponsible, or unethical AI systems may trigger serious backlash against the industry and academia involved, which could take decades to resolve, if not seriously harm society.
For the purpose of this paper, we consider that a social contract arises when there is sufficient consensus within society to adopt and implement this new technology. As such, to enable a social contract to arise for the adoption and implementation of AI, developing: 1) A socially accepted purpose, through 2) A safe and responsible method, with 3) A socially aware level of risk involved, for 4) A socially beneficial outcome, is key.
To delve deeper, read the full article here.
Guest contributions:
The Ethics of AI in Medtech: A Discussion With Abhishek Gupta by Jeremie Abitbol, Co-founder and CEO at Castella Medical
The development of new computational methods and the heightening awareness around the value of artificial intelligence (AI) are finally starting to garner interest within the healthcare industry. The sensitive nature of healthcare services, however, creates medicolegal challenges and may entail particular ethical dilemmas for emerging AI solutions.
To discuss some of the challenges that might be faced by players in the data management and AI space, Jeremie Abitbol, PhD, CEO of Castella Medical, Inc., is speaking with Abhishek Gupta, founder of the Montreal AI Ethics Institute.
To delve deeper, read the full article here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Events:
As a part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems; you can see a complete list here: https://montrealethics.ai/meetup
AI Ethics: UNESCO AI Ethics Public Consultation
July 15, 11:45 AM - 1:15 PM ET (Online)
AI Ethics: Ontario Government Alpha Principles on AI
July 22, 11:45 AM - 1:15 PM ET (Online)
You can find all the details on the event page, please make sure to register as we have limited spots (because of the online hosting solution).
From elsewhere on the web:
Things from our network and more that we found interesting and worth your time.
Our staff researchers ran a 90-minute workshop on AI Ethics for 30 emerging researchers and practitioners in AI at the AI4Good Summer Lab, supported by CIFAR, OSMO, Google, DeepMind, and other industry partners. If you’d like us to do the same for your organization, please don’t hesitate to hit reply to this email or send an email to support@montrealethics.ai.
Camylle Lanteigne, Staff Researcher at MAIEI, presented her work on SECure: A Social and Environmental Certificate for AI Systems at the Tracing the Veins conference.
“The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity’s place in an algorithm-driven world, today published its inaugural State of AI Ethics report. The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter.” - Kyle Wiggers, VentureBeat
Signing off for this week; we look forward to doing it again in a week! If you enjoyed this and know someone else who can benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai