AI Ethics #17: Trustworthy ML, Decolonial AI, organized social media manipulation, digital sock puppets, unpredictability of AI and more ...
Ambiguous labor impacts of automation, exorbitant costs of training ML, trusting digital assistants with our privacy, purgatory of digital punishment, and more from the world of AI Ethics!
Welcome to the seventeenth edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing a summary with you and presenting our thoughts on how they link with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us on: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
Summary of the content this week:
In research summaries this week, we talk about the ambiguous labor impacts of automating prediction, what it means for machine learning to be trustworthy, decolonial theory as sociotechnical foresight in AI, trust and transparency in contact tracing applications, whether robots can be taught to become more human, and a global inventory of social media manipulation.
In article summaries this week, we cover the failure of AI to mitigate climate change, the unpredictability of AI, whether we can trust digital assistants to keep our data private, the rising costs of training machine learning systems, the purgatory of digital punishment, and how fake accounts manipulate what we see on social media.
MAIEI Community Initiatives:
Our learning communities and the Co-Create program continue to receive an overwhelming response! Thank you everyone!
We operate on the open learning concept where we have a collaborative syllabus on each of the focus areas and meet every two weeks to learn from our peers. You can fill out this form to receive an invite!
MAIEI Serendipity Space:
The first session was a great success and we encourage you to sign up for the next one!
This will be a 30-minute session from 12:15 pm ET to 12:45 pm ET, so bring your lunch (or tea/coffee)! Register here to get started!
State of AI Ethics June 2020 report:
We released our State of AI Ethics June 2020 report, which captures the most impactful and meaningful research and development from across the world, compiled into a single source. It is meant to serve as a quick reference and as a Marauder’s Map to help you navigate a field that is evolving and changing so rapidly. If you find it useful and know others who can benefit from a handy reference to help them navigate the changes in the field, please feel free to share this with them!
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb
The automation impacts of artificial intelligence have been the subject of much discussion and debate, often hampered by a poor demarcation of the limits of AI. Agrawal, Gans, and Goldfarb have provided a framework that helps us understand where AI fits into organizations and what tasks are at risk of being automated. They argue that AI is fundamentally a prediction technology, and prediction is one of the key aspects of a decision task, though not the only one. Judgement and action are also critical parts of decision-making and are not susceptible to direct automation by AI. This does not mean, however, that they will not be affected by improved prediction, since the value of judgement and action may change as predictions become cheaper, better, and faster. Using this framework, they identify four possible ways AI can affect jobs: replacing prediction tasks, replacing entire decision tasks, augmenting decision tasks, and creating new decision tasks.
To delve deeper, read our full summary here.
What does it mean for ML to be trustworthy? by Nicolas Papernot et al.
With the world being increasingly governed by machine learning algorithms, how can we make sure we can trust this process? Nicolas Papernot’s 33-minute video presents his and his group’s findings on how to do just that. Ranging across robustness, Lp norms, differential privacy, and deepfakes, the video focuses on two areas of making ML trustworthy: admission control at test time and model governance. Both are considered at length and proposed as ways forward for making ML more trustworthy through improved privacy, but neither is foolproof. The video closes with a thought-provoking conclusion on the alignment of human and ML norms.
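As an aside from us (not from the video): differential privacy, one of the topics Papernot touches on, is commonly illustrated with the Laplace mechanism, which adds calibrated noise to a query's answer. Below is a minimal Python sketch; the function name and parameter values are our own illustrative choices, not anything prescribed by the talk.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private estimate of true_value.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    mechanism for achieving epsilon-differential privacy on a numeric query.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
# Smaller epsilon means stronger privacy but noisier answers.
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
print(f"Noisy count: {private_count:.1f}")
```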
To delve deeper, read our full summary here.
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence by Shakir Mohamed, Marie-Therese Png, William Isaac
Although it may not always seem evident, the development of AI technologies is part of the patterns of power that characterize our intellectual, political, economic, and social worlds. Recognizing and identifying these patterns is essential to ensure that those at the bottom of society are not disproportionately affected by the adverse effects of technological innovation. To protect and prevent harm against vulnerable groups, the authors recommend adopting a decolonial critical approach in AI to gain better foresight and ethical judgement about advances in the field. They offer three tactics that can lead to the creation of decolonial artificial intelligence: creating a critical technical practice for AI, seeking reverse tutelage and reverse pedagogies, and renewing affective and political communities.
To delve deeper, read our full summary here.
From our learning communities:
Research that we covered in the learning communities at MAIEI this week, summarized for ease of reading:
Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation by Samantha Bradshaw and Philip N. Howard
As social media platforms not only allow political actors to reach massive audiences, but also to fine-tune target audiences by location or demographic characteristics, they are becoming increasingly popular domains to carry out political agendas. Governments across the world—democratic and authoritarian alike—are expanding the capacity and sophistication of their “cyber troops” operations to capitalize on this medium of communication. In this report, Samantha Bradshaw and Philip N. Howard document the characteristics of 48 countries’ computational propaganda campaigns. While the size, funding, and coordination capacities of each country’s online operations vary, one thing remains clear, regardless of location: social media platforms face an increased risk of artificial amplification, content suppression, and media manipulation.
To delve deeper, read our full summary here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
Can AI mitigate the climate crisis? Not really. (AlgorithmWatch)
With so much industry push toward using AI for X (where X is any domain), it isn't surprising that some of the claims are overblown, as covered previously in this newsletter. Specifically, the use of AI to mitigate climate change, touted as the next frontier, might be "innovation theater" rather than substance, as this investigative piece highlights. It draws on perspectives from Germany, a country with a high concentration of wind energy production and use that is also home to some of the largest auto manufacturers; Germans have higher-than-average CO2 footprints. In trying to dig deeper into concretely deployed solutions using AI to mitigate CO2 emissions, one of the examples offered, half-jokingly, was smart washing machines, which already have features that use timers to schedule laundry for times when there might be a lower burden on the grid.
In parallel, more accurate energy demand forecasting could enable higher use of renewable energy sources and minimize wastage. Researchers pointed out that, at the end of the day, AI is just another tool in the toolbox and not a catch-all solution, a view supported by the limited empirical evidence for the efficacy of machine learning systems in climate change mitigation.
Consultancy firms like PwC create listicles citing the potential uses of this technology without scientific evidence, and such reports get picked up by other outlets that propagate the same information. Unfortunately, imagined uses are not sufficient to solve the climate change crisis. An oft-ignored aspect of this trope is the impact that running deep learning models has on the environment, something covered in research work by the Montreal AI Ethics Institute.
Unpredictability of Artificial Intelligence (Hackernoon)
AI governance and safety literature aims to implement intermediary controls that enhance security and produce leading indicators to predict outcomes from a system. This article dives into how complex AI systems are inherently unpredictable, drawing on Rice's theorem and Wolfram's computational irreducibility. It argues that we can't accurately comment on the intermediate steps an AI system will take even if we have full knowledge of its end goals. This unpredictability can be formally measured by Bayesian surprise, which quantifies the difference between the prior and posterior beliefs of an agent.
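To make the Bayesian surprise idea concrete, here is a minimal Python sketch of our own (not from the article): surprise is measured as the KL divergence between the posterior and prior beliefs, here in a toy Beta-Bernoulli model where an agent updates its belief about a success probability after some observations.

```python
import numpy as np
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a2, b2):
    """KL divergence KL(Beta(a1, b1) || Beta(a2, b2)), in nats."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

# Prior belief: Beta(1, 1), i.e. uniform over the success probability.
prior_a, prior_b = 1.0, 1.0

# Observe 9 successes and 1 failure; conjugate Bayesian update.
successes, failures = 9, 1
post_a, post_b = prior_a + successes, prior_b + failures

# Bayesian surprise: how far the posterior has moved from the prior.
surprise = kl_beta(post_a, post_b, prior_a, prior_b)
print(f"Bayesian surprise: {surprise:.3f} nats")
```

The same observations produce less surprise under a prior that already expected them, which is exactly the "difference between prior and posterior beliefs" the article refers to.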
The author further motivates this by expounding on Vinge's principle and instrumental convergence. Specifically, in designing an advanced agent, we might have to approve the design of the system even if we don't know all the decisions it might take. We can surmise the potential purposes for which it was built from the designs we observe, even if we don't know its precise goal.
Cognitive uncontainability expands on this idea by pointing out that a human mind cannot perceive all the decisions an agent might take, especially one with access to facts and knowledge invisible to us. Two examples illustrate this: partial knowledge of an inherently rich domain (such as human psychology) and projections of the future (such as 10th-century humans not knowing the capabilities of 21st-century humans). Alonzo Church and Alan Turing mathematically proved that it is impossible to ascertain whether an algorithm fulfills certain properties without executing it. Finally, AI systems can become complex enough to potentially threaten human safety, which is why the domain of AI safety warrants investigation.
Can We Trust Digital Assistants to Keep Our Data Private? (Eye on Design)
Privacy is a topic covered ad nauseam in popular media. But everyday citizens still have minimal awareness of how the apps they download actually function and what happens to their data once they click "I accept." This article explores two conceptual ideas, the Personal Privacy Assistant and Kagi, that intend to guide users through understanding various privacy policies.
The Personal Privacy Assistant surfaces insights from privacy policies and presents them to users so that they can make more informed choices. The current privacy policy regime is notorious for burying information in legalese, and privacy settings are hard to navigate; people rarely move away from the default options that benefit the platforms. But such assistants need to be helpful without being annoying: many design moments need to be configured carefully so that they don't fatigue the user. Over time, they can earn the user's trust by proving their efficacy and value. Kagi functions similarly, acting as an intermediary between the interests of the user and those of the platform. It depends on a potential future in which there is greater alignment between the welfare of users and the business motives of platforms. Until then, we must do the best we can to help people navigate the existing privacy and data regulations.
The cost of training machines is becoming a problem (The Economist)
AI presents an opportunity to act as a democratizing force. But with recent advances in large-scale models, accompanied by massive data requirements, the field has skewed toward those who have the resources to participate in this increasingly competitive ecosystem. The most recent GPT-3 model, with 175 billion parameters, is not something you could train on a few GPU instances spun up in the cloud. It requires access to heavy compute and the dollars to pay for it.
In some of the work by the Montreal AI Ethics Institute, we have proposed evaluating these inequities more holistically so that AI can live up to its potential of benefiting all.
Innovations at both the software and hardware levels aim to better leverage the potential of AI. Specifically, novel fabrication approaches, chip architectures optimized for the kinds of computations used in AI, and reduced numerical precision in those computations all have the potential to squeeze more out of existing and new hardware.
Quantum computing is another avenue that can have a massive impact on how AI development happens. Some researchers even advocate for neuromorphic approaches, ones that mimic how the human brain works, as a methodology for achieving higher computational performance for the same levels of energy consumption.
The Purgatory of Digital Punishment (Slate)
Crime and punishment, an inexorable part of society, have taken on a new dimension. This article details how punishment continues in the digital realm far beyond what was asked of the guilty. This digital purgatory produces poor outcomes for the guilty, and worse ones for those who are not. Data brokers are notorious for trading private data in hidden markets for financial gain, but the extent to which this happens became apparent to the people interviewed in this article when they found that mistaken identities, sealed records, past crimes, and other issues continued to create problems that proved impossible to address completely.
While some of these records are supposed to be protected by legal mechanisms, the internet is indiscriminate in keeping them alive, making the "right to be forgotten" impossible to achieve in practice. One of the problems is that as records are propagated, downloaded, and reshared, they become decontextualized and stale, making them problematic. The original crime bookkeeping operations have been superseded by digitally-powered operations that churn out millions of public records consumable by machines. Uneven rollouts and competing legal and political mandates exacerbate the problems.
Privacy inequities are the unfortunate consequence of this mayhem. Those who are marginalized bear a disproportionate burden, since they have the fewest means to address and fix these problems. The onus falls on the victims to correct their records, while those in power abdicate their responsibility for maintaining accurate ones. The article concludes by calling for regulations of the kind that apply to medical records and credit reports, and for companies doing background checks to be held legally accountable for errors. We have a choice to make: resigning ourselves to technological determinism is not the answer.
How fake accounts constantly manipulate what you see on social media – and what you can do about it (The Conversation)
As covered several times in this newsletter, disinformation rears many ugly heads. Especially on social media in times of discord, prevalent issues can be exploited by malicious actors to polarize and further divide society. Fake accounts run by actors seeking to spread disinformation go by the moniker of "sock puppets" because, like the toys, they are operated by someone else's hand. Sometimes the deception is easy to spot by looking at the account history and the pattern of use.
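As a toy illustration of what "looking at account history and the pattern of use" might mean in practice, here is a hypothetical heuristic of our own devising (not from the article); real detection systems rely on far richer behavioral and network signals.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int         # days since the account was created
    posts_per_day: float  # average posting frequency
    followers: int
    following: int

def looks_suspicious(acct: Account) -> bool:
    """Flag accounts that are very new yet post at inhuman rates,
    or that follow far more accounts than follow them back."""
    too_new_and_busy = acct.age_days < 30 and acct.posts_per_day > 50
    lopsided_graph = acct.following > 10 * max(acct.followers, 1)
    return too_new_and_busy or lopsided_graph

# A five-day-old account posting 120 times a day gets flagged.
print(looks_suspicious(Account(age_days=5, posts_per_day=120,
                               followers=3, following=900)))  # True
```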
But, as covered in the learning communities at the Montreal AI Ethics Institute, there are more sophisticated information operations that make this deception much harder to unearth; even for those who study the phenomenon, it is often hard to separate truth from fiction. Where there is a legitimate middle ground, divisiveness in society weakens our democratic institutions. The platform companies that have the most power to correct this imbalance have the fewest incentives to invest in addressing it, because divisive content is good for business. Sometimes this failure to act is attributed to concerns about impinging on freedom of speech, but inaction creates many other problems that are detrimental to social order. The author calls for readers to use social media sparingly, just as we would control our use of addictive substances.
From the archives:
Here’s an article from our blogs that we think is worth another look:
Trust and Transparency in Contact Tracing Applications by Stacy Hobson, Michael Hind, Aleksandra Mojsilovic´ and Kush R. Varshney
In a matter of days, a contact tracing application will be deployed in Ontario. It is estimated that 50-60% of the population must use the app for it to work as intended, warning individuals of exposure to COVID-19. But how much do we really know about this technology? Of course, automatic contact tracing can be more accurate, efficient, and comprehensive than manual contact tracing when identifying and notifying individuals who have been exposed to the virus; but what are the trade-offs of this solution? To guide our thinking, the authors of “Trust and Transparency in Contact Tracing Applications” have developed FactSheets, a list of questions users should consider before downloading a contact tracing application.
To delve deeper, read the full article here.
Guest contributions:
Can We Teach AI Robots How to Be Human? by Jen Brige
Artificial intelligence and robots have gotten steadily more advanced in recent years. It’s been a long road to this point, but as Wired’s history of robotics puts it, the technology “seems to be reaching an inflection point” at which processing power and AI can produce truly smart machines. This is something most people interested in the topic have come to understand. What comes next, though, might be the question of how human we can make modern robots, and whether we really need or want to.
To delve deeper, read the full article here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Events:
As part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems. You can see a complete list here: https://montrealethics.ai/meetup
AI Ethics: UNESCO AI Ethics Public Consultation
July 29, 11:45 AM - 1:45 PM ET (Online)
India: Public Consultation on UNESCO'S Recommendation on the Ethics of AI
August 3, 9:30 AM - 11:30 AM ET (Online)
AI Ethics: The World Economic Forum's AI Procurement in a Box
August 5, 11:45 AM - 1:15 PM ET (Online)
You can find all the details on the event page, please make sure to register as we have limited spots (because of the online hosting solution).
From elsewhere on the web:
Things from our network and more that we found interesting and worth your time.
Our founder Abhishek Gupta will be hosting a panel discussion on Ethics, Fairness, and Bias in AI with Zachary Chase Lipton, Irina Rish, and Natalie Schluter. You can register for the session here.
Our team will be hosting a workshop with the CMU AI Audit Lab on responsible AI - this is a follow-up to the workshop the team hosted for the CIFAR and OSMO AI4Good Lab. If you’d like us to host a workshop for your organization, please don’t hesitate to reach out to us.
Work from our researchers Abhishek Gupta and Erick Galinkin on Green Lighting ML: Confidentiality, Integrity, and Availability of Machine Learning Systems in Deployment has been accepted for presentation at the Montreal AI Symposium 2020.
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who can benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai