AI Ethics #5: Bias Monsters, AI canaries, empathy machines, data feminism, technical details for COVID-19 AI apps and more ...
Our fifth weekly edition covering research and news in the world of AI Ethics
Welcome to the fifth edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing summaries with you and presenting our thoughts on how they link with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us at: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below
Given the positive response we got from our readers last week, we’re continuing to share original artwork along with witty snippets (this time with help from machines) to bring a bit of cheer to your day!
For this edition, we felt it important to highlight some practical technical and ethics advice for people who are building solutions to fight the COVID-19 pandemic. Take a look at this week’s research summary for technical considerations and our external feature for ethics considerations. As always, enjoy the content and follow the advice of your health authorities. Stay safe and healthy!
An AI Picnic - by Abhishek Gupta
Picnics with humans seem like a fun activity till you realize they like sandwiches. A guy, who showed me his basement, told me it was his savings/opening wallet from 2002 (or something like that). He was planning on making small gourmet sandwiches, just like he eats everyday. (I don't remember his exact words but… he had a lot of creative free rein. "I am only looking for affordable substitutes for meat and cheese.") And yeah… he also said he'd probably never make it out of the basement. He does say he has a bigger kitchen though.
Text generated from the Talk to Transformer tool where the text in bold was provided as a prompt and the remaining text was generated by a machine (gasp!).
Our contributions to the Commission d’accès à l’information du Québec (CAIQ) consultation
Our team spent many hours putting together technical and legal recommendations for the Commission d’accès à l’information du Québec consultation, with a focus on the protection of personal data as it relates to the impacts of AI.
For the entire report along with a summary of the recommendations, read here.
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
Maximizing Privacy and Effectiveness in COVID-19 Apps by the OpenMined Team
This highly insightful work from the OpenMined team, led by Andrew Trask, provides a great amount of technical detail for anyone trying to build an AI-enabled application to combat COVID-19. Along with articulating what users and governments need from such an application, it provides the rationale for considerations to keep in mind when balancing privacy with the usefulness of the data provided to health authorities as they strive to mitigate the spread of the pandemic. Detailing the concepts of differential privacy, private set intersection, and private information retrieval servers, the article maps the importance of these techniques to the needs of privacy preservation in the collection and analysis of historical and current absolute location data, historical and current relative location data, and verified group identity information. Together these help users achieve the goals of getting proximity alerts, exposure alerts, information on planning trips, symptom analysis and demonstration of proof of health. On the other hand, they enable governments and health authorities to meet their goals of fast contact tracing, high-precision self-isolation requests, high-precision self-isolation estimation, high-precision symptomatic citizen estimation and demonstration of proof of health. All of these steps will help minimize negative economic impacts while accelerating society’s return to normalcy. We encourage people seeking to develop solutions, and those responsible for verifying whether solutions respect the fundamental rights of citizens, to read through our summary and the full article from OpenMined to gain a comprehensive understanding of the issues and potential solutions in building such applications.
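To make one of these techniques concrete, here is a minimal sketch of the Laplace mechanism, the basic building block of differential privacy: a health authority releases a noisy aggregate count so that no single individual's presence in the data can be confidently inferred. This is our own illustration, not OpenMined's code; the count and the epsilon values are made up.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one person changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    masks any single individual's contribution to the released value.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: 132 users in a region reported symptoms today.
for epsilon in (0.1, 1.0, 10.0):  # smaller epsilon = stronger privacy, noisier answer
    print(f"epsilon={epsilon}: reported count ~ {dp_count(132, epsilon):.1f}")
```

The design trade-off the article wrestles with shows up directly in the epsilon parameter: health authorities want accurate counts, users want strong privacy, and the noise scale is where those goals meet.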
To delve deeper, read our full summary here.
A Focus on Neural Machine Translation for African Languages by Laura Martinus and Jade Abbott
Human expression has remarkable diversity, an exemplar of which is the sheer number of languages with different semantic and syntactic rules. A predominant part of the knowledge base on the internet is in English, which hinders participation in contributing to and consuming scientific and other information from parts of the world where English is not widely used. Manual translation efforts, often by governments and non-profit organizations, certainly aid in making information more accessible, but they fall short of covering the entire corpus of information on the Internet. That’s where machine translation can help, and this paper makes a great contribution to doing so for low-resourced African languages, with a specific focus on the official languages of South Africa. The paper provides an overview of some of the challenges these languages face and why there hasn’t been much movement in meaningful translation efforts: primarily a lack of comparable prior work (owing to unavailable code and data), an absence of benchmarks and public leaderboards, and small, poor-quality datasets.
The authors use the frequently used ConvS2S and Transformer architectures with default hyperparameter settings to establish benchmarks that they intend to improve upon through tuning and better datasets in the future. One of the key findings from their analysis was that model performance depended heavily on the size and quality of the dataset and on the morphological typology of the language itself. They utilized the Autshumato datasets, which have parallel, sentence-aligned corpora for several languages with English equivalents. They found that the Transformer architecture performed better in general across all languages, and that for languages with smaller datasets, a smaller number of byte pair encoding (BPE) tokens led to higher BLEU scores.
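To illustrate what varying the BPE vocabulary size means in practice, here is a small sketch using the SentencePiece library to train subword models of two sizes and compare how they segment a sentence. The corpus file, the vocabulary sizes, and the sample isiXhosa sentence are placeholders of ours, not the paper's actual setup.

```python
import sentencepiece as spm

# "corpus.txt" stands in for real training text, e.g. one side of an
# Autshumato parallel corpus; the vocabulary sizes are illustrative.
for vocab_size in (500, 4000):
    spm.SentencePieceTrainer.train(
        input="corpus.txt",
        model_prefix=f"bpe_{vocab_size}",
        vocab_size=vocab_size,
        model_type="bpe",
    )
    sp = spm.SentencePieceProcessor(model_file=f"bpe_{vocab_size}.model")
    print(vocab_size, sp.encode("Molo, unjani namhlanje?", out_type=str))

# A smaller vocabulary splits words into more, shorter subword pieces,
# which the paper found can raise BLEU scores when training data is scarce.
```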
Ultimately, this work serves to establish a starting point for future research work which would involve collection of more datasets to cover the other official languages of South Africa and experimenting with unsupervised learning, meta-learning and zero-shot techniques.
To delve deeper, read our full summary here.
Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls and Plea by Nathalie A. Smuha
This paper explores how human rights can form a solid foundation for the development of AI governance frameworks, but it cautions against over-relying on them when deciding how to structure such a framework and what its actual components should be. The author highlights how the EC Trustworthy AI guidelines successfully utilized a human rights foundation to advocate for building legal, ethical and robust AI systems. While moral objectivism might seem like a great idea for creating a universal framework, there remains value in a relativistic perspective, where nuances of culture and context can be adequately represented so that the proposed framework is more in line with the expectations of the people living in that jurisdiction. Arguments against using human rights centre on them being too Western, individualistic and abstract, but the author provides adequate justification for why those are weak arguments. In fact, the most often cited problem, that human rights are abstract, is a boon: they can be applied to novel circumstances without much modification, though they remain subject to interpretation. With sufficient exercise of those principles, they often get enshrined in law as rules which, though they can be inflexible, still offer concrete guidance that can serve as constituent parts of an AI governance framework. The paper also posits that it will be important for people in both law and technology to know the specificities of each other’s domains better in order to build frameworks that are meaningful and practical.
To delve deeper, read our full summary here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
As Coronavirus Surveillance Escalates, Personal Privacy Plummets
Regional governments are being granted escalated powers to override local legislation in an effort to curb the spread of the virus. The article details such efforts by various countries across the world, yet we only have preliminary data on the efficacy of each of those measures and need more time before we can judge which are most effective. That said, in a fast-spreading pandemic we don't have the luxury of time and must make decisions as quickly as possible using the information at hand, perhaps guided by prior crises. But what we've seen so far is minimal coordination between agencies across the world, leading to ad-hoc, patchy data use policies that will leave the marginalized more vulnerable. Strategies that publicly disclose who has tested positive in the interest of public health are harming those individuals and the people close to them, such as their families. In one case in New York, online vigilantes attempted to harass individuals even as their family pleaded and communicated the measures they had taken to isolate themselves and safeguard others. Unfortunately, the virus might be bringing out the worst in all of us.
This Dating App Exposes the Monstrous Bias of Algorithms
Most of us have a nagging feeling that we’re being forced into certain choices when we interact with each other on various social media platforms. But is there a way to grasp that more viscerally, where such biases and echo chambers are laid bare for all to see? The article details an innovative game design solution to this problem called Monster Match, which highlights how people are trapped into certain niches on dating websites by AI-powered systems like collaborative filtering. A striking example in practice is how your earlier choices on the platform box you into a category based on what the majority think, after which recommendations are personalized from that smaller subset. The creators observed that certain racial inequalities from the real world are amplified on these platforms, whose apps are more interested in keeping users engaged longer and making money than in achieving the goal advertised to their users. More than personal failings of the users, it is the design of the platform that causes failures in finding that special someone. The creators posit that more effective design interventions could improve how digital love is realized, for example by offering a reset button or the option to opt out of the recommendation system in favour of random matches. Increasingly, we’re going to see that reliance on design and other mechanisms will yield better AI systems than purely technical approaches in improving socially positive outcomes.
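As a rough illustration of the boxing-in effect (our own sketch, not Monster Match's actual system), here is a toy user-based collaborative filter: a new user who likes a single mainstream profile is immediately steered toward the majority's other favourites, while the minority's preferences never surface.

```python
import numpy as np

# Toy interaction matrix: rows = users, columns = profiles liked (1) or not (0).
# Three "majority" users favour profiles 0-2; one "minority" user favours 3-5.
ratings = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
], dtype=float)

def recommend(new_user: np.ndarray) -> np.ndarray:
    """Rank unseen profiles by similarity-weighted votes from existing users."""
    # Cosine similarity between the new user and every existing user.
    sims = ratings @ new_user / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(new_user) + 1e-9
    )
    scores = sims @ ratings          # profiles liked by similar users score higher
    scores[new_user == 1] = -np.inf  # don't re-recommend what was already liked
    return np.argsort(scores)[::-1]

# Liking just one mainstream profile (profile 0) locks in mainstream suggestions:
print(recommend(np.array([1, 0, 0, 0, 0, 0], dtype=float)))  # profiles 1, 2 rank first
```

Even this tiny example shows why a "reset button" matters: once the similarity scores tilt toward the majority, every subsequent recommendation reinforces the tilt.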
How to know if artificial intelligence is about to destroy civilization
Many people who are new to the field of artificial intelligence, and especially to AI ethics, see existential risk as something immediate; others dismiss it as nothing to be concerned about at all. There is a middle path here, and this article sheds a very practical light on it. Using the idea of canaries in a coal mine, the author highlights some candidate canaries that might help us judge when we ought to start paying attention to the risks posed by Artificial General Intelligence systems. The first is the automatic formulation of learning problems, akin to how humans have high-level goals that they align with their actions and adjust based on signals of success or failure; AI systems trained in narrow domains don’t have this ability just yet. The second is achieving fully autonomous driving, a good candidate because a lot of effort is being directed at making it happen and it requires solving a complex set of problems, including the ability to make real-time, life-critical decisions. AI doctors are a third canary, especially because truly replacing doctors would require a complex set of skills spanning the ability to make decisions about a patient’s healthcare plan by analyzing all their symptoms and coordinating with other doctors and medical staff, among other human-centered actions that are currently not feasible for AI systems. Lastly, the author points to conversation systems that can answer complex queries and respond to things like exploratory searches, the challenges of which were presented in last week’s research summary. We found the article to put forth a meaningful approach to reasoning about existential risk when it comes to AI systems.
Empathy Machine: Humans Communicate Better after Robots Show Their Vulnerable Side
With more and more of our conversations being mediated by AI-enabled systems online, as covered in one of our research summaries, it is important to see if robots can be harnessed to effect positive behaviour change in our interactions with each other. While there have been studies demonstrating the positive impact robots can have on individual behaviour, this study highlighted how the presence of robots can influence human-to-human interactions as well. The researchers found that a robot displaying positive and affective behaviour triggered more empathy from humans towards other humans, as well as other positive behaviours like listening more and splitting speaking time more fairly among members. This is a great demonstration of how robots can be used to improve our interactions with each other. Another researcher pointed out that a future direction of interest would be to see how repeated exposure to such robot interactions influences behaviour and whether the effects would be long-lasting even in the robot’s absence.
Catherine D’Ignazio: ‘Data is never a raw, truthful input – and it is never neutral’
The article presents the idea of data feminism, described as the intersection of feminism and data practices. The use of big data in today's dominant paradigm of supervised machine learning lends itself to large asymmetries that reflect power imbalances in the real world. The authors of the new book Data Feminism argue that data should not just speak for itself: behind the data are a large number of structures and assumptions that bring it to the stage where it is collated into a dataset. For example, sexual harassment numbers from college campuses, while mandated to be reported to a central agency, might not be very accurate, because the atmosphere and degree of comfort that a campus promotes influences how close the reported numbers come to the actual cases. The gains and losses from the use of big data are not distributed evenly, and the losses disproportionately impact the marginalized. Among the strategies that can mitigate the harms of such flawed data pipelines, the authors suggest giving technical students more exposure to the social sciences and moving beyond a single ethics class as a checkmark for having educated students on ethics. Secondly, more diversity among the people developing and deploying AI systems would help spot biases by prompting hard questions about both the data and the design of the system. The current COVID-19 numbers might suffer from similar problems: because of how medical systems are structured, people without insurance might not utilize medical facilities and get tested, creating an underrepresentation in the data.
Study: ‘Accuracy nudge’ could curtail COVID-19 misinformation online
When it comes to the spread of disinformation, there isn't a more opportune time than now, with the pandemic raging and people juggling lifestyle and work changes. This has increased people's susceptibility to sharing news and other information about how to protect themselves and their loved ones from COVID-19. As the WHO has pointed out, we are combating a pandemic and an infodemic at the same time. What's more, this might be the time to test design and other interventions that could help curb the spread of disinformation. The study highlighted that people share disinformation even when they doubt its veracity: when people share content, they care more about what they stand to gain (social reward cues) than about whether the content is accurate. To combat this, the researchers ran an experiment to see whether asking users to check that something was true before sharing it (a light accuracy nudge) would change their behaviour. While there was a small positive effect, with participants sharing disinformation less when prompted to check for accuracy, the researchers pointed out that the downstream effects could be much larger because of the amplification effects of how content propagates on social media networks. It points to a potentially scalable solution that could help fight the spread of disinformation.
From the archives:
Here’s an article from our blogs that we think is worth another look:
Northern Frontier: In conversation with Abhishek Gupta
From the future of work to bias and transparency, responsible AI is gaining traction. As we embed machine learning-based solutions into our core systems and rely on their predictive potential to make life-changing decisions, our responsibility to make the right choices around this very human-driven series of technologies is now both undeniable and urgent.
Northern Frontier sat down with Abhishek Gupta, Founder of the Montreal AI Ethics Institute, to dive into some of the key themes of the day, including the threat automation poses to jobs based on the current science, whether bias is the biggest problem we face in responsible AI, and what we should consider reasonable trade-offs for improving fairness.
Watch the video interview here.
Guest contributions:
We invite researchers and practitioners working in different domains studying the impacts of AI-enabled systems to share their work with the larger AI ethics community. Here’s this week’s featured post:
On the Construction of Artificial Moral Agents by Dante Fasulo (Philosophy, McGill University)
This paper argues that humanity, as it stands today, should not develop artificial intelligence with the intent of producing an artificial moral agent; that is, if we are to continue constructing machines, they should not be designed with the foundations for a morality to evolve, and machines ought to remain Artificial Agents (AAs) rather than be upgraded to Artificial Moral Agents (AMAs). The paper explains moral agency and what it means for AMAs, and asks whether, assuming we can, we have a moral obligation to create moral life: beings with moral agency as opposed to just sentient beings. Also discussed are the risks of creating an AMA and possible mitigation techniques, and lastly, the potential for an AMA to become a leader of humanity and the implications thereof.
If you’re working on something interesting and would like to share that with our community, please email us at support@montrealethics.ai
Events:
As a part of our public competence building efforts, we frequently host events spanning different subjects related to building responsible AI systems. You can see a complete list here: https://montrealethics.ai/meetup
Given the advice from various health agencies, we’re avoiding physical events to curb the spread of COVID-19. Stay tuned for updates!
From elsewhere on the web:
Things from our network and more that we found interesting and worth your time.
COVID-19 & AI - Privacy & Ethical Considerations
As people rush to deploy AI-enabled solutions to fight COVID-19, we wrote on the RE-WORK blog about how to do so while keeping ethical considerations in mind.
Complement this with our research summary on technical considerations for respecting fundamental rights from the team at OpenMined.
Signing off for this week; we look forward to seeing you again in a week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai
If someone has forwarded this to you and you like what you read, you can subscribe to receive this weekly newsletter by clicking below