AI Ethics #25: AI for the US Gov, power and bias in NLP, multimodal disinformation, temporal bias, AI in healthcare and more ...
Fighting cronyism with algorithms, political autocomplete, Portland's regulation and Amazon, what is wrong with The Social Dilemma, and more from the world of AI Ethics!
Welcome to the twenty-fifth edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing a summary of those with you and presenting our thoughts on how they link with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us on: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
Summary of the content this week:
In research summaries this week, we look at AI’s promises and perils for the US Government, how language (technology) is power through a critical look at “bias” in NLP systems, and the effects and mechanisms of multimodal disinformation and rebuttals on social media.
In article summaries this week, we look at the problem of blaming technology for larger societal issues in relation to lynchings in India, how algorithms can help to fight cronyism in French daycares, why Uber wasn’t charged in the fatal self-driving incident, some of the glitches in Google’s protective measures to remove political leanings in its autocomplete features, why and how Amazon tried to thwart the Portland regulation on Facial Recognition Technology, and how The Social Dilemma fails to tackle the real issues in technology.
In featured work from our staff this week, we take a look at what continual learning and co-design of curricula can do for the future of education, the concept of temporal bias and how that might be shaping the AI agenda, and what it means for the use of AI in healthcare.
In upcoming events, we will be hosting a workshop on Guidelines for Third-Party Ethics Reviews in partnership with AI Global. Scroll to the bottom of the email for more information.
MAIEI Learning Community:
Interested in working together with thinkers from across the world to develop interdisciplinary solutions in addressing some of the biggest ethical challenges of AI? Join our learning community; it’s a modular combination of reading groups + collaborating on papers. Fill out this form to receive an invite!
AI Ethics Concept of the week: ‘Accountability’
Commonly, AI companies insist that the black-box nature of their systems means they have no insight into an AI's decision-making processes, and that they therefore shouldn't be held accountable for those decisions and their consequences. We explore why that can be problematic.
Learn about the relevance of accountability to AI ethics and more in our AI Ethics Living dictionary. 👇
Explore the Living Dictionary!
Consulting on AI Ethics by the research team at the Montreal AI Ethics Institute
In this day and age, organizations using AI are expected to do more than just create captivating technologies that solve tough social problems. Rather, in today’s market, the make-or-break feature is whether organizations using AI espouse concepts that have existed since time immemorial, namely, principles of morality and ethics.
The Montreal AI Ethics Institute wants to help you ‘make’ your AI organization. We will work with you to analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blind spots and maximize your potential before ever undergoing a third-party ethics review.
To find out more, please take a look at this page.
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
Policy Brief: AI’s Promise and Peril for the U.S. Government by David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey, and Mariano-Florentino Cuéllar
Having studied multiple federal agencies across the U.S., the authors have produced a list of 5 main findings from their research. Ranging from the uptake of Artificial Intelligence (AI) systems in government to the ability of AI to exacerbate social inequalities if mismanaged, the authors produce a concise and clear summary of the effects of AI on the U.S. government. How the government acts to instil the norms of legal explainability, non-discrimination and transparency will shape how the AI future of the U.S. is defined.
To delve deeper, read our full summary here.
Language (Technology) is Power: A Critical Survey of “Bias” in NLP by Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach
With the recent boom in scholarship on Fairness and Bias in Machine Learning, several competing notions of bias and different approaches to mitigating their impact have emerged. This incisive meta-review from Blodgett et al. dissects 146 papers on Bias in Natural Language Processing (NLP) and identifies critical discrepancies in motivation, normative reasoning, and suggested approaches. Key findings from this study include mismatched motivations and interventions, a lack of engagement with relevant literature outside of NLP, and a tendency to overlook the underlying power dynamics that inform language.
To delve deeper, read our full summary here.
What we are thinking:
Op-eds and other work from our research staff that explore some of the most pertinent issues in the field of AI ethics:
AI Has Arrived in Healthcare, but What Does This Mean? by Connor Wright
This is joint work between the Montreal AI Ethics Institute and Fairly.AI, where Connor is working as an intern.
One of Artificial Intelligence’s (AI’s) main attractions from the very beginning has been its potential to revolutionize healthcare for the better. However, while steps are being taken towards this goal, the implementation of AI in healthcare is not without its challenges. In this discussion, I delineate the current situation surrounding the use of AI in healthcare and the efforts by regulatory bodies such as the FDA to regulate this emerging field.
I explore how this potential regulation may send the wrong signal to manufacturers (and best practices for making this process easier), and how, while some AI-powered healthcare systems have been approved, this is by no means the beginning of a mass overhaul of the medical environment. I nevertheless remain positive that these approved applications are augmentative in nature and aren’t out to replace human medical practitioners. New signals are being sent out through the arrival of AI in healthcare, but they are nothing to be frightened of.
To delve deeper, read the full article here.
The Co-Designed Post-Pandemic University: A Participatory and Continual Learning Approach for the Future of Work by Abhishek Gupta and Connor Wright
The pandemic has shattered the traditional enclosures of learning. The post-pandemic university (PPU) will no longer be contained within the four walls of a lecture theatre, nor will learning finish once students have left the premises. The use of online services has now blended home and university life, and the PPU needs to reflect this. Our proposal of a continuous learning model takes advantage of the newfound omnipresence of learning, while being dynamic enough to continually adapt to the ever-evolving virus situation. Universities that restrict themselves to fixed subject themes that are then forgotten once completed will miss out on the ‘fresh start’ presented by the virus.
To delve deeper, read the full article here.
The Unnoticed Cognitive Bias Secretly Shaping the AI Agenda by Camylle Lanteigne, AI Ethics Researcher and Research Manager at MAIEI and Ethics Analyst, Algora Lab
This explainer was written in response to colleagues’ requests to know more about temporal bias, especially as it relates to AI ethics. It begins with a refresher on cognitive biases, then dives into: how humans understand time, time preferences, present-day preference, confidence changes, planning fallacies, and hindsight bias.
To delve deeper, read the full article here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
India’s Lynching Epidemic and the Problem With Blaming Tech (The Atlantic)
We’ve previously covered how bias and other problems in algorithmic systems aren’t just technical problems and require a more in-depth understanding of the surrounding socio-technical ecosystem within which the technologies are deployed. In the case of the proliferation of technologies like WhatsApp in places like India, it is easy to jump to conclusions about how violence can be mobilized in an online context through forwarded messages that translate into real-world harm.
An important consideration here is to think of technology as merely an enabler that amplifies problems the community already faces: a lot of the violence in the Indian context can be attributed to distrust in governmental mechanisms, leading people to take justice into their own hands and mobilize support through platforms like WhatsApp. A historical analysis suggests that the number of violent incidents has not been affected significantly by higher levels of communication. Perhaps we are more aware of them now, since smaller and more distant incidents are also able to pop up on our radar, but research points to the absolute numbers staying about the same.
And it is not just new-fangled tools like WhatsApp that can be put to blame; in other places, SMS has also been branded a “weapon of war”, though with WhatsApp, the potential to use richer media than plain text does have ramifications. While technological interventions, for example limiting the number of people to whom you can forward a message at a time, do serve as a starting point, laying all the blame on the technological tools erodes the responsibility and actions that we need to take to effect the larger societal changes that can help us build a more peaceful and just society.
In French daycare, algorithms attempt to fight cronyism (Algorithm Watch)
When they say that algorithms have penetrated all aspects of our lives, this isn’t far from the truth. Daycare allocation, something highly prized by parents in places like Paris, France, is now being handled by algorithmic systems, which have the potential to crack open opaque selection criteria that lead to cronyism. Transparency in public processes is always something that we should strive for, especially when it comes to how decisions are made about aspects of people’s lives that have the potential to affect them significantly, for example, the care of their children.
The existing criteria, previously applied by people sitting on committees to make allocations, are now codified and applied in a more transparent way by the machine, utilizing something called the student-optimal fair matching algorithm, which ensures that no student who prefers a school to her outcome will be rejected while another student with lower priority is matched to that school. (A minimal sketch of such a matching appears after the next paragraph.)
A complaint raised by the people who previously made these allocations on paper by hand is that the algorithm might not be able to grasp the nuances of each family seeking the service. But if we can combat cronyism and offer higher levels of transparency, then the outcomes, both quantitative and qualitative, from the perspective of the parents should be the final driver in deciding how successful the program has been.
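For readers curious about the mechanics, here is a minimal Python sketch of student-proposing deferred acceptance, the standard mechanism behind “student-optimal” fair matching. The daycare names, priorities, and capacities below are invented for illustration, and the system actually deployed in Paris may differ in its details.

```python
# Minimal sketch of student-proposing deferred acceptance, the mechanism
# underlying "student-optimal" fair matching. All data here is illustrative.

def deferred_acceptance(applicant_prefs, daycare_priorities, capacities):
    """applicant_prefs: {applicant: [daycares in order of preference]}
       daycare_priorities: {daycare: [applicants in priority order]}
       capacities: {daycare: number of available spots}"""
    # Precompute each applicant's rank in each daycare's priority list.
    rank = {d: {a: i for i, a in enumerate(prio)}
            for d, prio in daycare_priorities.items()}
    unmatched = list(applicant_prefs)               # applicants still proposing
    next_choice = {a: 0 for a in applicant_prefs}   # next daycare to try
    held = {d: [] for d in daycare_priorities}      # tentatively accepted

    while unmatched:
        a = unmatched.pop(0)
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                 # list exhausted, stays unmatched
        d = applicant_prefs[a][next_choice[a]]
        a_next = next_choice[a] = next_choice[a] + 1
        held[d].append(a)
        # Keep only the highest-priority applicants up to capacity;
        # anyone bumped goes back into the proposing pool.
        held[d].sort(key=lambda x: rank[d][x])
        while len(held[d]) > capacities[d]:
            unmatched.append(held[d].pop())

    return {a: d for d, accepted in held.items() for a in accepted}

# Toy example: two daycares with one spot each, three families.
prefs = {"A": ["D1", "D2"], "B": ["D1"], "C": ["D2", "D1"]}
priorities = {"D1": ["B", "A", "C"], "D2": ["A", "C", "B"]}
print(deferred_acceptance(prefs, priorities, {"D1": 1, "D2": 1}))
# -> {'B': 'D1', 'A': 'D2'}  (C is left unmatched in this toy example)
```

The key property, matching the guarantee described above, is that no applicant is rejected from a daycare that admitted someone the daycare ranks lower.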
Why Wasn't Uber Charged in a Fatal Self-Driving Car Crash? (Wired)
It would appear that we can’t go about the conversation on autonomous vehicles (AVs) without talking about the fatal crash that happened when a safety driver in an Uber vehicle in Arizona took their eyes off the road and the vehicle hit a pedestrian. And rightly so! Going into the future, if such AVs are to become a common sight, we need strong precedents for handling the cases and inadvertent accidents that will arise. But does the way this Uber crash was dealt with set such a precedent?
The National Transportation Safety Board (NTSB) did a detailed analysis and found many parties at fault in the accident. In harrowing detail, the report describes how the major culprit was that the Uber self-driving system didn’t account for the possibility of pedestrians outside of designated crosswalks, compounded by a misclassification error and confusion that persisted until a mere 1.2 seconds before the collision. When the alarm finally sounded, there was perhaps too little time left for the safety driver to act.
But fully laying the blame on the safety driver, as is being advocated, creates a false separation of concerns: multiple parties need to be held accountable for the final outcome, and liability must be shared. Yet, once the hearings proceed and the dust settles, we might have created the precedent that sets the stage for how any future collisions are dealt with.
Google's Autocomplete Ban on Politics Has Some Glitches (Wired)
The election is almost here, so it would be impossible to have an edition of this newsletter without at least some mention of the state of our information ecosystem, be that rampant disinformation proliferating on social media or filter bubbles creating a hyper-partisan atmosphere that pushes people to ideological extremes.
Google, the default search engine for billions of people across the world, is taking steps to ensure that falsehoods and bias are as far removed as possible from search results. But how successful that will be remains to be seen. For starters, one of the things that we need to consider seriously is the role that autocomplete plays in how people search for and navigate issues. The helpful dropdowns, while useful when you’re searching for innocuous things like patio furniture, have much more serious implications when they nudge you towards donating to one campaign over another or surface more negative coverage about one candidate versus another.
While reports have been filed asking Google not to skew in any one direction, as Wired reports in this article, it is clear that Google has taken action on some of them but not on others, depending on its perception of the issue. For example, when it comes to Black Lives Matter, some factions might consider it a partisan issue and ask for autocomplete to be altered, while others discuss it in a non-partisan manner, making the waters murky. For now, surfacing and flagging the places where there are problems is our best approach, though everyone’s experience is not consistent because search results are personalized based on your own search history and interactions on the web.
Why Amazon tried to thwart Portland's historic facial recognition ban (Salon)
Portland has jumped front and center in the battle for regulating facial recognition technology and how we think about it going into the future. While the city restricts its use in areas that are open to the public, other places have taken less restrictive approaches to regulating facial recognition technology.
Given that some organizations stand to lose a lot of money if demand for their systems goes down amidst new regulations, it shouldn’t come as a surprise to the readers of this newsletter that there were attempts to subvert the efforts made by Portland to regulate this technology. However, transparency into the interventions made by firms that lobby to thwart such regulatory efforts, and into other initiatives that seek to dismantle them, is much needed.
Despite Portland’s historic push on this front, several holes remain, especially as it relates to how such technologies are used in settings like public schools, which fall under different governing bodies. Until we get a consistent and coherent regulatory mechanism in place, these patchy efforts are going to continue to place people in harm’s way while leaving them with few options for recourse.
The Social Dilemma Fails to Tackle the Real Issues in Tech (Slate)
If you haven’t watched The Social Dilemma yet, it is perhaps worth at least some of your time to give it a look. It does get some things right, and again, a lot of those things might not come as surprises or novel information to the readers of this newsletter, but it at least serves as a wake-up call for a lot of other people who are still new to the invasiveness, violations of privacy, and other issues that are rampant on social media platforms.
But, as pointed out in this article, there is something deeply problematic about a documentary like this, which valorizes the “woke” technology workers who get an “easy ride to redemption” while others who have been laboring for years, doing the hard work of mobilizing communities, are largely ignored. The documentary also perpetuates the problem of bias and lack of diversity; had the companies from which the interviewees hail taken these issues more seriously, at least some of the problems would have been mitigated.
While, yes, the design patterns used in the creation of this technology can be cast as meant to elicit addictive behaviour, this doesn’t mean that a singular, pathological framing is adequate for thinking about these issues. Finally, as we mentioned, continuing to offer a spotlight to those who already had a seat at the table, and who blew their chance at making a difference, at the expense of those who have always been marginalized is deeply concerning, especially as more people become aware of these issues and come to think that these now “enlightened” technology workers are the avenues for change, when there are intrepid and indefatigable workers who have been working on these issues for years.
From elsewhere on the web:
Things from our network and more that we found interesting and worth your time.
AI Ethics groups are repeating one of society’s classic mistakes by Abhishek Gupta and Victoria Heath for the MIT Technology Review
"The problem: AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts underway today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. These groups are well-intentioned and are doing worthwhile work.
However… Without more diverse geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe. If unaddressed, they risk developing standards that are, at best, meaningless and ineffective across all the world’s regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures."
To delve deeper, read the full article here.
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it's difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers.
In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
To delve deeper, read the full report here.
A very interesting workshop related to the above idea, titled Navigating the Broader Impacts of AI Research, is being organized by some of the same folks behind the publication norms work at the Partnership on AI, among others. It will take a look at:
Mechanisms of ethical oversight in AI research
Challenges of AI research practice and responsible publication
Collective and individual responsibility in AI research
Anticipated risks and known harms of AI research
Our founder, Abhishek Gupta, is co-organizing the following workshop, which is now accepting submissions: The ML-Retrospectives, Surveys & Meta-Analyses @ NeurIPS 2020 Workshop is about reflecting on machine learning research. This workshop is a new edition of the previous Retrospectives Workshops at NeurIPS’19 and ICML’20. While the earlier editions focused primarily on retrospectives, this time the focus is on surveys & meta-analyses. The enormous scale of research in AI has led to a myriad of publications. Surveys & meta-analyses meet the need to take a step back and look at a sub-field as a whole to evaluate actual progress. However, we will also accept retrospectives.
In conjunction with NeurIPS, the workshop will be held virtually. Please see our schedule for details.
To delve deeper, take a look at the workshop website here.
From the archives:
Here’s an article from our blogs that we think is worth another look:
A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media by Michael Hameleers, Thomas E. Powell, Toni G.L.A. Van Der Meer & Lieke Bos
In the current information environment, fake news and disinformation are spreading, and solutions are needed to counter the effects of the dissemination of inaccurate news and information. In particular, many worry that online disinformation – understood as the intentional dissemination of false information through social media – is becoming a powerful persuasive tool to influence and manipulate users’ political views and decisions.
Whereas research on disinformation has so far mostly focused on textual input, this paper taps into a new line of research by focusing on multimodal types of disinformation that include both text and images. Visual tools may represent a new frontier for the spread of misinformation because they are likely to be perceived as more ‘direct’ representations of reality. Accordingly, the hypothesis is that multimodal information will be more readily accepted and believed than merely textual inputs. And since images can now be easily manipulated, the worry that animates this research is that they will constitute a very powerful tool in future disinformation campaigns. Therefore, the primary goals of this paper are (1) to investigate the persuasive power of multimodal online disinformation in the US and (2) to study the effects of journalistic debunking tools against multimodal disinformation.
To delve deeper, read the full article here.
Guest contributions:
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Events:
As a part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems; you can see a complete list here: https://montrealethics.ai/meetup
MAIEI Consultation: Guidelines for Third Party Ethics Review
October 6, 11:45 AM - 1:15 PM ET (Online)
You can find all the details on the event page, please make sure to register as we have limited spots (because of the online hosting solution).
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who can benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai