AI Ethics #26: Whiteness of AI, frontiers of fairness, aging in an era of fake news, politics of adversarial machine learning, protect yourself from misinformation, and more ...
Being watched while taking an exam, first-party tracking, insuring the future of work, what does it mean to be a bot, and more from the world of AI Ethics!
Welcome to the twenty-sixth edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing a summary of those with you and presenting our thoughts on how they link with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us on: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
Summary of the content this week:
In research summaries this week, the whiteness of AI, a snapshot of the frontiers of fairness in machine learning, aging in an era of fake news, and the politics of adversarial machine learning.
In article summaries this week, we cover some recommendations on how you can protect yourself from misinformation online, the feeling of being watched while taking an exam, scanning social media websites using Blacklight to discover the extent of first-party tracking, insuring the future of work, why security experts are bracing for the next hack-and-leak, and the implications of whether someone is a bot or not.
In featured work from our staff this week, we take a look at what continual learning and co-design of curricula can do for the future of education, the concept of temporal bias and how that might be shaping the AI agenda, and what it means for the use of AI in healthcare.
In upcoming events, we will be hosting a workshop on Guidelines for Third-Party Ethics Reviews in partnership with AI Global. Scroll to the bottom of the email for more information.
MAIEI Learning Community:
Interested in working together with thinkers from across the world to develop interdisciplinary solutions in addressing some of the biggest ethical challenges of AI? Join our learning community; it’s a modular combination of reading groups + collaborating on papers. Fill out this form to receive an invite!
AI Ethics Concept of the Week: ‘Reinforcement Learning’
Reinforcement learning is a machine learning technique that trains an AI agent through positive and/or negative reinforcement, i.e. rewards and penalties. This technique is used to teach AI to play games or even drive a car!
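To make the idea concrete, here is a minimal, hypothetical sketch of tabular Q-learning, one classic reinforcement learning algorithm, on a toy "corridor" environment. The environment, reward values, and hyperparameters are illustrative assumptions of ours, not something from the dictionary entry.

```python
# A minimal sketch of tabular Q-learning on a hypothetical 1-D "corridor":
# the agent starts in the middle, gets +1 (positive reinforcement) for reaching
# the right end and -1 (negative reinforcement) for falling off the left end.
import random

N_STATES = 5                  # positions 0..4; 0 is the "bad" end, 4 is the goal
ACTIONS = [-1, +1]            # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Q-table: estimated future reward for every (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = state + action
    if nxt <= 0:
        return 0, -1.0, True              # penalty: episode ends
    if nxt >= N_STATES - 1:
        return N_STATES - 1, +1.0, True   # reward: episode ends
    return nxt, 0.0, False

for episode in range(500):
    state, done = N_STATES // 2, False
    while not done:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy should prefer stepping right in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)})
```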
Learn about the relevance of reinforcement learning to AI ethics and more in our AI Ethics Living Dictionary. 👇
Explore the Living Dictionary!
Consulting on AI Ethics by the research team at the Montreal AI Ethics Institute
In this day and age, organizations using AI are expected to do more than just create captivating technologies that solve tough social problems. Rather, in today’s market, the make-or-break feature is whether organizations using AI espouse concepts that have existed since time immemorial, namely, principles of morality and ethics.
The Montreal AI Ethics Institute wants to help you ‘make’ your AI organization. We will work with you to analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blind spots and maximize your potential before ever undergoing a third-party ethics review.
To find out more, please take a look at this page.
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
The Whiteness of AI by Stephen Cave and Kanta Dihal
Robots and other artificially intelligent machines are often depicted as white. Search “robot” in Google Images and you’ll notice it immediately. This often unnoticed phenomenon is problematic and an example of how technology can be racialized. Authors Stephen Cave and Kanta Dihal from the University of Cambridge explain, “intelligent machines are predominately conceived and portrayed as White. We argue that this Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology.”
In this paper, the authors (Cave and Dihal) examine “how the ideology of race shapes connections and portrayals of artificial intelligence (AI),” contributing to an increasing number of studies and books looking at the connections between race and technology (e.g. Safiya Noble’s Algorithms of Oppression). Their discussion is informed and framed by the philosophy of race and critical race theory, relying on Black feminist theories and work in Whiteness studies. More specifically, they utilize the “white racial frame” developed by Joe R. Feagin (2006) to examine representations of AI (e.g. white robots). By drawing attention to “the operation of Whiteness” in technology, which is often normalized and made invisible, we can expose the “myth of colour-blindness” prevalent in tech culture that prevents “serious interrogation of racial framing” in technology.
The authors offer three interpretations of the Whiteness of AI: 1) normalization of Whiteness in the Anglophone West, 2) the White racial frame primarily ascribes intelligence to White people, and 3) it allows for “a full erasure of people of color from the White utopian imaginary.” Beyond these three interpretations, they examine how this racialization of AI may lead to three specific representational harms: 1) amplify existing racial prejudices, 2) exacerbate injustices, and 3) distort society’s perception of the risks and benefits of AI. Cave and Dihal position this paper in a larger conversation and process called decolonizing AI, in which scholars aim to break down “the systems of oppression that arose with colonialism and have led to present injustices that AI threatens to perpetuate and exacerbate.”
To delve deeper, read our full summary here.
A Snapshot of the Frontiers of Fairness in Machine Learning by Alexandra Chouldechova and Aaron Roth
In this succinct review of the scholarship on Fair Machine Learning (ML), Chouldechova and Roth outline the major strides taken towards understanding algorithmic bias, discuss the merits and shortcomings of proposed approaches, and present salient open questions on the frontiers of Fair ML. These include statistical vs. individual notions of fairness, the dynamics of fairness in socio-technical systems, and the detection and correction of algorithmic bias.
To delve deeper, read our full summary here.
Aging in an Era of Fake News by Nadia M. Brashier and Daniel L. Schacter
With the release of Netflix’s The Social Dilemma, the upcoming U.S. elections, and persistent COVID-19 conspiracy theorists and deniers, online misinformation has resurfaced in the public debate as a serious threat to public safety and democracy. Lessons learned from the 2016 U.S. elections showed that older adults were the most prone to sharing fake news, with cognitive decline being the commonly cited explanation for this behaviour.
Brashier and Schacter argue that other factors, such as greater trust, difficulty detecting lies, a lower emphasis on accuracy when communicating, and unfamiliarity with social media, should also be considered when accounting for how older generations evaluate news. Reducing the sharing of fake news and increasing digital literacy among older adults is key to maintaining a healthy and informed civic society. Older adults had a 70.9% turnout at the last election, compared to 46.1% among millennials; they merit more targeted strategies to effectively reduce the share of fake news online.
To delve deeper, read our full summary here.
What we are thinking:
Op-eds and other work from our research staff that explore some of the most pertinent issues in the field of AI ethics:
AI Has Arrived in Healthcare, but What Does This Mean? by Connor Wright
This is joint work between the Montreal AI Ethics Institute and Fairly.AI, where Connor is working as an intern.
One of Artificial Intelligence’s (AI’s) main attractions from the very beginning was its potential to revolutionize healthcare for the better. However, while steps are being taken towards this goal, the implementation of AI in healthcare is not without its challenges. In this discussion, I delineate the current situation surrounding the use of AI in healthcare and the efforts by regulatory bodies such as the FDA to regulate this emerging field.
I explore how this potential regulation may send the wrong signal to manufacturers (and best practices for making this process easier), and how, while some AI-powered healthcare systems have been approved, this by no means marks the beginning of a mass overhaul of the medical environment. I nevertheless remain positive that these approved applications are augmentative in nature and are not out to replace human medical practitioners. New signals are being sent out through the arrival of AI in healthcare, but they are nothing to be frightened of.
To delve deeper, read the full article here.
The Co-Designed Post-Pandemic University: A Participatory and Continual Learning Approach for the Future of Work by Abhishek Gupta and Connor Wright
The pandemic has shattered the traditional enclosures of learning. The post-pandemic university (PPU) will no longer be contained within the four walls of a lecture theatre, nor will learning finish once students have left the premises. The use of online services has now blended home and university life, and the PPU needs to reflect this. Our proposal of a continuous learning model takes advantage of the newfound omnipresence of learning, while being dynamic enough to continually adapt to the ever-evolving virus situation. Universities that restrict themselves to fixed subject themes, forgotten once completed, will miss out on the ‘fresh start’ presented by the virus.
To delve deeper, read the full article here.
The Unnoticed Cognitive Bias Secretly Shaping the AI Agenda by Camylle Lanteigne, AI Ethics Researcher and Research Manager at MAIEI and Ethics Analyst, Algora Lab
This explainer was written in response to colleagues’ requests to know more about temporal bias, especially as it relates to AI ethics. It begins with a refresher on cognitive biases, then dives into how humans understand time, time preferences, present-day preference, confidence changes, planning fallacies, and hindsight bias.
To delve deeper, read the full article here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
How to guard your social feeds against election misinformation (Vox)
Not a week can go by without us covering the state of our information ecosystem. And rightly so: we are in a US Presidential election year and there is so much at stake. This article provides some very handy tips for those who want to navigate this polluted information landscape relatively unscathed. Setting the stage up front, the article notes that we can’t rely on platform-led interventions, since they mostly focus on handling specific instances rather than the sweeping changes required to clean up the state of misinformation on the platforms. As a starting point, users can move away from questionable sources by unfollowing them on these platforms. They can also be more careful with so-called trusted sources if those have been spewing misinformation.
For example, while relying on labels of whether something is true or not can be a way to navigate the system, a label only means that there are some indicators pointing to the veracity of the content; it doesn’t always mean that the content itself will be taken down. Echo chambers confirming our beliefs are too powerful to resist and can overshadow the effectiveness of measures such as truth-labeling. Even though platforms like Facebook now prioritize content from your friends and family over content from third-party sources, that doesn’t stop such information from showing up in your feed when one of those connections shares it on their pages.
To Facebook’s credit, it does have initiatives like “Why am I seeing this?” and interstitials that help to provide a bit more context around a piece of content, but these might not be as widely used as we might imagine, akin to our aversion to reading terms and conditions prior to using software. Relying on markers like the verified checkmark doesn’t mean that the entity is an authority on a particular subject; it is just an indicator that they are who they claim to be. Finally, helping out people within your network who are less savvy, not by admonishing them but by engaging in meaningful dialogue, can also help to make our information ecosystem healthier.
How It Feels When Software Watches You Take Tests (NY Times)
While the benefits of digital technology have certainly helped to alleviate some of the concerns around education in a remote environment because of the pandemic, it isn’t a panacea and the effects are unevenly distributed. Specifically, those who faced struggles in a regular environment before are now facing more aggravated versions of the same, because automated solutions aren’t inclusive of differing needs. Solutions for proctoring exams in a remote environment rely on many intrusive pieces of technology: continuous monitoring of faces, sound analysis of the room, scanning the objects present in the room, detecting whether someone else enters the room, and so on.
Students who have accommodation needs that would typically be met by their educational institution are, in a remote environment, at the mercy of the company providing the software solution, and this has led to some horrendous experiences as documented in this article. Students who have tics, who have to take care of siblings, who have darker skin, or who have any other traits outside the “norm” on which the proctoring solutions were trained get flagged, which exacerbates the exam-taking experience for these students.
While these companies have been making amends as problems are reported, it raises the question of why educators and administrators weren’t consulted beforehand (or, if they were, why some of these problems arise in the first place), which would have made for a more proactive rather than reactive approach. The educational system already places undue pressure on students who request such accommodations; putting them through the wringer of a process where responsibility is outsourced to a third party is especially unfortunate.
I Scanned My Favorite Social Media Site on Blacklight and It Came Up Pretty Clean. What’s Going On? (The Markup)
We have certainly enjoyed using the Blacklight tool from The Markup to scan some of the places where intrusive tracking is done by companies, at least ones that purport to provide ethical solutions, and have found (un)surprisingly that they don’t fare so well. Some users, however, have reported that they didn’t find anything wrong with some of the most flagrant violators and were surprised (rightfully so!) that there wasn’t any intrusive tracking detected on those websites.
In this article, the creators of the tool walk through what it actually tests for and show people, even veterans of the field, that it is not just third-party tracking that we need to be wary of, but also the first-party tracking that happens on these websites, and how some third-party tools are morphing into a form where they appear to be first-party so that they evade detection by tools such as Blacklight.
What the Blacklight tool does is scan for data being sent to third parties; it can’t see what tracking is being done behind a login because it doesn’t log into your account, e.g. when you are scanning Facebook. The article uses the following analogy to explain further: Blacklight drives you around the parking lot of a store and sees if there are any people in cars outside noting your activities, but that doesn’t tell you anything about what a cashier inside the store might be doing, say, noting down your credit card information. After giving some more technical details on the different types of cookies that Facebook uses, the article concludes with some information on how cookies are moving from third-party to first-party-like behaviour, limiting the efficacy of tools like Blacklight, which is certainly a blow to our ability to demand transparency from these companies.
Insuring the Future of Work (Stanford Social Innovation Review)
Providing some insights into the world of actuaries, this article emphasizes how some of the transformation in how we work will take place over the next few years. Specifically, early on it makes the important point that it will not just be the so-called soft skills that are essential to the success of workers in the future, but also a baseline of technical skills, so that they are able to work effectively with the new-fangled tools that permeate their domain. Taking the example of actuarial work, data science has certainly boosted the ability of actuaries to make more accurate forecasts and helped price different instruments in the space.
Insurance is a domain heavily dependent on large quantities of data, operating in an asymmetric market where firms want to be able to determine risks appropriately so that they can manage them well. Utilizing automated techniques also has the benefit of cutting down on the expenses of such work, in the sense that firms can serve a larger number of customers with the same amount of human staff. This means lower costs for customers, making some of these instruments potentially more accessible, while also reducing the lead time for getting back decisions on their applications.
They also touch on the importance of having more consistent job descriptions that can help to better inform what gets taught in the education system and what competencies future workers focus on, ultimately creating a workforce that is more aligned with the fast-changing environment.
Why security experts are braced for the next election hack-and-leak (MIT Tech Review)
Building on the importance of having a cleaner information ecosystem, it is not a surprise that well-timed dumps of leaked documents and information can lead to problematic situations whereby the more important issues get sidelined by an exclusive focus on the sometimes banal (at least compared to the more important issues of the day) contents of these leaked documents. Given this vulnerability, newsrooms around the internet have been advising their reporters and journalists to be wary of this and to apply caution and circumspection before adding more oxygen to leaked documents.
A particular problem is that such leaks are anonymous by nature and don’t provide avenues to hold actors accountable. While information from official sources is tightly regulated close to the election timeframe, leaked information from anonymous sources can’t be held to the same standards and can hence be used as an effective tactic to distort elections and their results.
France’s handling of the Macron leaks perhaps provides a model worth contrasting with the US approach, in that restraint was shown in what was released so as not to unduly influence the election. Ultimately, good media literacy, in terms of knowing when to trust which sources, is going to be an increasingly essential skill that both consumers and producers of news need to have.
Bot or Not (Real Life Mag)
Providing a fascinating background on the origin story of the CAPTCHA (the distorted letters we have to identify to prove our humanness, and the boxes we select that contain specific objects in an image), this article showcases how these tollgates for discerning humans from machines might now be flipped, and how the dynamics of what constitutes a bot online have changed since the conception of this “human-processing power” harnessing tool many years ago.
The inventor of the CAPTCHA wanted to help curb the problem of spam and bots online and decided that a gamification approach would be appropriate to incentivize participation while at the same time creating microtasks that could be utilized for other purposes. Fast forward to today, and these tests serve as perfect training grounds for machine learning systems, getting free labeling from humans to create datasets. But of course, as machine learning systems have been getting better, these checks are no longer sufficient. Researchers point out that such tests are merely decoys, and that the services verifying our humanness actually run background risk-score calculations, analyzing everything from your IP address, browser history, cookies, and mouse movements to other markers to ascertain whether you are a machine or not. If there is doubt after that evaluation, then you are presented with the nine boxes over an image that act as a further check.
The most interesting tidbit in this article centered around how CAPTCHAs could be used as gateways to access portals where prohibited activity or speech takes place. By identifying racist elements in a seemingly innocuous image, a user might be granted access to a hidden community that discusses those ideas, while a regular person would just pass through none the wiser. In that sense, CAPTCHAs can start acting like memes, which have an in-group sensibility in that they are targeted at specific demographics but are veiled in broad daylight from those who don’t have the nuance and context to decipher the hidden meaning. Finally, the way we interact online and how we police what constitutes bot-like behaviour might itself evolve over time, since bots can become an easy scapegoat to evade responsibility, relegating bad actions to the background by blaming them on bots when in reality it could be humans masquerading as bots.
From elsewhere on the web:
Things from our network and more that we found interesting and worth your time.
AI Ethics groups are repeating one of society’s classic mistakes by Abhishek Gupta and Victoria Heath for the MIT Technology Review
"The problem: AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts underway today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. These groups are well-intentioned and are doing worthwhile work.
However… Without more diverse geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe. If unaddressed, they risk developing standards that are, at best, meaningless and ineffective across all the world’s regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures."
To delve deeper, read the full article here.
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it's difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) collaborated with the Partnership on AI in May 2020 to host two public consultation meetups. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers.
In its submission, MAIEI provides six initial recommendations, which include: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
To delve deeper, read the full report here.
From the archives:
Here’s an article from our blogs that we think is worth another look:
Politics of Adversarial Machine Learning by Kendra Albert, Jonathon Penney, Bruce Schneier and Ram Shankar Siva Kumar
A very timely paper that, in the post-Clearview AI world, brings forth some important issues about how to think about the societal implications of work done in the ML security space. (If this is your first encounter with this term, please take a look through our learning community at the Montreal AI Ethics Institute to learn more; we believe this emergent area is of prime importance as ML systems become more widely deployed.) The paper takes the case study of facial recognition technology as a model for reasoning about some of the challenges that we encounter when we harden ML systems against adversarial attacks. It provides an insightful reframing of what an attack against a system means, moving away from the typical notions of cybersecurity wherein an attacker is an entity that compromises the confidentiality, integrity, and availability of a system for some malicious purpose.
Examples of when that’s not the case include someone trying to learn whether their image was used in a dataset that the system was trained on, determining whether an opaque ML system has bias issues, and protecting the identities of protestors and other vulnerable populations who are fighting for their civil liberties and human rights in states where they might be persecuted if recognized. Drawing on lessons learned from the work done by civil society organizations and others to combat the ethical, safety, and inclusivity issues of the commercial spyware industry, the authors urge developers and vendors of ML systems to consider human-rights-by-design principles and other recommendations when thinking about hardening their ML systems against adversarial attacks.
To delve deeper, read the full article here.
Guest contributions:
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
Events:
As a part of our public competence building efforts, we frequently host events spanning different subjects related to building responsible AI systems; you can see a complete list here: https://montrealethics.ai/meetup
MAIEI Consultation: Guidelines for Third Party Ethics Review
October 6, 11:45 AM - 1:15 PM ET (Online)
You can find all the details on the event page, please make sure to register as we have limited spots (because of the online hosting solution).
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who can benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai