The AI Ethics Brief #45: Being algorithm-aware, algorithmic audits, writing rules on facial recognition, and more ...
How should we teach tech ethics?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
What we’re thinking:
From the Founder’s Desk: Becoming an upstander in AI Ethics
Office Hours: How Do We Teach Tech Ethics? How Should We?
Research summaries:
To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide?
The Algorithm Audit: Scoring the Algorithms That Score Us
Article summaries:
Can Auditing Eliminate Bias from Algorithms? (The Markup)
How Covid-19 overwhelmed a leading group fighting disinformation online (Rest of World)
Postmates Drivers Have Become Easy Prey for Scammers. And Drivers Say the Company’s Not Helping (The Markup)
How One State Managed to Actually Write Rules on Facial Recognition (NY Times)
3 Mayors on Their (Very Real) Challenge to Silicon Valley’s Dominance (OneZero)
AI researchers detail obstacles to data sharing in Africa (VentureBeat)
But first, our call-to-action this week:
Register for The State of AI Ethics Panel!
What's next for AI Ethics in 2021? And what is the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google? Hear from a world-class panel, including:
Danielle Wood — Assistant Professor in the Program in Media Arts & Sciences, MIT (@space_enabled)
Katlyn M Turner — Research Scientist, MIT Media Lab (@katlynmturner)
Catherine D’Ignazio — Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning, MIT (@kanarinka)
Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)
Abhishek Gupta - Founder, Montreal AI Ethics Institute (@atg_abhishek)
📅 March 24th (Wednesday)
🕛12 PM - 1:30 PM EST
🎫 Get free tickets
✍️ What we’re thinking:
From the Founder’s Desk:
Becoming an upstander in AI Ethics by Abhishek Gupta
Given the enormous upheaval in the field of AI ethics over the past three months, I think it behooves us to think more deeply about the role each of us can play in making a meaningful, positive impact on the world. The idea of becoming an upstander in AI ethics is particularly powerful, and I believe that in 2021 this is the right way to help create a healthier ecosystem for us all.
To delve deeper, read the full article here.
Office Hours:
How Do We Teach Tech Ethics? How Should We? by Marianna Ganapini
A key goal of ours at the Montreal AI Ethics Institute (MAIEI) is to build civic competence and understanding of the societal impacts of AI, as epitomized in our mission of “Democratizing AI Ethics Literacy”. An important challenge to this objective is finding effective ways and best practices to equip, empower, and engage diverse stakeholders, giving them the tools to become better digital citizens and agents who can raise pertinent issues in a well-informed manner.
To delve deeper, read the full summary here.
🔬 Research summaries:
To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide? by Anne-Britt Gran, Peter Booth, Taina Bucher
Understanding how algorithms shape our experiences is arguably a prerequisite for an effective digital life. In this paper, Gran, Booth, and Bucher determine whether different degrees of algorithm awareness among internet users in Norway correspond to “a new reinforced digital divide.”
To delve deeper, read the full summary here.
The Algorithm Audit: Scoring the Algorithms That Score Us by Shea Brown, Jovana Davidovic, Ali Hasan
Is it right for an AI to decide who can get bail and who can’t? Or to approve or reject your loan application, or your job application? Should we trust an AI as we would trust a fellow human making these decisions? These are just a few of the ethical questions that stem from the widespread use of algorithms in decision-making processes and activities. This growth of AI replacing humans has triggered an arms race to provide capable and efficient evaluations of these systems.
To delve deeper, read the full summary here.
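To make the idea of an algorithm audit concrete, here is a small sketch of one check an auditor might run: comparing approval rates across groups (demographic parity). This is our own illustrative example with made-up data, not the scoring framework proposed in the paper.

```python
# Toy demographic-parity check an algorithm audit might include.
# All data, names, and thresholds here are hypothetical illustrations.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: group A approved 2 of 3, group B approved 1 of 3.
audit_data = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]

print(selection_rates(audit_data))  # per-group approval rates
print(parity_gap(audit_data))       # the gap an auditor might flag
```

A real audit, of course, would go far beyond a single metric like this, which is part of why standards for what counts as a good audit matter.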
📰 Article summaries:
Can Auditing Eliminate Bias from Algorithms? (The Markup)
What happened: HireVue, a company that uses AI to assist with the hiring process, and one that has faced a lot of scrutiny over bias in its systems, hired ORCAA, the company of Cathy O’Neil (author of Weapons of Math Destruction), to conduct an audit of its systems. The audit itself focused on a narrow use case, and HireVue framed it in a light that suggests the company did nothing wrong vis-à-vis what it promised to offer.
Why it matters: Because there are no standards for what constitutes a good audit in this domain, audits can easily become instruments for ethics washing, offering a veneer of legitimacy to whatever solution a company has created. The current lack of transparency around audits in this domain is also quite problematic.
Between the lines: It is unfortunate that more details on what actually happened can’t be released; the article mentions that O’Neil declined to comment on the specifics of what transpired with HireVue. If audits lack transparency, we risk turning them into a tool for ethics washing solutions, walking away thinking we have done our part without really effecting meaningful change.
How Covid-19 overwhelmed a leading group fighting disinformation online (Rest of World)
What happened: Debunk EU, a firm based in Lithuania, was heralded by many as the next stage in effectively fighting disinformation, using a hybrid AI-human approach that was supposed to scale to counter the threat of Russian troll and bot armies. But with the pandemic in 2020, even they found themselves overwhelmed, exposing how much more work is still required.
Why it matters: What’s noteworthy about this situation is how it serves as a good sandbox to see the effects of Russian-fueled disinformation given the relatively small size of the country and its history in dealing with such campaigns over many years. Yet, the citizens found themselves sharing that disinformation limiting the efficacy of the efforts of Debunk EU which focused more on foreign actors rather than domestic actors spreading problematic information.
Between the lines: The disinformation problem is inherently adversarial, and as I had highlighted here, there are many political and technical challenges to getting content moderation and platform governance right. AI-assisted efforts can partially stem the flow of disinformation, but they are only the first line of defence in a long series of actions needed to reach a healthier information ecosystem.
Postmates Drivers Have Become Easy Prey for Scammers. And Drivers Say the Company’s Not Helping (The Markup)
What happened: Postmates delivery agents face precarious employment arrangements, as evidenced by the case highlighted in this article: an agent was scammed out of his weekly earnings after being tricked into revealing account information that allowed the scammer to drain his pay for that week. Other phishing incidents mentioned in the article showcase how crooks have tried to take advantage of vulnerable workers during COVID, amid the spike in the number of delivery requests.
Why it matters: Gig workers in such cases have highly volatile earnings and rely on that money to meet basic needs. When platform companies refuse to offer help and are opaque about their redress process, it harms the trust these workers have in the system, disempowering them and discouraging them from participating and offering their services.
Between the lines: The platforms need to offer better cybersecurity education to the workers so that they can protect themselves from phishing attacks. Alerts that share common ways such attacks are mounted can also help workers. Finally, having high levels of transparency will also aid in building trust in the platform, especially because workers don’t have the means to pursue legal action in case they are defrauded.
How One State Managed to Actually Write Rules on Facial Recognition (NY Times)
What happened: A bill that takes effect in July in Massachusetts paves the way for meaningful regulation of facial recognition technology, allowing it to be used in positive ways while minimizing the harms that arise from its use, for example racial bias. This balanced version came about through significant efforts by Kade Crockford of the ACLU, who was instrumental in helping lawmakers understand the nuances of facial recognition technology, both positive and negative.
Why it matters: While most existing efforts call for an outright ban (which is warranted in many situations), a balanced approach that separates obtaining a warrant from executing a facial recognition search can build in accountability and mitigate harm from false matches and the other problems that have plagued this technology.
Between the lines: What was particularly great to see here was the degree of influence that non-governmental organizations can have in crafting bills and regulations: they can bridge gaps in lawmakers’ knowledge about the real capabilities and limitations of these systems, and propose meaningful ways forward that leverage the best this technology has to offer.
3 Mayors on Their (Very Real) Challenge to Silicon Valley’s Dominance (OneZero)
What happened: With the pandemic making remote work the norm, many tech workers have moved to cities outside of San Francisco to benefit from lower costs of living and, perhaps, to be closer to family. Mayors from Austin, Madison, and Miami talk about the challenges facing San Francisco (not just astronomical rents and the dominance of tech firms), how their cities have offered respite from some of those challenges, and what they are doing to attract people.
Why it matters: It was heartening to read that so much diversity is present in other cities as well. Decentralizing technology development and innovation away from a few major hubs will not only bring people closer to the problems they are trying to solve, but also open their eyes to different problems that tech can help solve, rather than just serving the 1%.
Between the lines: The rise of virtual spaces like Clubhouse (audio) and Twitter Spaces may help bring back some of the serendipitous collisions that make Silicon Valley an attractive place to work. But beware that some apps, especially Clubhouse, have privacy concerns, as covered here.
AI researchers detail obstacles to data sharing in Africa (VentureBeat)
What happened: A recently published paper at the FAccT conference discusses the challenges with the current paradigm of data collection and use in Africa, which reeks of colonial practices. It details a persistent paternalism, a lack of contextual understanding of the problems, and a lack of investment in building up the local infrastructure needed for sustainable and relevant use of this technology in Africa.
Why it matters: While it is great that people are paying attention to utilizing data from the African continent and creating new solutions, doing so without leveraging local expertise and without creating lasting, locally-owned infrastructure will only reiterate and perpetuate colonialism. More importantly, it will also produce solutions that are ill-suited to the needs of the people there.
Between the lines: Major companies have established data centers and other infrastructure locally in Africa to capitalize on this opportunity, but supporting local governments and companies so that they can deploy such infrastructure on their own, and maintain ownership of it, will create more long-run sustainability for the continent.
From our Living Dictionary:
Definition of ‘Chatbots’
An automated system that engages in online dialogue with at least one other human, where the majority of its actions take place without the explicit involvement of a human agent. For example, SaaS companies (including Zoom) often use chatbots as a first line of support to help users troubleshoot problems.
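As a concrete illustration of this definition, a first-line support chatbot can be as simple as keyword matching with a fallback that escalates to a human. The rule table, responses, and function below are entirely hypothetical, a sketch of the pattern rather than any real product’s behavior.

```python
# Minimal rule-based chatbot sketch: the bot handles known keywords on its
# own and only involves a human agent via the fallback. All rules and
# wording here are made up for illustration.

RULES = {
    "hello": "Hi there! How can I help you today?",
    "reset password": "You can reset your password from the account settings page.",
    "bye": "Goodbye! Feel free to reach out again.",
}

FALLBACK = "I'm not sure about that. Let me connect you with a human agent."

def reply(message: str) -> str:
    """Return a canned response if a known keyword appears, else escalate."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK  # only here does a human agent get involved

print(reply("Hello!"))
print(reply("How do I reset password?"))
print(reply("What's the weather like?"))
```

Production chatbots typically replace the keyword table with trained language models, but the shape is the same: most turns are automated, with humans involved only at the edges.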
👇 Learn more about why they matter in AI ethics through our Living dictionary.
Explore the Living Dictionary!
From elsewhere on the web:
AI for AI - Developing Artificial Intelligence for an Atmanirbhar India
A joint piece by Ameen Jauhar (Senior Resident Fellow at Vidhi) and our founder Abhishek Gupta, on why India needs more indigenous research on ethical & governance frameworks for the application of AI.
Tech Conference at Harvard Business School - March 6th
On March 6th, our founder Abhishek will be leading a breakout session on Tech & Social Justice from 2PM EST to 2:50PM EST. Other speakers at the conference include Reid Hoffman, Kevin O’Leary, Tristan Harris, and Peggy Johnson. Get tickets here.
☝️ If you face any challenges regarding the cost of the ticket, please email chair@techconferencehbs.com and they will get you a free ticket. No questions asked.
Guest post:
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
The Toxic Potential of YouTube’s Feedback Loop by Guillaume Chaslot
This summary is based on a talk from the CADE Tech Policy Workshop: New Challenges for Regulation, in late 2019. The speaker, Guillaume Chaslot, previously worked at YouTube and had first-hand experience with the design of the algorithms driving the platform and their unintended negative consequences. In the talk he explores the incentive misalignment, the rise of extreme content, and some potential solutions.
To delve deeper, read the full summary here.
Take Action:
Events:
Register for The State of AI Ethics Panel!
What's next for AI Ethics in 2021? And what is the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google? Hear from a world-class panel, including:
Danielle Wood — Assistant Professor in the Program in Media Arts & Sciences, MIT (@space_enabled)
Katlyn M Turner — Research Scientist, MIT Media Lab (@katlynmturner)
Catherine D’Ignazio — Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning, MIT (@kanarinka)
Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)
Abhishek Gupta - Founder, Montreal AI Ethics Institute (@atg_abhishek)
📅 March 24th (Wednesday)
🕛12 PM - 1:30 PM EST
🎫 Get free tickets
Nominate underrecognized people in AI ethics to be featured in our next report!
We are inviting the AI ethics community to nominate researchers, practitioners, advocates, and community members in the domain of AI ethics to be featured in our upcoming State of AI Ethics report.
There is often great work being done in different parts of the world that does not get the attention it deserves due to the state of our information ecosystem and the manner in which platforms surface content. We would like to break that mold and shed some light on the valuable work being done by talented people.