AI Ethics #35: Green algorithms, data statements for NLP, AI and the Global South, and more ...
Why people might not trust the AI systems you build and how you can fix that.
Welcome to another edition of the Montreal AI Ethics Institute’s weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will land on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Photo by Noah Buscher on Unsplash
This week’s overview:
Research summaries:
Green Algorithms: Quantifying the Carbon Emissions of Computation
Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics
AI and the Global South: Designing for Other Worlds
Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science
Guest Feature:
Worried But Hopeful: The MAIEI State of AI Ethics Panel Recaps a Difficult Year
Article summaries:
Root Out Bias at Every Stage of Your AI-Development Process (Harvard Business Review)
How The Department Of Defense Approaches Ethical AI (Forbes)
The coming war on the hidden algorithms that trap people in poverty (MIT Tech Review)
Can AI Fairly Decide Who Gets an Organ Transplant? (Harvard Business Review)
Why people may not trust your AI, and how to fix it (Microsoft)
The tech industry needs regulation for its systemically important companies (Brookings)
But first, our call-to-action of the week:
Let us know how we can improve next year!
We want to know how we can serve you better in 2021, especially with these weekly newsletters and our quarterly State of AI Ethics Reports! It’s a short survey, so we hope you’ll contribute a few minutes of your time in service of improving the experience for the whole community. We look forward to implementing some of your suggestions next year!
Click here to take the 2-minute survey.
Research summaries:
Green Algorithms: Quantifying the Carbon Emissions of Computation by Loïc Lannelongue, Jason Grealey, Michael Inouye
This paper introduces the methodological framework behind Green Algorithms, a free online tool that produces a standardized, reliable estimate of the carbon emissions of any computational task, giving researchers and industry a concrete sense of their environmental impact.
To delve deeper, read our full summary here.
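The estimate rests on a simple relationship: the energy drawn by the processors and memory over the job’s runtime, scaled by the data centre’s power overhead (PUE) and the carbon intensity of the local electricity grid. Here is a minimal sketch of that structure in Python; the default constants are rough illustrative assumptions, not the tool’s calibrated values:

```python
def carbon_footprint_gco2e(
    runtime_hours: float,
    n_cores: int,
    core_power_watts: float,                # per-core draw; hardware-specific
    memory_gb: float,
    usage_factor: float = 1.0,              # fraction of core capacity actually used
    pue: float = 1.67,                      # data-centre power usage effectiveness (assumed)
    carbon_intensity: float = 475.0,        # gCO2e per kWh; grid-specific (assumed)
    memory_power_w_per_gb: float = 0.3725,  # approximate DRAM draw (assumed)
) -> float:
    """Rough estimate of a job's carbon footprint in grams of CO2-equivalent.

    Mirrors the structure (not the exact calibration) of the Green Algorithms
    model: energy = runtime * (core draw + memory draw) * PUE, then
    emissions = energy * grid carbon intensity.
    """
    core_draw_kw = n_cores * core_power_watts * usage_factor / 1000
    memory_draw_kw = memory_gb * memory_power_w_per_gb / 1000
    energy_kwh = runtime_hours * (core_draw_kw + memory_draw_kw) * pue
    return energy_kwh * carbon_intensity

# e.g. a 12-hour job on 8 cores (12 W each) with 64 GB of RAM:
print(f"{carbon_footprint_gco2e(12, 8, 12.0, 64):.0f} gCO2e")
```

The structure makes the levers obvious: the same job run on an efficient grid, or in a data centre with a lower PUE, carries a smaller footprint.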
Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics by Jacob Metcalf, Emmanuel Moss, and danah boyd
This paper outlines the role of “ethics owners,” a new occupational group in the tech industry whose job is to examine the ethical consequences of technological innovations. The authors highlight the competing logics these owners must navigate, and two ethical pitfalls that might result from those different imperatives.
To delve deeper, read our full summary here.
AI and the Global South: Designing for Other Worlds by Chinmayi Arun
This paper explores the unique harms the “Global South” faces from artificial intelligence (AI) through four different instances, and examines how international human rights law can be applied to mitigate those harms. The author advocates for an approach that is “human rights-centric, inclusive” and “context-driven.”
To delve deeper, read our full summary here.
Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science by Emily M. Bender and Batya Friedman
This paper provides a new methodological instrument that gives the people using a dataset a better idea of its generalizability, the assumptions behind it, the biases it might carry, and the implications of deploying systems built on it. It also details some of the accompanying changes required in the field writ large for this instrument to function effectively.
To delve deeper, read our full summary here.
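Concretely, a data statement is structured documentation that travels with the dataset. A minimal sketch of what one might capture, with field names paraphrased from the schema proposed in the paper and an invented example for illustration:

```python
from dataclasses import dataclass

@dataclass
class DataStatement:
    """Fields paraphrased from the schema Bender & Friedman propose."""
    curation_rationale: str      # why these texts, and why this sampling
    language_variety: str        # e.g. a BCP-47 tag plus a prose description
    speaker_demographics: str    # age, gender, dialect, etc., as available
    annotator_demographics: str  # who labelled the data, and how recruited
    speech_situation: str        # time, place, modality, intended audience
    text_characteristics: str    # genre, topic, structural properties
    recording_quality: str       # for spoken data: equipment, conditions
    other: str = ""              # anything else readers need to assess fit

# Hypothetical example, invented for illustration:
statement = DataStatement(
    curation_rationale="Tweets sampled by keyword to study health misinformation.",
    language_variety="en-US, informal social media English.",
    speaker_demographics="Unknown; self-reported locations skew urban US.",
    annotator_demographics="Five crowdworkers, US-based, ages 24-41.",
    speech_situation="Public posts, 2019-2020, written, broadcast audience.",
    text_characteristics="Short informal posts with hashtags and links.",
    recording_quality="N/A (written text).",
)
```

Even this skeletal version would tell a downstream user whether a model trained on the data can reasonably be expected to generalize to their population.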
Article summaries:
Root Out Bias at Every Stage of Your AI-Development Process (Harvard Business Review)
This article lays out a fairly clear mandate: responsibility for AI systems should be borne by their manufacturers rather than by those who use them. It calls on leaders, whether or not they have technical expertise, to pay more attention to the AI lifecycle, a consideration that will become a critical part of any leadership role in the future. In particular, leaders with non-technical backgrounds should at least equip themselves with a fundamental understanding of how AI systems work and, more importantly, of the different stages of the AI lifecycle.
The author offers a coarse-grained analysis of the lifecycle, splitting it into pre-, in-, and post-processing stages. This is a useful distinction, at least at the level where we might expect executives to operate, and it makes the more detailed components of the MLOps process more accessible.
While the steps highlighted in the article are typical of due diligence for addressing bias, such as collecting multiple annotations from a diverse pool of human labellers and monitoring models in production, what stood out was the emphasis on treating this as an ongoing process that doesn’t stop after the system is deployed. Periodic checks are essential to provide assurance that the system is still operating within expected boundaries.
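As an illustration of what such a periodic check could look like, here is a minimal sketch that recomputes a fairness metric over a recent batch of predictions and alerts when it drifts past a boundary. The choice of metric (demographic parity gap), threshold, and group labels are our illustrative assumptions, not prescriptions from the article:

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def periodic_bias_check(predictions, groups, threshold: float = 0.10) -> bool:
    """Return True if the deployed model still operates within the boundary."""
    gap = demographic_parity_gap(np.asarray(predictions), np.asarray(groups))
    if gap > threshold:
        print(f"ALERT: parity gap {gap:.2f} exceeds {threshold:.2f}; review model.")
        return False
    return True

# e.g. run weekly against the latest decisions:
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
periodic_bias_check(preds, grps)  # gap = 0.75 - 0.25 = 0.50 -> alert
```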
How The Department Of Defense Approaches Ethical AI (Forbes)
(Full disclosure: Alka Patel was a summer research intern at the Montreal AI Ethics Institute in 2019)
The DoD is an organization within the US government with significant resources to shape how AI is deployed and used in the wider industry; its large research budgets have led to pivotal technological developments in the past. It is therefore great to see the DoD and the JAIC taking an active approach to formalizing their AI ethics principles and discussing them in an article that sheds light on the process behind them.
One thing other public institutions could borrow is the mindset the DoD adopted in creating and publishing these principles: doing so in the interest of helping nations around the world navigate these issues better. Given the sometimes negative connotations attached to the DoD’s work (though this is not necessarily deserved, as we have covered before), having these principles articulated helps attract and retain the talent that allows such departments to pursue AI projects in the first place.
Some of the actions suggested in the article to help organizations put AI ethics principles into practice include thinking of ethics as an enabler rather than an inhibitor, in the sense that it actually makes systems more robust and useful. Taking a lifecycle approach (as discussed above) that integrates smoothly with the rest of the organization’s practices, and raising literacy on the subject across the organization, can help tremendously. To lower the barrier to adoption, the literacy effort should aim not to create subject-matter experts but stewards who can apply the tools and techniques.
The coming war on the hidden algorithms that trap people in poverty (MIT Tech Review)
This article details the harrowing journey of an individual who fell into “coerced debt,” which happens when someone close to the victim, often a family member, perpetrates financial abuse using intimate knowledge of the victim’s private data. It shows how, in the era of algorithmic decision-making, the repercussions extend well beyond the immediate domain where the fraud was perpetrated. Given the widespread data sharing across agencies, and given that opaquely calculated credit ratings feed many downstream decisions, getting hit in one part of the system affects all parts of our lives. Unfortunately, this also has implications for the lawyers trying to help the people who bear the brunt of it.
Drawing on conversations with lawyers defending victims, the article notes that the prevalence of such cases has risen since 2014 and that lawyers are playing catch-up, trying to understand how these algorithms work and how to fight the organizations, sometimes government agencies, that use them. In one case in Michigan, thousands of people were flagged for fraud and denied access to government-provided unemployment services. In another, someone was offered fewer hours of care support as they got sicker, the opposite of what one would expect. When questioned in court, the nurses had no answer as to how the decisions were being made: they had no insight into what was fed into the system or how it weighed the factors. They are, after all, medical experts and not computer scientists, and they shouldn’t have to bear the burden of scrutinizing the system.
Finally, the procurement process is also quite opaque, which exacerbates the problem. The push to adopt such systems is understandable, especially during the pandemic, with reduced staff availability and increased strain on and demand for services. But kicking humans out of the loop and hoping that the systems can perfectly model people’s needs is fallacious at best. Lawyers are banding together to educate one another so that they are better equipped to aid their clients and bring about justice in the face of abstract algorithms encroaching on people’s basic rights.
Can AI Fairly Decide Who Gets an Organ Transplant? (Harvard Business Review)
Healthcare is one of those domains where the ethical implications of AI have very high relevance and significance. Yet, as our staff researcher Connor articulates in this article, many challenges remain unsolved, and thoughtfulness and consultation with domain experts are essential to arriving at solutions that meet both moral and legal standards while also serving the business interests of healthcare institutions. That last part matters: in real deployments, ignoring those considerations leads to failures, with people bypassing controls that were put in place under unrealistic expectations.
The article mentions some of the incompatible definitions in the world of fairness in machine learning (for a comprehensive explainer, see this summary from our founder Abhishek) and some of the challenges in operationalizing them. In particular, an ex-post analysis creates unnecessary risk: harms may already have been inflicted on people by the time incorrect decisions are caught and refinements made. We also risk stalling further integration of technology into the healthcare system if an ex-post analysis reveals severe problems. Instead, a proactive approach, one that analyzes and articulates the ethical considerations upfront and balances them against business considerations, will lead to systems that can be tweaked iteratively rather than rejected wholesale because of unfair outcomes.
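To see why the fairness definitions can pull against each other, consider a toy worked example with invented numbers: when two groups have different base rates of clinical suitability, a single classifier generally cannot equalize both selection rates and true-positive rates.

```python
# Group A: 10 candidates, 6 truly suitable; 5 selected, all of them suitable.
# Group B: 10 candidates, 2 truly suitable; 5 selected, including both suitable.
n_a, suitable_a, selected_a, true_pos_a = 10, 6, 5, 5
n_b, suitable_b, selected_b, true_pos_b = 10, 2, 5, 2

# Demographic parity compares selection rates across groups.
print(selected_a / n_a, selected_b / n_b)   # 0.5 vs 0.5 -> parity satisfied

# Equal opportunity compares true-positive rates among the truly suitable.
print(true_pos_a / suitable_a, true_pos_b / suitable_b)  # ~0.83 vs 1.0 -> violated
```

Equalizing the true-positive rates here would require changing who is selected, which in turn breaks the equal selection rates, so which definition to prioritize is an upfront ethical decision, not a purely technical one.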
Why people may not trust your AI, and how to fix it (Microsoft)
(Full disclosure: Our founder Abhishek Gupta works at Microsoft. However, the inclusion of this article in the newsletter is unrelated to his employment and not paid for or endorsed by Microsoft)
More and more “intelligent” systems are being deployed in all aspects of our lives. Yet people are inconsistent in the trust they place in these systems. This article provides some actionable insights into what might be going on and how products and services can be designed in a more human-centred way.
In opening, the article notes that humans tend to choose another human over a machine even when that human has repeatedly been shown to be wrong, something that might surprise those designing interfaces for AI systems in the hopes of easing a transition away from human decision-makers. Borrowing from design principles, the article presents empathy as a core facet of getting a system’s design right: being able to walk a mile in the user’s shoes to understand how they experience the service, rather than seeing it only from the point of view of the engineers who built it.
Working with real-world data helps surface cases that limited training data can’t always simulate. Keeping the user central to the experience, in a way that gives them genuine agency and control rather than merely a perception of control, is essential to the system’s adoption.
Building on established design methodologies, capturing data about what is and isn’t working is also important. So is attending to the feelings the experience evokes in the user and making sure they align with the purpose of the application, for example, building a ladder of trust when we want users to feel comfortable. Finally, when collecting data about how users experience the system, account for researcher-pleasing: the tendency of users to misreport their feelings and experiences to align with what they think is expected of them. The principles outlined in the article are essential reading for anyone building or designing AI systems that will significantly affect humans.
The tech industry needs regulation for its systemically important companies (Brookings)
The 2008 financial crisis brought the term “too big to fail” into the public consciousness, and rightly so: maloperation in one industry decimated the global economy because of the interlinkages and systemic importance of the financial system. Some now argue the same may be true of our digital infrastructure, which has become just as crucial as public utilities.
Precipitated by incidents like the recent hacking of prominent Twitter accounts, which caused a small pandemonium in the cybersecurity world, researchers and activists advocate a “systemically important” designation for technology companies: firms that should be held to higher standards of cybersecurity and resilience to failure because they are enmeshed centrally in the operations of the global economy. We already do this with telecommunications, which underpins the successful operation of many other utilities. Bringing technology companies under a similar fold, with the understanding that they are systemically important, would offer protections that bolster the resilience of other fields as well.
By doing so, we will develop the mindset that regulation is essential to these companies rather than an afterthought. That is worth considering as social media companies, for example, keep pushing for self-regulation: this approach makes regulation an essential part of their existence, weakening the case for pushing against it and providing much-needed clarity to the industry on how to move forward.
From elsewhere on the web:
Workshop on ‘The Philosophy and Law of Information Regulation in India’ | December 11 and 12, 2020
Our founder Abhishek Gupta presented on ‘The Privacy Conundrum: An empirical examination of barriers to privacy among Indian social media users’.
Resistance AI Workshop @ NeurIPS 2020 - Accepted Papers and Media
Check out the full list, including our founder Abhishek Gupta & artist-in-residence Falaah Arif Khan’s work on Decoded Reality, a creative exploration of the power dynamics that shape the design, development, and deployment of machine learning and data-driven systems.
Our founder Abhishek Gupta was one of the co-organizers of this workshop about reflecting on machine learning research.
Guest post:
Worried But Hopeful: The MAIEI State of AI Ethics Panel Recaps a Difficult Year by Monika Viktorova
If there was a theme to close out a fraught year, the MAIEI State of AI Ethics panel captured it: worry. 2020 saw us grappling with the growing challenges of un- and under-regulated tech. The unabated spread of algorithmically-disseminated misinformation hobbled effective public health messaging responses to the global COVID-19 pandemic, likely increasing the transmission and death toll from the virus. The continued collapse of the media ecosystem, driven by consolidation of ad revenue from tech giants like Facebook, Google and Apple, left 52% of Trump voters believing erroneously that President Trump won the election. Proliferation of facial recognition technology use in policing, municipal surveillance, and travel prompted backlash worldwide.
To delve deeper, read the full piece here.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
The State of AI Ethics Report (Oct 2020)
This report captures the most relevant developments in AI Ethics since July of 2020. Our goal is to save you time by quickly getting you up to speed on what happened in the past quarter, by distilling the top research and reporting in the domain.
To delve deeper, read the full piece here.
Take Action:
Donate to The Markup
“So, in total, we spent about $57,000 on privacy-protecting engineering efforts in 2020. That is a rough guess because we didn’t log our hours at the time, and it is definitely an understatement because it doesn’t count the amount of time we spend thinking and talking about the privacy implications of everything we do.
It’s a lot of work, but we think it’s worth it—not just to protect our readers but also to show other websites that it can be done. We believe that our mission is not only to expose problems but also to find solutions—by building the world we want to live in, one tool at a time.
If you support The Markup’s dedication to building privacy-protecting tools, we hope you will consider giving before the end of the year at themarkup.org/donate.”
Read more about the tools they built to protect their readers’ privacy
MAIEI Learning Community
Interested in discussing some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will land on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Events:
As a part of our public competence-building efforts, we host frequent events spanning different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.
If you know of events that you think we should feature, please don’t hesitate to send us an email at support@montrealethics.ai
Signing off for this week; we look forward to doing it again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai