AI Ethics #37: Civic competence in AI ethics, future of responsible AI in Africa, examining the black box, and more ...
What are some of the issues at the intersection of disability and bias in AI?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
Consider supporting our work through Substack
We’re trying a Guardian/Wikipedia-style tipping model: everyone will continue to get the same content but those who can pay for a subscription will support free access (to our newsletter & events) for everyone else. Each contribution, however big or small, helps ensure we can continue to produce quality research, content, and events.
NOTE: When you hit the subscribe button below, you will end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
Support our work today with as little as $5/month
Photo by Alexis Brown on Unsplash
This week’s overview:
Research summaries:
Examining the Black Box: Tools for Assessing Algorithmic Systems
Disability, Bias, and AI
What we are thinking:
Why civic competence in AI ethics is needed in 2021
RAIN Africa and MAIEI on The Future of Responsible AI in Africa
Article summaries:
When AI Reads Medical Images: Regulating to Get It Right (Stanford HAI)
Google showed us the danger of letting corporations lead AI research (QZ)
Preparing for the Future of Work (Stanford HAI)
Privacy Considerations in Large Language Models (Google AI Blog)
Algorithms Behaving Badly: 2020 Edition (The Markup)
Facial Recognition Company Lied to School District About its Racist Tech (Vice)
But first, our call-to-action of the week:
As the year kicks off, we hope that you have the opportunity to reset and prioritize the most important goals for yourself in 2021. We are certainly doing the same at the Montreal AI Ethics Institute and greatly appreciate your support in engaging with our work. Please share this newsletter with others in your network so that we can come together and build a more ethical, safe, and inclusive future for AI.
Research summaries:
Examining the Black Box: Tools for Assessing Algorithmic Systems by the Ada Lovelace Institute and DataKind UK
The paper clarifies what assessment of algorithmic systems can look like, including when assessment activities are carried out, who needs to be involved, which pieces of the system are being evaluated, and how mature the available techniques are. It also explains key terms used in the field and identifies gaps in current methods along each of these dimensions.
To delve deeper, read the full summary here.
Disability, Bias, and AI by Meredith Whittaker, Meryl Alper, Cynthia L. Bennett, Sara Hendren, Liz Kaziunas, Mara Mills, Meredith Ringel Morris, Joy Rankin, Emily Rogers, Marcel Salas, Sarah Myers West
A comprehensive report on how people with disabilities are excluded from the design and development of AI systems. The authors situate the discussion within the context of existing research and provide concrete recommendations on how AI practitioners can do better.
To delve deeper, read the full summary here.
What we are thinking:
Why civic competence in AI ethics is needed in 2021 by Abhishek Gupta
2020 certainly was a year for the books from many different perspectives. We don’t need a reminder of all the things that went wrong. It felt like the field of AI ethics was itself a microcosm of everything going on around us. Towards the end of the year, I felt that the injustices and trouble that Dr. Gebru endured as part of her work at Google added further fuel to our mission at the Montreal AI Ethics Institute and to our calls for building civic competence in AI ethics.
To delve deeper, read the full article here.
RAIN Africa and MAIEI on The Future of Responsible AI in Africa by Connor Wright and Falaah Arif Khan
To close out the year, MAIEI teamed up with RAIN Africa to host the “Future of Responsible AI in Africa” workshop, which saw participation from people across law, policy, ethics, and computer science. Key insights from the discussions are summarized in this piece.
To delve deeper, read the full article here.
Article summaries:
When AI Reads Medical Images: Regulating to Get It Right (Stanford HAI)
By now it comes as news to no one that AI is going to transform healthcare. What people need to pay attention to is how these systems will affect us as they become more widely deployed. In particular, the FDA’s role will need to expand to address this, since its current expertise revolves around drugs and the hardware side of medical technology. Akin to the idea of robustness in machine learning security, the software needs to reliably perform the task it claims to do and signal to the medical practitioner when it is about to fail or is unsure of the best course of action.
In terms of the standards to be followed, it is quite clear that, from a medical perspective, they should be developed by medical practitioners rather than imposed by software manufacturers, who may not be aware of all the best practices in the field and may be skewed towards advocating for whatever benefits their products and services. Prematurely enshrining divergent standards and definitions of effectiveness will make it difficult to benchmark different systems against one another and risks entrenching the normative core of each system.
Medicine typically has a four-stage approval process that looks at feasibility, capability, effectiveness, and durability. Staging the release and testing process this way helps arrive at something that generally works well and minimizes the risk of harm.
When it comes to reliability, it is much easier to make a system that is usually right than one that is very rarely wrong. That difference is crucial in medicine, where there is a direct impact on human lives. A final suggestion from the authors is that we need a commons on which to evaluate and benchmark these systems, and having a third-party entity make that determination would perhaps be the best approach.
Google showed us the danger of letting corporations lead AI research (QZ)
This article captures the dangers of having corporate-backed research pervade the advances made in the field. With the recent firing of Dr. Timnit Gebru, a huge loss for the community, we have seen that even the guise of freedom in a corporation only extends so far. The article also discusses the limitations of the news around DeepMind’s work on the protein folding problem, which some in the biological sciences rebuffed as not being reproducible at the moment because the associated code, data, and details that would allow for independent verification, par for the course in the world of science, have not been published.
As highlighted in a recent paper, The Grey Hoodie Project, there is a risk when much of the research coming out in the field is funded, one way or another, by corporate interests. When it comes to research in AI ethics, this has even more serious implications, as we saw in the case of Dr. Gebru.
There are also major concerns that such research, even when it is not explicitly censored, may suffer from a skewed perspective, implicit or explicit, that shapes the direction of the field because funders want specific outcomes that benefit them. One can counter that organizations will strive to maintain their independence when receiving funds, but it is unclear to what extent that can be taken at face value. Perhaps we need deeper investigations into publishing track records in this field to analyze these impacts, though without counterfactuals (which will be impossible to come by) we can draw only loose correlations between these factors and not much more.
Preparing for the Future of Work (Stanford HAI)
Discussing how the digitization of the economy has enlarged the pie, the panelists at an event hosted by Stanford HAI pointed out that the distribution of the benefits has rarely been equitable, with most of the gains accruing to those who already hold the keys to capital and power.
One popular argument is that the transformation of labor has been slower than sensationalist media would portray it to be. In addition, the role that governments will play in how AI gets deployed in the economy shouldn’t be underestimated: in China, some uses of AI have enabled surveillance, while in places like India, AI might make some bureaucratic processes easier, allowing more people to access government services.
The media’s role in portraying more realistic scenarios, and in providing both positive and negative examples, will be essential in helping people make informed decisions. Having people with deep knowledge of AI embedded within different government functions will also help. On regulation, tech companies have been pushing for “lighter” regulation, but it is not yet clear what that entails. On a geopolitical level, there is a significant decoupling between China and the rest of the world, which will make it much harder to arrive at shared standards and principles. This is another place where the future of work might be jeopardized if we don’t take adequate action.
Privacy Considerations in Large Language Models (Google AI Blog)
The AI ecosystem has been abuzz with hype around large-scale language models, and for the most part rightly so, given the massive utility their flexibility offers. But what are the tradeoffs of using such large-scale data and models? In this work, the researchers point to the privacy breaches that can emerge when the very large corpora used to train these systems inadvertently sweep up personally identifiable information (PII).
By prompting the model with specific pieces of data, the researchers were able to extract data that constitutes private information. Though they obtained the requisite permissions from those affected, the possibility still looms large that anyone with the resources and know-how can do the same and might not be as scrupulous in their approach. They did this using a technique called a “membership inference attack”: by looking at how confident the model is in its predictions for certain inputs, one can identify cases where the model has memorized training data, and hence potential sources of leaked private information.
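To make that intuition concrete, here is a minimal sketch of the confidence-ranking idea behind the attack, assuming HuggingFace’s GPT-2 as a stand-in model; the researchers’ full pipeline is considerably more elaborate (it generates candidates at scale and uses several ranking metrics), and the candidate strings below are placeholders.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 stands in here for "a large language model"; any causal LM would do.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Per-token perplexity the model assigns to `text`.

    Unusually low values (i.e., high confidence) on a specific string are a
    signal that the string may have been memorized from the training data.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return float(torch.exp(loss))

# Rank candidate strings (e.g., model-generated samples) so that the most
# likely memorization candidates come first for manual inspection.
candidates = ["a generated sample to audit", "another generated sample"]
ranked = sorted(candidates, key=perplexity)
```

The design choice worth noting is that the attack never needs access to the training data itself; it only needs the model’s confidence scores, which is what makes it feasible for an outside party.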
The primary contribution of this work is that the researchers are able to sift through millions of possible inputs and outputs to select those that prompt the model to output memorized, and potentially private, information from its training data. They also found that the larger the trained model, the greater the likelihood of memorization, and hence the greater the potential for leakage when the model is presented with well-crafted inputs. Differential privacy, available off the shelf for TensorFlow, PyTorch, and JAX by replacing the standard optimizers with differentially private ones, is one way to protect the privacy of the data that models are trained on. The researchers point out, though, that if a piece of information occurs often enough in the training data, even differential privacy cannot prevent its leakage.
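As a rough illustration of what that optimizer swap looks like in practice, here is a minimal sketch assuming the TensorFlow Privacy library (PyTorch and JAX have analogous tooling); the model and hyperparameter values are purely illustrative.

```python
import tensorflow as tf
import tensorflow_privacy

# Swap the standard SGD optimizer for a differentially private one that
# clips per-example gradients and adds calibrated Gaussian noise.
optimizer = tensorflow_privacy.DPKerasSGDOptimizer(
    l2_norm_clip=1.0,       # maximum L2 norm of each microbatch gradient
    noise_multiplier=1.1,   # noise scale relative to the clipping norm
    num_microbatches=32,    # must evenly divide the batch size
    learning_rate=0.05,
)

# The loss must be left unreduced so gradients can be clipped per microbatch
# before the noise is added.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)

# Any Keras model; a tiny classifier keeps the example self-contained.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```

The trade-off to keep in mind is that tighter clipping and more noise strengthen the privacy guarantee but typically cost some model accuracy.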
Algorithms Behaving Badly: 2020 Edition (The Markup)
As we have seen numerous times in this newsletter, incidents of racial bias in algorithmic systems are deeply problematic and unfortunately persistent. They reify existing biases and stereotypes in society, quite against the layperson’s perception that numerical systems are value-agnostic, a notion referred to as math-washing. The article cites the example of how Black athletes in the NFL are treated differently when it comes to the impacts of concussions, the associated treatments, and compensation: they are classified as having lower baseline cognitive function, which skews how the effects of concussions over their careers are assessed. Such societal ills are unfortunately captured all too well in our algorithmic systems, since these systems are largely a reflection of the data used to codify human interactions in the real world.
While most of the other examples mentioned in the article are ones we have covered in the past, one that particularly caught my attention describes how Whole Foods tries to identify stores where unionization attempts might arise by tracking a variety of factors, like the local unemployment rate and the number of complaints, so that such attempts might be quashed before they cause the company too many problems.
The article does conclude on a positive note: when people are made aware of algorithmic systems and how those systems might be affecting them, they can take action to counter some of the injustices being leveled against them. Note that this is only possible if people know such injustices are being perpetrated against them in the first place. This is one of the primary reasons I think the civic competence work being done at the Montreal AI Ethics Institute is so important: it raises awareness about where one might encounter such systems and what signs to look for to determine whether one might be facing disparate outcomes because of group membership or identity.
Facial Recognition Company Lied to School District About its Racist Tech (Vice)
NIST, the US standards agency, runs a benchmark that evaluates facial recognition vendors on various factors, including their accuracy rates across different demographic groups. This article shines a light on the claims made by one facial recognition vendor that duped schools into buying its technology with the promise of better safety and minimal risk of bias.
One of the leading scientists who worked on the NIST benchmark pointed to the discrepancy between the system’s claimed performance and what the published benchmark results actually found. What remain unclear are the system’s false-positive rates and how much crime it actually prevents. If parents are sold on the promise that their children will be safer through the use of such intrusive technology, then they are well within their rights to demand that evidence of the system’s efficacy be shared with them.
As if this weren’t problem enough, some parents lamented that spending Smart Schools funds on this technology, to the exclusion of other tools and upgrades, has largely been rendered useless by the COVID-19 lockdowns. Other schools chose to spend the money on things like improved connectivity and new laptops, which would have been tremendously useful during the lockdowns. This raises the larger question of how we must be deliberate about fund allocation and not chase shiny new pieces of technology without ample evidence that they work as claimed, especially in contexts involving populations who are more vulnerable than the rest.
From elsewhere on the web:
How to Make Artificial Intelligence More Democratic (Scientific American)
Our staff researcher Ryan Khurana wrote this piece for Scientific American, detailing a new type of learning model that uses far less data than conventional AIs, allowing researchers with limited resources to contribute.
Guest post:
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.
In case you missed it:
Bring the People Back In: Contesting Benchmark Machine Learning by Emily Denton, Alex Hanna, Razvan Amironesi, Andrew Smart, Hilary Nicole, Morgan Klaus Scheuerman
The biases present in machine learning datasets, which have been shown to favour white, cisgender, male, and Western subjects, have received a considerable amount of scholarly attention. Denton et al. argue that the scientific community has failed to consider the histories, values, and norms that construct and pervade such datasets. The authors intend to create a research program, which they term the genealogy of machine learning, that works to understand how and why such datasets are created. By turning our attention to data collection, and specifically the labour involved in dataset creation, we can “bring the people back in” to the machine learning process. For Denton et al., understanding the labour embedded in a dataset will push researchers to critically reflect on the type and origin of the data they are using and thereby contest some of its applications.
To delve deeper, read the full summary here.
Take Action:
MAIEI Learning Community
Interested in discussing some of the biggest ethical challenges of AI and developing interdisciplinary solutions with thinkers from across the world?
Our AI Ethics consulting services
In today’s market, the make-or-break feature for organizations using AI is whether they embody the principles of morality and ethics.
We want to help you analyze your organization and/or product offerings for moral and ethical weaknesses and make sure your business model is airtight. By undergoing a thorough, multidisciplinary review of your AI tool, we will provide ethical, social, and technical feedback on your work and research, which will allow you to proactively address your own blindspots and maximize your potential before ever undergoing a third-party ethics review.
Events:
As part of our public competence-building efforts, we host frequent events spanning different subjects related to building responsible AI systems. We also share events from the broader AI ethics ecosystem.
If you know of any events that you think we should feature, please don’t hesitate to send us an email at support@montrealethics.ai.
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this newsletter and know someone else who could benefit from it, please share it with them!
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below:
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai