AI Ethics Brief #76: Fusing art and engineering, renewal of social theory, the IBM case study, and more ...
Why should you hire a Chief AI Ethics Officer?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~16-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page that asks you to enter your email address again if you’re not already signed in. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Fusing Art and Engineering for a more Humane Tech Future
🔬 Research summaries:
Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda
Digital transformation and the renewal of social theory: Unpacking the new fraudulent myths and misplaced metaphors
Responsible Use of Technology: The IBM Case Study
📰 Article summaries:
The pandemic is testing the limits of face recognition
Why you should hire a chief AI ethics officer
There's a Multibillion-Dollar Market for Your Phone's Location Data
We need concrete protections from artificial intelligence threatening human rights
📖 Living Dictionary:
Supervised Learning
💡 ICYMI
Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics
But first, our call-to-action this week:
Follow us on Twitter and LinkedIn if you haven’t yet - we publish findings from our community engagements, our research work, news on our upcoming programs, funding opportunities, and more!
✍️ What we’re thinking:
Fusing Art and Engineering for a more Humane Tech Future
Interview with Domhnaill Hernon, Global Lead of Cognitive Human Enterprise at EY and former Head of Experiments in Arts and Technology (E.A.T.) at Nokia Bell Labs
Marshall McLuhan believed that artists were the best probes into the future of technology because they lived on the frontiers. They were the most likely to take technology in directions beyond the intentions of the scientists and engineers. But according to Domhnaill Hernon, artists don’t just think outside the box in terms of features and applications; their most important contribution to tech innovation is the ability to create a much-needed human-centric vision of the future.
To delve deeper, read the full article here.
🔬 Research summaries:
Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda
The expanding use of Artificial Intelligence (AI) in public governance worldwide has opened up new opportunities but has also created challenges. This paper systematically reviews the existing literature on the implications of using AI in public governance and then develops a research agenda.
To delve deeper, read the full summary here.
Digital transformation and the renewal of social theory: Unpacking the new fraudulent myths and misplaced metaphors
With the emergence of digital technology, society has changed immeasurably. Questioning the status quo has become less pressing than simply continuing to use the next digital service. Yet reflection is one of the most critical skills for preventing a digital future guided and dominated by the few.
To delve deeper, read the full summary here.
Responsible Use of Technology: The IBM Case Study
In recent summaries, we have stressed that private companies have at times taken the lead in providing guidelines for the responsible use and development of AI technologies. The World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University are collaborating to survey the work of these companies, and they have recently focused on IBM. In this summary, we go through the main points of their latest white paper, discussing the importance and novelty of IBM’s approach.
To delve deeper, read the full summary here.
📰 Article summaries:
The pandemic is testing the limits of face recognition
What happened: The article dives into what happens when larger portions of our society’s core operating infrastructure become automated, often run by private companies. Facial recognition technology has already been shown to be deeply flawed, and the article documents the case of a transgender person who was left in a precarious financial position when the California government deployed facial recognition to verify identities and the system failed to correctly handle their change in appearance. In most places, automation like facial recognition is deployed as a force multiplier, allowing low-resourced governments to provide more personalized services to more people. In other places, it is pitched as a health and safety measure offering contactless alternatives.
Why it matters: Without an underlying support infrastructure composed of humans, imperfect technology layered on top of an unjust society can exacerbate existing injustices, hitting hardest those who are already marginalized. When such technology works in, say, 95% of cases, the remaining 5% need human intervention to still be able to access services. But the current wave of automation often pares human support down to the point where that 5% is permanently locked out of the services and help they need.
Between the lines: When deploying automation, design considerations are essential if it is to achieve the lofty goals of increasing access for everyone and improving the quality of service. While the technology might work in a large percentage of cases, those it cannot serve, often the same people who were marginalized before, need alternatives that still meet their needs. Without that, we risk making society worse by promoting automation as a way forward when in reality it might be one step forward and two steps back.
Why you should hire a chief AI ethics officer
What happened: As more organizations move from principles to practice, this article gives a quick overview of the Chief AI Ethics Officer role, including what its purview should be and how it can accelerate the adoption of AI ethics within organizations. The role should drive the broad definition of ethics principles for the organization, define suitable properties for its AI systems, and drive tooling and processes for practitioners. It also requires a wide range of expertise: a multi-disciplinary background, the ability to communicate effectively with diverse stakeholders, the capacity to drive company-wide engagement, and the skill to make the business case for AI ethics as a core consideration.
Why it matters: In a recently published article, we highlighted what it would take for a Chief AI Ethics Officer to succeed and why the role matters. In particular, the current problem with the move from principles to practice is that such efforts are often ad hoc or lack the executive support needed to drive meaningful change within the organization. Appointing such a position helps overcome some of these challenges.
Between the lines: But appointing such a position merely for public appearances will only cause more harm in the long run. What needs careful consideration is how much actual power the person is given and whether they have the background and skills to drive change across the organization. One of the most problematic issues at the moment is a lack of sufficient technical and operational expertise in such roles, which leads to a breakdown of strategy when it comes time to actually put these ideas into practice.
There's a Multibillion-Dollar Market for Your Phone's Location Data
What happened: We all have tons of apps on our phones, and the more privacy-minded among us turn off location services. But there is a massive market for location data, and some apps not only lack an option to disable location collection but also gather that data surreptitiously. This market is worth billions of dollars, with many players trading billions of data points on millions of individuals and selling that data for pennies. These are often companies we rarely hear about. Data brokers are required to register in Vermont and California, but in many other places they operate under different guises.
Why it matters: There have been many past cases where people’s sexual orientation or religious beliefs were exposed based on location data obtained illicitly from such data brokers. Data brokers operate in a shadowy world, buying and selling from any and all sources and trading information to build ever richer datasets that unlock more information about each person. This happens through something called the mosaic effect, whereby disparate pieces of information are combined to fill in the blanks about our lives and make inferences about our identities and behaviours.
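The mosaic effect can be illustrated with a toy sketch (all data, device IDs, and field names below are hypothetical, not drawn from any real broker dataset): an "anonymized" location trace, joined with a public directory on a quasi-identifier such as a home address, is enough to put a name back on a device.

```python
# Toy illustration of the mosaic effect (hypothetical data throughout):
# a broker's "anonymized" location records contain no names, only a
# device ID and where the device spends its nights. Joining that on a
# public directory of addresses re-identifies the device's owner.

location_records = [
    {"device_id": "d-001", "night_location": "12 Oak St"},
    {"device_id": "d-002", "night_location": "98 Elm Ave"},
]

directory = [
    {"name": "A. Smith", "address": "12 Oak St"},
    {"name": "B. Jones", "address": "98 Elm Ave"},
]

def reidentify(records, directory):
    """Link device IDs to names by matching night-time location to home address."""
    by_address = {entry["address"]: entry["name"] for entry in directory}
    return {
        rec["device_id"]: by_address.get(rec["night_location"])
        for rec in records
    }

print(reidentify(location_records, directory))
# {'d-001': 'A. Smith', 'd-002': 'B. Jones'}
```

Neither dataset identifies anyone on its own; it is the combination that does, which is why per-dataset "anonymization" offers little protection once brokers aggregate sources.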
Between the lines: A paper published in 2018 pointed out that the role of data brokers becomes even more insidious in a world where biometric data also becomes widely available through DNA testing services, facial recognition systems, and other technologies. Without robust guarantees for the security of that data (always a challenge!) and without more stringent rules on how data brokers operate, we will continue to exacerbate the risks for people whose data can be weaponized against them. This is an even bigger problem in regimes with fewer civil rights protections.
We need concrete protections from artificial intelligence threatening human rights
What happened: The article makes a succinct case that human rights-based approaches may achieve more robust adoption of responsible AI than relying on ethics principles alone. First, it argues that because ethics are grounded in values, the path to their enforcement is indirect. Second, because values can differ significantly, enforcement becomes even harder: there is no consensus or unified framework. Finally, given that human rights already have precedents established in law and enjoy some degree of universal agreement, designing more just AI systems might benefit more from following this path instead.
Why it matters: The friction between wanting responsible AI systems and actually having them in practice has been an ongoing theme. What we currently lack is regulation concrete enough to enforce ideas like privacy-by-design, ethics-by-design, and the many other X-by-design framings present in the domain of AI ethics.
Between the lines: A human rights-based approach has been proposed many times before, but it too runs into trouble: it lacks a connection to concrete practices that can translate those ideas into better-designed technologies. Whichever approach we take, we need to consult practitioners and make them an integral part of the process; otherwise, the proposed solutions will fall flat when it comes time to implement them.
From our Living Dictionary:
‘Supervised Learning’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
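As a quick illustration of the term: in supervised learning, a model learns a mapping from inputs to outputs using examples that already carry labels (the labels "supervise" the learning). Below is a minimal sketch using a 1-nearest-neighbour classifier on a hypothetical toy dataset (the hours-of-study data and function names are ours, invented for illustration):

```python
# Minimal sketch of supervised learning (hypothetical toy data):
# the model is trained on labeled examples and then predicts labels
# for new, unseen inputs.

def train_1nn(examples):
    """'Training' for 1-nearest-neighbour is simply memorizing the labeled examples."""
    return list(examples)

def predict(model, x):
    """Predict by returning the label of the closest training example."""
    nearest = min(model, key=lambda example: abs(example[0] - x))
    return nearest[1]

# Labeled training data: (hours of study, outcome) pairs.
training_data = [(1.0, "fail"), (2.0, "fail"), (6.0, "pass"), (8.0, "pass")]
model = train_1nn(training_data)

print(predict(model, 7.0))  # "pass"
print(predict(model, 1.5))  # "fail"
```

The same train-on-labeled-examples, predict-on-new-inputs pattern underlies everything from spam filters to facial recognition systems like those discussed above.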
In case you missed it:
Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics
This paper outlines the role of “ethics owners,” a new occupational group in the tech industry whose job is to examine the ethical consequences of technological innovations. The authors highlight the competing logics these owners have to navigate and two ethical pitfalls that can result from those different imperatives.
To delve deeper, read the full summary here.
Take Action:
Follow us on Twitter and LinkedIn if you haven’t yet - we publish findings from our community engagements, our research work, news on our upcoming programs, funding opportunities, and more!