The AI Ethics Brief #56: Chief AI Ethics Officer, tracking in Apple products, tech and big oil, and more ...
Can we trust AI systems?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The Chief AI Ethics Officer: A Champion or a PR Stunt?
🔬 Research summaries:
How Tech Companies are Helping Big Oil Profit from Climate Destruction
In AI We Trust: Ethics, Artificial Intelligence, and Reliability
📅 Event summaries:
10 takeaways from our meetup on AI Ethics in the APAC Region
📰 Article summaries:
To Be Tracked or Not? Apple Is Now Giving Us the Choice. (NY Times)
Facebook Oversight Board Upholds Social Network’s Ban of Trump (NY Times)
How China turned a prize-winning iPhone hack against the Uyghurs (MIT Tech Review)
The four most common fallacies about AI (VentureBeat)
But first, our call-to-action this week:
Register for The State of AI Ethics Panel (May 26th)
Now that we’re nearly halfway through 2021, what’s next for AI Ethics? Hear from a world-class panel, including:
Soraj Hongladarom — Professor of Philosophy and Director, Center for Science, Technology and Society at Chulalongkorn University in Bangkok (@Sonamsangbo)
Dr. Alexa Hagerty — Anthropologist, University of Cambridge’s Centre for the Study of Existential Risk (@anthroptimist)
Connor Leahy — Leader at EleutherAI (@NPCollapse)
Stella Biderman — Leader at EleutherAI (@BlancheMinerva)
Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)
Abhishek Gupta — Founder, Montreal AI Ethics Institute (@atg_abhishek)
📅 May 26th (Wednesday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets
✍️ What we’re thinking:
Business & AI Ethics:
The Chief AI Ethics Officer: A Champion or a PR Stunt? by Masa Sweidan
We have reached a point where the far-reaching impacts of AI’s ability to identify, prioritize and predict can be felt in virtually every industry. Over the last couple of years, both researchers and practitioners have established that the power relations embedded in these systems can deepen existing biases, affect access to reliable information and shape free speech. Many organizations have attempted to stay relevant and keep up with developments in AI Ethics by introducing the role of a Chief AI Ethics Officer (CAIEO), a position that also goes by other titles, including AI Ethicist, Ethical AI Lead and Trust and Safety Policy Advisor, to name a few.
To delve deeper, read the full article here.
🔬 Research summaries:
How Tech Companies are Helping Big Oil Profit from Climate Destruction
The tech giants Amazon, Microsoft, and Google have each set ambitious targets for climate action, including the rapid adoption of renewable energy. Yet their contracts with oil and gas producers are absent from the accounting of company CO2 emissions, even though these projects often enable more fossil fuel extraction. The support that these tech companies provide through cloud computing infrastructure and data analytics could drive global increases in emissions and accelerate the pace of climate change.
To delve deeper, read the full summary here.
In AI We Trust: Ethics, Artificial Intelligence, and Reliability
The European Commission’s High-level Expert Group on AI (HLEG) has developed guidelines for trustworthy AI, assuming that AI is something that has the capacity to be trusted. But should we make that assumption? Apparently not, according to this paper, in which the author argues that AI is not the kind of thing that can be trustworthy or untrustworthy: the category of ‘trust’ simply does not apply to AI, so we should stop talking about ‘trustworthy AI’ altogether.
To delve deeper, read the full summary here.
📰 Article summaries:
To Be Tracked or Not? Apple Is Now Giving Us the Choice. (NY Times)
What happened: The latest iOS update comes with a mandatory pop-up that apps must show, asking users for their consent to be tracked for advertising and other purposes. While some apps already offered individual settings to that effect, this update makes the prompt a universal requirement. The change in UX also makes the choice more obvious and gives users more agency.
Why it matters: Developers have found ways to operate on the fringes of what is permissible in the hopes of continuing to suck up data about users to target them with ads and other revenue-generating activities; this new update forces them to face the consequences. It will also make privacy concerns more central in the UX and hopefully elevate the discussion further, as GDPR did in 2018.
Between the lines: This isn’t a panacea. There are other ways to track users, such as fingerprinting, where a number of different behaviours and device attributes are combined to create a unique identifier that doesn’t rely on the device’s advertising identifier. The update may also push developers to search for other means of tracking. It is an inherently adversarial game, and privacy researchers and advocates always need to be on the lookout for breaches and subversion attempts by companies trying to make an extra buck from our data.
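To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of how fingerprinting can work: a handful of ordinary attributes that a device freely exposes are combined and hashed into a stable identifier. The attribute names and values are illustrative only, not taken from the article, and real fingerprinting scripts gather many more signals.

```python
import hashlib

def fingerprint(attributes: dict[str, str]) -> str:
    """Combine device attributes into a single stable, pseudonymous identifier."""
    # Sort the keys so the same set of attributes always produces the same hash.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attributes a tracking script might observe.
device_profile = {
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 14_5 like Mac OS X)",
    "screen": "390x844@3x",
    "timezone": "America/New_York",
    "language": "en-CA",
    "fonts": "Helvetica,Avenir,Menlo",
}

print(fingerprint(device_profile))  # same inputs -> same identifier, no device ID needed
```

Because these inputs rarely change, the resulting hash can serve as a persistent identifier across apps and sites, which is exactly why it sidesteps the new consent prompt.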
Facebook Oversight Board Upholds Social Network’s Ban of Trump (NY Times)
What happened: The Oversight Board shared its decision on the Trump ban, advising Facebook that the indefinite suspension was inappropriate since it isn’t an action detailed in the platform’s policies. Within six months, Facebook must now decide how to act on the Board’s recommendation that it either impose a time-limited suspension or a permanent ban, in line with the standard penalties on the platform.
Why it matters: This “judgement” will set a precedent for how high-stakes cases are handled by large organizations. Facebook’s response, in particular, will face greater scrutiny than its other decisions, given the highly charged political implications of how it acts and what precedents are created. It will also be watched closely for how much influence the Oversight Board actually has over the decisions Facebook makes. The Board has been criticized for not having enough power to compel the platform to act in line with its recommendations, though Facebook has so far followed them in four out of five decisions.
Between the lines: The Oversight Board is a first step towards greater transparency and accountability in content moderation. However, there is a lot more to be done, and I believe we need to find ways to work with the platforms to implement practices that move us towards the ultimate goal of a healthier information ecosystem.
How China turned a prize-winning iPhone hack against the Uyghurs (MIT Tech Review)
What happened: An annual hacking competition centred on zero-day exploits gave the rest of the world a window into how cybersecurity vulnerabilities are discovered and subsequently exploited for various motives. When the Chinese government caught wind of the work its citizens were doing at foreign events for monetary rewards, it created the Tianfu Cup, an internal equivalent of the popular Pwn2Own competition, which encouraged researchers and hackers to uncover vulnerabilities and be rewarded for them while keeping that knowledge within China. At one edition of the competition, a researcher named Qixun Zhao found a vulnerability that enabled breaking into new iPhones through an exploit of the Safari web browser.
Why it matters: While Apple was still working on a fix, the exploit allowed malicious actors to inflict further harm on the Uyghurs in China, who are already subject to extensive surveillance and oppression by the Chinese government. Pwn2Own works in close partnership with the companies whose software vulnerabilities are discovered, giving them an opportunity to address those issues; the new formulation, which takes this work behind the scenes, creates a black market for zero-day exploits that can significantly harm the overall state of cybersecurity in the world.
Between the lines: Human rights abuses facilitated by vulnerabilities in consumer technologies will, unfortunately, remain a heavily utilized avenue until we achieve better global cooperation on managing these risks and sharing information across geographical boundaries. Technology doesn’t stop at the border in a globalized world, especially when manufacturers like Apple sell devices that are used throughout the world.
The four most common fallacies about AI (VentureBeat)
What happened: As we discuss the ethical implications of AI, we must also examine how we perceive the capabilities, and thus the limitations, of these systems. In this article, the author covers work from Melanie Mitchell scrutinizing the various forms in which we interpret intelligence and how we project those ideas onto machines. Specifically, it examines our bias towards anthropomorphizing the capabilities of an AI system, how we might generalize too soon from narrow AI capabilities, the disconnect between the role of the brain and the rest of the body in realizing intelligence, and inaccurate communication of scientific results.
Why it matters: A more accurate understanding of the actual capabilities of AI systems will be essential if we are to make meaningful regulations, policies, and other measures to address some of the ethical challenges in the use of AI systems. Specifically, if we misunderstand (under- or overestimate) the capabilities of AI systems, we might be trying to solve for the wrong problems and set forth erroneous precedents.
Between the lines: In my experience with the domain of AI ethics, as more people have poured into the field, the lack of a shared, well-grounded, and scientifically oriented understanding of the true capabilities and limitations of current and near-term AI systems has led many people to make recommendations that are ineffectual for the goals they are trying to achieve. Either they are looking at the wrong problems to solve (because these might not be problems at all), or they are looking at problems that may never materialize but which they falsely believe already exist today.
From our Living Dictionary:
Market fundamentalism is the mistaken belief that when left alone, markets always produce the greatest possible equity in both the economic and social sectors.
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
Event Summary:
10 takeaways from our meetup on AI Ethics in the APAC Region
This event recap was co-written by Connor Wright (our Partnerships Manager) and Shannon Egan (our QRM Intern) who co-hosted our “AI Ethics in the APAC Region” virtual meetup in partnership with Women in AI and the University of New South Wales (UNSW).
To delve deeper, read the full summary here.
From elsewhere on the web:
Ethics of AI: Benefits and risks of artificial intelligence
The increasing scale of AI -- in terms of the size of neural networks, their energy use, the size of data sets, and the prevalence of the technology in society -- is raising the stakes for major ethical questions. This article also talks about our reports and mentions that “issues of ethics cover a much wider spectrum than one might think. They include algorithmic injustice, discrimination, labor impacts, misinformation, privacy, and risk and security.”
To delve deeper, read the full article here.
In case you missed it:
Post-Mortem Privacy 2.0: Theory, Law and Technology
Debates surrounding internet privacy have focused mainly on the living, but what happens to our digital lives after we have passed? In this paper, Edina Harbinja offers a theoretical and doctrinal discussion of post-mortem privacy and makes a case for its legal recognition.
To delve deeper, read the full report here.
Take Action:
Events:
The Triangle of Trust in Conversational Ethics and Design: Where Bots, Language and AI Intersect
We’re partnering with Salesforce to host a discussion about conversational ethics and design.
Conversational AI enables people to communicate via text or voice with automated systems like smart speakers, virtual assistants, and chatbots. Leveraging Automatic Speech Recognition (ASR) and Natural Language Processing (NLP), these systems can recognize speech, understand context, remember previous dialogue, access external knowledge, and generate text or speech responses.
However, conversational AI may not work equally well for everyone, and may even cause harm due to known or unknown bias and toxicity. Additionally, generating “personalities” for bots or virtual assistants creates risks of appearing inauthentic, manipulative, or offensive. In this workshop, we will discuss the issues of bias, harm, and trust where bots, language, and AI intersect.
📅 June 10th (Thursday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets
Register for The State of AI Ethics Panel (May 26th)
Now that we’re nearly halfway through 2021, what’s next for AI Ethics? Hear from a world-class panel, including:
Soraj Hongladarom — Professor of Philosophy and Director, Center for Science, Technology and Society at Chulalongkorn University in Bangkok (@Sonamsangbo)
Dr. Alexa Hagerty — Anthropologist, University of Cambridge’s Centre for the Study of Existential Risk (@anthroptimist)
Connor Leahy — Leader at EleutherAI (@NPCollapse)
Stella Biderman — Leader at EleutherAI (@BlancheMinerva)
Victoria Heath (Moderator) — Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)
Abhishek Gupta — Founder, Montreal AI Ethics Institute (@atg_abhishek)
📅 May 26th (Wednesday)
🕛 12:00PM – 1:30PM EST
🎫 Get free tickets