AI Ethics Brief #61: AI in the Middle East, what's missing in teaching tech ethics today, and more ...
Did you know that TikTok was changing the shapes of people's faces?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~13-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in to Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
A Social and Environmental Certificate for AI Systems
What’s missing in the way Tech Ethics is taught currently?
🔬 Research summaries:
Artificial Intelligence and Inequality in the Middle East: The Political Economy of Inclusion
📰 Article summaries:
TikTok changed the shape of some people’s faces without asking
Can Schools Police What Students Say on Social Media?
Chinese AI lab challenges Google, OpenAI with a model of 1.75 trillion parameters
‘Care bots’ are on the rise and replacing human caregivers
But first, our call-to-action this week:
The Challenges of AI Ethics in Different National Contexts
The Montreal AI Ethics Institute is partnering with I2AI to host a discussion about the different challenges that national contexts can present when trying to implement AI ethics: for example, differing cultural views, differing levels of AI literacy, and differing levels of resources available to dedicate to such implementation.
📅 June 30th (Wednesday)
🕛 11 AM – 12:30 PM EST
✍️ What we’re thinking:
Founder’s Desk:
A Social and Environmental Certificate for AI Systems
As more countries realize the economic opportunities that AI has to offer, large societal problems have also been lumped under the category of things that can be “solved” using AI. This is reflected in the national AI strategies of various countries, where grandiose claims are made that if only we throw enough computation, data, and the pixie dust of AI at them, we will be able to solve, among other things, the climate crisis that looms large over our heads.
To delve deeper, read the full article here.
Office Hours:
What’s missing in the way Tech Ethics is taught currently?
Two experts in this field, Heather von Stackelberg and Mathew Mytka, shared their ideas and experiences on what’s missing in the way tech ethics is taught today, along with other vital issues.
To delve deeper, read the full article here.
🔬 Research summaries:
Artificial Intelligence and Inequality in the Middle East: The Political Economy of Inclusion
The Middle East and North Africa region could prove an exciting incubator for the positive use of AI, especially given how different countries have prioritized its development. However, the tendency to favour economic growth over improvements in social services seems to have deprived a generally youthful population of opportunities to put their digital literacy skills to use. Regional peculiarities complicate this further, making the need for an environment conducive to developing AI within the region ever more apparent.
To delve deeper, read the full summary here.
📰 Article summaries:
TikTok changed the shape of some people’s faces without asking
What happened: In May this year, TikTok users reported that the in-app camera was altering their images and that they couldn’t find a way to turn the effect off. One user, for example, said that her jawline was altered, changing her appearance to look more “feminine.” The author of the article found that the issue was isolated to Android phones; when they probed further, the company rolled back the change and said it was a temporary glitch.
Why it matters: Altering someone’s appearance without their consent and without giving them an option to opt out violates the trust and the implicit contract that users have with the app and its maker. Especially in the case of a large maker with worldwide reach, this looks like an imposition of patterns practiced in one part of the world (notably in China, where other apps apply beauty filters automatically) on the rest of the world.
Between the lines: Censorship can take many forms, and one might construe this as a form of norms imposition originating in a particular culture. Had TikTok been transparent about the experiment, it would have increased users’ trust rather than provoking the blowback it now faces. Users found themselves stuck without options and, more importantly, without an explanation for what went down.
Can Schools Police What Students Say on Social Media?
What happened: A student faced repercussions from her school over remarks she made on social media, igniting an intense debate about where the boundaries of free speech lie and what role schools should play in policing them. While many different viewpoints have been offered, at the moment, as per reporters, the Supreme Court is siding with the student, agreeing that the school overstepped its boundaries. The local ACLU chapter is fighting the case on behalf of the student, while the school’s legal team has remained silent in its comments.
Why it matters: It is well accepted that schools have authority on their premises to discipline students when they cross boundaries in ways that can harm staff and other students on campus. But, increasingly, as the definition of campus extends into the digital world, those boundaries blur quite significantly. What students do online might remain outside the purview of school authorities, but there is ample risk of “outside” online activity pouring back into activities on campus, and this is what causes the dilemma in determining appropriate boundaries in this case and in similar ones before it.
Between the lines: This raises classical questions about the boundaries of free speech and how it may infringe on others’ rights. It is even more important in an age where the internet allows any voice to be heard and amplified, quickly escalating arguments. The boundaries of what constitutes free speech and how it may or may not be policed have been studied for many decades, and the emerging literature on applying these ideas in the internet age will play a significant role in the future of online discourse.
Chinese AI lab challenges Google, OpenAI with a model of 1.75 trillion parameters
What happened: The Beijing Academy of AI (BAAI) has released a very large-scale multi-modal model called Wudao that is about 10% larger than Google’s Switch Transformer, the current largest model, and 10x larger than OpenAI’s famous GPT-3. BAAI announced stellar results on many different benchmark tasks, including examples of how it can beat OpenAI’s DALL-E at generating images from text descriptions.
Why it matters: Multi-modal models that can operate on different kinds of inputs, like images and text, are becoming more and more important in AI, and Wudao presents a giant leap forward. For example, much of the disinformation and hate speech on social media platforms today takes the form of memes, which are multi-modal in that they combine images and text. Automated content moderation for this kind of content has been challenging, and such models could present viable options to address this challenge.
Between the lines: Unfortunately, there is still an adversarial dynamic between China and the US in terms of AI advances, where developments are always touted as a race against each other. Increased collaboration would yield even better results, especially on the front of making sure that these systems are ethical, safe, and inclusive. Also worth noting is the vastly different funding model that BAAI has compared to privately funded labs like OpenAI and DeepMind. This will lead to different priorities in terms of what is researched, and hence greater collaboration between such entities can have a normalizing effect on the overall state of the ecosystem.
‘Care bots’ are on the rise and replacing human caregivers
What happened: When we imagine carebots, we think of friendly-faced robots zipping around a healthcare facility, but the truth is much more mundane: they are embedded in the software responsible for allocating care hours, monitoring patients’ health, and directing staff in these facilities to different patients based on needs assessed in an opaque fashion. The authors of the article argue that while these carebots are already here, the future need not be dystopic; we can still shape how they integrate with our healthcare facilities by being more deliberate about ensuring autonomy, dignity, and warmth in tending to humans at their most vulnerable.
Why it matters: Attention to algorithms deployed in a healthcare context tends to focus heavily on issues of bias, privacy, and the like that fall under the large umbrella of Responsible AI. We also need to consider how these systems affect people’s perception of care and how comfortable and cared for patients feel when interacting with them, as well as the impact they have on the workflows of healthcare workers.
Between the lines: As companies rushed to bridge the care worker shortage in the wake of the pandemic, issues of privacy and bias were swept aside in the interest of expediency. We can’t let our guard down as we bring in systems that have yet to prove their usefulness in a manner consistent with the values we care about in a healthcare setting. We have to be vigilant and, above all, deliberate in integrating these systems into the existing healthcare system. Most of all, we need to involve domain experts, including the healthcare workers who will bear the brunt of the decisions made by these systems, in addition to the patients themselves.
From elsewhere on the web:
Concordia lands $1.6M grant for artificial intelligence and software engineering research
Emad Shihab will lead the training program on the development and social aspects of AI systems. The Montreal AI Ethics Institute is proud to be a part of the consortium that will be working on this project.
To delve deeper, read the full press release here.
Putting AI ethics into practice | Abhishek Gupta
How do you move from discussing AI ethics in the abstract to putting them into practice? Abhishek Gupta, founder of the Montreal AI Ethics Institute and a Machine Learning Engineer at Microsoft, shares tools and best practices and encourages data scientists to share and learn from failures.
To delve deeper, listen to the podcast here.
The ethics of using AI in war
Our founder Abhishek Gupta presented at the Nonviolence International Southeast Asia Conference on Multisector Perspectives on A Treaty Banning Fully Autonomous Weapons Systems on June 14, 2021.
To delve deeper, read a previously published article that highlights these ideas.
How do we get to more sustainable AI systems? An analysis and roadmap for the community
Our founder Abhishek Gupta presented at the Sustainable AI EU conference highlighting the current gaps in the research into assessing the environmental impacts of AI. He also presented a roadmap for the community to bridge those gaps.
To delve deeper, read this article that covers some ideas that were presented at the conference.
In case you missed it:
The algorithmic imaginary: exploring the ordinary effects of Facebook algorithms
Bucher explores the spaces where humans and algorithms meet. Using Facebook as a case study, she examines platform users’ thoughts and feelings about how the Facebook algorithm impacts them in their daily lives. She concludes that, despite not knowing exactly how the algorithm works, users imagine how it works. The algorithm, even if indirectly, not only produces emotions (often negative) but also alters online behaviour, which in turn exerts social power back on the algorithm in a human-algorithm interaction feedback loop.
To delve deeper, read the full summary here.
Take Action:
Events:
The Challenges of AI Ethics in Different National Contexts
The Montreal AI Ethics Institute is partnering with I2AI to host a discussion about the different challenges that national contexts can present when trying to implement AI ethics: for example, differing cultural views, differing levels of AI literacy, and differing levels of resources available to dedicate to such implementation.
📅 June 30th (Wednesday)
🕛 11 AM – 12:30 PM EST