The AI Ethics Brief #48: Privacy Paradox, artifact affordances, the AI gambit, and more ...
Are we ready for robot art exhibitions? What does that even mean?
👋 Welcome to another edition of the Montreal AI Ethics Institute's weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week's Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you'd prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you're not already signed in to Substack, you'll be asked to enter your email address again. Please do so, and you'll be directed to the page where you can purchase the subscription.
This week's overview:
✍️ What we're thinking:
How Artifacts Afford: The Power and Politics of Everyday Things
The importance of goal setting in product development to achieve Responsible AI
🔬 Research summaries:
Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe
The AI Gambit - Leveraging Artificial Intelligence to Combat Climate Change
📰 Article summaries:
Myanmar: Facial Recognition System Threatens Rights (Human Rights Watch)
Google's FLoC Is a Terrible Idea (Electronic Frontier Foundation)
A 'splinternet' won't solve global cyber defense problems (C4ISRNET)
Robot artist Ai-Da ready for her first art exhibition (TRT World)
But first, our call-to-action this week:
Tomorrow, attend The State of AI Ethics Panel at 12PM!
What's next for AI Ethics in 2021? And what is the broader historical significance of the mistreatment of Dr. Timnit Gebru by Google? Hear from a world-class panel, including:
Danielle Wood - Assistant Professor in the Program in Media Arts & Sciences, MIT (@space_enabled)
Katlyn M Turner - Research Scientist, MIT Media Lab (@katlynmturner)
Catherine D'Ignazio - Assistant Professor of Urban Science and Planning in the Department of Urban Studies and Planning, MIT (@kanarinka)
Victoria Heath (Moderator) - Associate Director of Governance & Strategy, Montreal AI Ethics Institute (@victoria_heath7)
Abhishek Gupta - Founder, Montreal AI Ethics Institute (@atg_abhishek)
📅
March 24th (Wednesday)
🕛 12 PM - 1:30 PM EST
🎫 Get free tickets
✍️ What we're thinking:
From the Founder's Desk:
The importance of goal setting in product development to achieve Responsible AI by Abhishek Gupta
" ... sometimes we get excited about new technology and jump head-first into taking it to find a problem that we can solve with our shiny new toy.
The lack of clarity on why a team is working on a project can have consequences for adherence to responsible AI principles. The exercise of goal setting helps us foreground the reasons for doing a certain project.
This is the first step in making sure that we can centre responsible AI principles in the project and inject that in the foundations."
To delve deeper, read the full article here.
The Sociology of AI Ethics:
How Artifacts Afford: The Power and Politics of Everyday Things
Davis's book is an accessible and succinct treatment of technology as a journey through social power and politics, and it serves as an invaluable guidebook for tech analysts, designers, and builders willing to venture on that path. Davis traces the idea of 'affordance' through its intellectual history to show how, despite its multifarious meanings and (mis)uses, it remains a useful concept. She introduces the 'mechanisms and conditions framework' of affordances, which specifies how technologies reflect and shape human social behaviour, giving us a transferable tool for critical analysis and intentional design.
To delve deeper, read the full summary here.
🔬 Research summaries:
Artificial Intelligence and the Privacy Paradox of Opportunity, Big Data and The Digital Universe
Thanks to the pandemic, increasing internet connectivity, and companies sharing our data ever more efficiently, even our most private data… isn't. This paper explores data privacy in an AI-enabled world. Data awareness has increased since 2019, but the fear remains that Smith's findings will stay too relevant for too long.
To delve deeper, read the full summary here.
The AI Gambit - Leveraging Artificial Intelligence to Combat Climate Change
2020's record-breaking temperatures renewed the push for technological solutions to climate change. In this report, the authors offer recommendations to help policymakers assess AI's potential to direct climate action without exacerbating the technology's own environmental impact.
To delve deeper, read the full summary here.
📰 Article summaries:
Myanmar: Facial Recognition System Threatens Rights (Human Rights Watch)
What happened: Under the guise of a 'Safe City' project, facial recognition technology has been deployed in two major cities in Myanmar. The technology was supplied by Huawei and other providers through CCTV infrastructure powered by AI capable of recognizing faces and license plates.
Why it matters: This is problematic because of the recent coup by the military junta and its suspension of several provisions, stripping away fundamental rights in the country, including the right to privacy. The system will give the junta even more power to disperse any protests that might be organized, further diminishing people's rights.
Between the lines: Deploying infrastructure without adequate public consultation and accountability mechanisms can lead to problems when control falls into the hands of malicious actors, who can repurpose something initially built for a different aim to serve their own needs. With all the rapid approvals and deployments of similarly invasive technologies during COVID-19, we must pay attention and call out potential problems as early as possible to prevent situations like the one taking place in Myanmar right now.
Google's FLoC Is a Terrible Idea (Electronic Frontier Foundation)
What happened: In trying to move the world towards a more privacy-friendly posture, Google has proposed FLoC (Federated Learning of Cohorts), a mechanism that removes third-party cookies from your browser (what advertisers use to track your web activity) and replaces them with an identifier that you share with other 'similar' users. Similarity is computed using the SimHash algorithm, and the FLoC scheme asserts that your cohort won't be so small that you can be identified easily.
Why it matters: You're still being tracked. Your FLoC ID is computed as a summary of your recent web browsing history, and you share it with 'similar' users. For it to be useful to advertisers, it has to provide some identification of you, so it leaks information about the web activity you most likely performed recently. And since it groups you into cohorts, if you are part of a group, such as the LGBTQ community, and don't want to be profiled or discriminated against, you don't have many options: your recent behaviour summary is presented to every advertiser that opts into the scheme.
Between the lines: What is evident from the project's GitHub page is that auxiliary information about you (such as when you log in to a website), correlated with your FLoC ID, gives the website even more information it can use to target you. In addition, since FLoC will use unsupervised learning to create the cohorts, they could end up aligning with racial categories. Google has said that it would rejig the groups if that starts to happen. How likely is that? We'll see.
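To make the cohort mechanism concrete, here is a toy SimHash-style sketch in Python. It is illustrative only, not Google's actual FLoC implementation (the real scheme's feature extraction, bit widths, and cohort-assignment rules differ), and the domain names are made up.

```python
import hashlib

def simhash(domains, bits=16):
    # Toy SimHash: each visited domain's hash votes +1 or -1 on every bit
    # position, and the sign of each column total becomes one bit of the
    # fingerprint, so similar histories yield fingerprints that differ in few bits.
    counts = [0] * bits
    for domain in domains:
        h = int(hashlib.sha256(domain.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, c in enumerate(counts) if c > 0)

def hamming(a, b):
    # Number of bits on which two fingerprints disagree.
    return bin(a ^ b).count("1")

alice = ["news.example", "shoes.example", "recipes.example", "maps.example"]
bob   = ["news.example", "shoes.example", "recipes.example", "weather.example"]
carol = ["cars.example", "stocks.example", "golf.example", "flights.example"]

# Alice and Bob overlap heavily, so their fingerprints should usually sit closer
# to each other than either does to Carol's; grouping users whose fingerprints
# match (or share a prefix) produces the kind of cohorts advertisers would see.
print(hamming(simhash(alice), simhash(bob)), hamming(simhash(alice), simhash(carol)))
```

The privacy concern in the article follows directly from this construction: the fingerprint is a function of your browsing history, so anyone who sees your cohort ID learns something about that history.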
A 'splinternet' won't solve global cyber defense problems (C4ISRNET)
What happened: Since 2019, at least 35 countries have placed bans on apps that threaten their conceptions of how they want to manage their regimes. The most recent incident is the swift ban Clubhouse faced in China, preceded by TikTok in India a few months earlier. Such a splintered internet exacerbates problems of global cybersecurity, making it harder for countries to coordinate with each other. Unlike the physical world, the cyber world doesn't have boundaries that can't be scaled.
Why it matters: With many more people working remotely, cyber attacks have gained prominence since March 2020 and have led to significant breaches, including the recent SolarWinds attack. Coordinating with other countries' cyber defense centres (only a few countries have mature capabilities) and sharing threat signatures globally will be crucial if we're to move more and more of our lives into the digital realm.
Between the lines: Perhaps the regimes that engage in this behaviour are hesitant for two reasons. First, they want to continue to maintain tight control over 'their' cyberspace without much concern for splintering. Second, given that some of these countries have advanced cyber offensive and defensive capabilities, they want to remain superior for strategic reasons and don't necessarily want to help others build up cyber capacity.
Robot artist Ai-Da ready for her first art exhibition (TRT World)
What happened: A robot artist is being given space to display three pieces at an art exhibition in May 2021, with the goal of helping people contemplate the relationship between humans and machines through self-portraits made by an entity that doesn't have a sense of 'self'.
Why it matters: Art is a powerful instrument for helping us examine issues that are ahead of our time: for example, what relationships between humans and machines might look like, and how machines, as they become more capable and gain more agency, can exhibit that agency in ways traditionally reserved for humans.
Between the lines: There are some worrying aspects here: the anthropomorphization of the robot artist, giving it a gender, and having it speak alongside human artists who have real artistic agency. We might be getting ahead of ourselves in how we portray the capabilities of existing AI systems.
From elsewhere on the web:
The Role Africa can play in AI Ethics
Our Partnerships Manager, Connor Wright, will be speaking at this event!
Amongst the top 100 websites, 99 are commercial. Only one is designed to perform any kind of public service: Wikipedia. In much the same way, left to its own devices, the development of AI will embody predominantly commercial values. How can we ensure that AI is developed in service of the public good?
📅
Mar 26, 2021 (Friday)
🕔 5 AM - 7:00 AM EST
🎫 Get free tickets
Guest post:
If you've got an informed opinion on the impact of AI on society, consider writing a guest post for our community - just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or send a completed draft.
In case you missed it:
Explaining and Harnessing Adversarial Examples
A bemusing weakness of many supervised machine learning (ML) models, including neural networks (NNs), is adversarial examples (AEs). AEs are inputs generated by adding a small perturbation to a correctly classified input, causing the model to misclassify the resulting AE with high confidence. Goodfellow et al. propose a linear explanation of AEs, in which the vulnerability of ML models to AEs is considered a by-product of their linear behaviour and high-dimensional feature space. In other words, small perturbations of an input can alter its classification because the change in NN activation (as a result of the perturbation) scales with the size of the input vector.
Identifying ways to handle AEs effectively is of interest for problems like image classification, where the input consists of intensity data for many thousands of pixels. A method of generating AEs called the 'fast gradient sign method' badly fools a maxout network, leading to an 89.4% error rate on a perturbed MNIST test set. The authors propose an 'adversarial training' scheme for NNs, in which an adversarial term is added to the loss function during training.
This dramatically reduces the error rate of the same maxout network to 17.4% on AEs generated by the fast gradient sign method. The linear interpretation of adversarial examples suggests an approach to adversarial training that improves a model's ability to classify AEs and helps interpret properties of AE classification that the previously proposed nonlinearity and overfitting hypotheses do not explain.
To delve deeper, read the full summary here.
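To make the two ideas above concrete, here is a minimal sketch of the fast gradient sign method and the mixed clean/adversarial training objective, assuming a PyTorch image classifier; the function names, epsilon, and the loss-mixing weight alpha are illustrative choices rather than the paper's exact experimental setup.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.25):
    # Fast gradient sign method: nudge every input feature by epsilon in the
    # direction that increases the loss, i.e. x_adv = x + epsilon * sign(grad_x loss).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel intensities in a valid range

def adversarial_training_step(model, optimizer, x, y, epsilon=0.25, alpha=0.5):
    # Adversarial training: mix the loss on clean inputs with the loss on
    # FGSM-perturbed inputs, so the model learns to resist the attack direction.
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = (alpha * F.cross_entropy(model(x), y)
            + (1 - alpha) * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the perturbation is aligned with the sign of the gradient, its effect on a roughly linear model grows with input dimensionality, which is exactly the linear explanation the summary describes.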
Take Action:
Events:
Register to watch our Learning Community seminars!
Want to audit our learning community seminars and be a fly on the wall? Here's your chance: we're live-streaming all of them via Zoom every Wednesday at 5 PM, beginning this week. Come hang out and contribute to the discussion via chat.
📅
March 17th - April 28th (every Wednesday)
🕔 5 PM - 6:30 PM EST
🎫 Get free tickets