

The AI Ethics Brief #52: AI poetry, digital technical objects, EU AI regulations, and more ...
Can you tell human and AI poetry apart?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~12-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
From the Founder’s Desk: Systems Design Thinking for Responsible AI
🔬 Research summaries:
One Map to Rule Them All? Google Maps as Digital Technical Object
AI vs. Maya Angelou: Experimental Evidence That People Cannot Differentiate AI-Generated From Human-Written Poetry
📰 Article summaries:
EU to launch first AI regulations (Unite.AI)
How Facebook’s Ad System Lets Companies Talk Out of Both Sides of Their Mouths (The Markup)
Blood, Poop, and Violence: YouTube Has a Creepy Minecraft Problem (Wired)
Error-riddled data sets are warping our sense of how good AI really is (MIT Tech Review)
But first, our call-to-action this week:
Read The Business Case For AI Ethics: Moving From Theory To Action (All Tech Is Human)
How do you enact change inside an organization to operationalize AI ethics? All Tech Is Human, an organization that is building the Responsible Tech pipeline, has assembled a diverse range of over 100 collaborators to tackle this complex question.
This first draft features an outline of their framework and direction, along with 28 interviews with leaders in the Responsible AI space, including our founder Abhishek Gupta, who wrote the foreword.
✍️ What we’re thinking:
From the Founder’s Desk:
Systems Design Thinking for Responsible AI by Abhishek Gupta
There is rarely a bare AI system. Typically, AI capabilities are included as a subcomponent of a larger product or service offering. Such a product or service offering exists in a social ecosystem where there are specificities of culture and context.
So, when thinking about building an AI system and the impact that it might have on society, it is important to take a systems design thinking approach to be as comprehensive as possible in assessing the impacts and proposing redressal mechanisms.
To delve deeper, read the full article here.
🔬 Research summaries:
One Map to Rule Them All? Google Maps as Digital Technical Object
Few among us can now navigate unknown spaces without relying on the assistance of digital maps. Scott McQuire reveals how Google Maps operates as a digital technical object that works to reconfigure our understanding of time, space and contemporary social life.
To delve deeper, read the full summary here.
AI vs. Maya Angelou: Experimental Evidence That People Cannot Differentiate AI-Generated From Human-Written Poetry
Can we tell the difference between a machine-generated poem and a human-written one? Do we prefer one over the other? Researchers Nils Köbis and Luca D. Mossink examine these questions through two studies observing human behavioural reactions to a natural language generation algorithm, OpenAI’s Generative Pre-trained Transformer 2 (GPT-2). A minimal generation sketch follows below.
To delve deeper, read the full summary here.
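For readers curious what this kind of generation looks like in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released gpt2 checkpoint; it is not the authors’ experimental pipeline, and the prompt line and sampling parameters are invented for illustration.

```python
# Minimal sketch: asking an off-the-shelf GPT-2 model to continue a poem.
# Assumes the Hugging Face `transformers` package (plus a backend such as PyTorch).
# This is NOT the study's experimental pipeline; prompt and parameters are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Still I rise, like dust upon the morning wind,"  # invented prompt line
candidates = generator(
    prompt,
    max_length=60,           # total length in tokens, prompt included
    num_return_sequences=3,  # produce three continuations to compare
    do_sample=True,          # sample rather than decode greedily
    top_k=50,
    temperature=0.9,
)

for i, c in enumerate(candidates, 1):
    print(f"--- candidate {i} ---")
    print(c["generated_text"])
```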
📰 Article summaries:
EU to launch first AI regulations (Unite.AI)
What happened: A leaked draft document gives us a glimpse of proposed regulations for AI in the EU. These would be the first such regulations to place clear limits on the use of AI in a wide range of high-risk contexts. The red-lined zones include the use of AI in credit scoring, criminal justice, and the provision of social services. The draft also prohibits the use of AI to manipulate people’s decisions, behaviours, and opinions, which would cover both commercial and political manipulation.
Why it matters: Just as with the GDPR, this has the potential to rewire the industry’s approach to the ethical use of AI. It will push companies to be more careful and deliberate in their use of AI, and perhaps even steer them away from developing and testing systems in these domains in the first place. The proposed financial penalties are in line with those under the GDPR.
Between the lines: As of now, the jurisdiction remains unclear, even more so than with the GDPR, because data might be used to train an AI system in one place and the resulting system distributed for consumption elsewhere, making questions of scope murky.
How Facebook’s Ad System Lets Companies Talk Out of Both Sides of Their Mouths (The Markup)
What happened: The Markup discovered that companies target people on Facebook with different messages based on their political leanings. While targeted advertising is not new, what was interesting to observe here was the use of radically different phrasing and imagery depending on whether someone leaned conservative or liberal.
Why it matters: The degree of targeting granularity that Facebook offers is significant, and companies like Comcast and Exxon Mobil, which don’t have the best public images, can use these advertising tools to burnish their reputations. They do so by phrasing text and crafting images that are likely to appeal to specific audiences. While this was done to some extent before, the granularity of targeting and the variation in messaging are much more acute now.
Between the lines: Continued pressure from organizations like The Markup, along with tools like the Citizen Browser and the NYU Ad Observatory, will play an important role in exposing such practices and in showing how long a road still lies ahead. Funding more open-source tooling and studies will be another essential arrow in our quiver.
Blood, Poop, and Violence: YouTube Has a Creepy Minecraft Problem (Wired)
What happened: An investigation by Wired revealed disturbing thumbnails on highly viewed YouTube videos about innocuous topics like Minecraft and Among Us, games played primarily by children. While the videos themselves didn’t contain as much inappropriate material as earlier scandals like Elsagate, the thumbnails are prominently displayed in easily accessible places frequented by children.
Why it matters: YouTube has consistently struggled to moderate content, and its inability to do so effectively for children is particularly egregious. With the pandemic hitting busy parents hard, many have relied on YouTube to keep their children occupied while they catch a break. Problematic content that can show up this easily on children’s screens is a grave danger.
Between the lines: YouTube Kids is supposed to be the clean, child-friendly version of the platform, but incidents like Elsagate have shown that it isn’t immune to adversarial manipulation, and considerable work remains before it becomes a place where children can go unescorted.
Error-riddled data sets are warping our sense of how good AI really is (MIT Tech Review)
What happened: Researchers from MIT discovered large numbers of label errors in the standard datasets used to benchmark the performance of AI systems. ImageNet and QuickDraw, for example, are estimated to have label error rates of roughly 6% and 10%, respectively. Large datasets are already known to contain sexist and racist labels; wrong labels in otherwise neutral categories exacerbate the problem.
Why it matters: Performance evaluation and model selection are done on the basis of metrics computed against such benchmarks, and if the benchmark labels are incorrect (in the context of supervised learning), we get an inflated sense of a system’s capability. The researchers also found that simpler models often outperformed more complex ones once the erroneous labels were corrected, strengthening the case for simplicity in our modeling approaches (see the toy sketch after this summary).
Between the lines: Data hygiene, the practice of ensuring that the datasets we use are clean and correctly labelled, is an important facet of good machine learning practice. It also has ethical implications, especially in high-impact scenarios, and must be prioritized in any AI system’s development.
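To make the mechanism above concrete, here is a toy simulation, not the MIT team’s methodology: all the numbers except the roughly 6% error rate are invented, and the two “models” are just stylized predictors. It shows how label errors in a benchmark can inflate measured accuracy and even flip which model looks better.

```python
# Toy illustration: how label errors in a benchmark test set can inflate scores
# and flip a model comparison. Numbers are invented except the ~6% error rate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y_true = rng.integers(0, 2, size=n)              # ground-truth labels

# Corrupt ~6% of the benchmark labels (roughly the rate reported for ImageNet).
noise = rng.random(n) < 0.06
y_bench = np.where(noise, 1 - y_true, y_true)    # what the benchmark actually contains

# Hypothetical simple model A: right about the true label 90% of the time,
# with mistakes independent of the benchmark's label errors.
pred_a = np.where(rng.random(n) < 0.90, y_true, 1 - y_true)

# Hypothetical high-capacity model B: tracks the noisy benchmark labels (93%)
# rather than the ground truth, as if it had learned the benchmark's quirks.
pred_b = np.where(rng.random(n) < 0.93, y_bench, 1 - y_bench)

def acc(pred, labels):
    return (pred == labels).mean()

print(f"vs. noisy benchmark:  A = {acc(pred_a, y_bench):.3f}  B = {acc(pred_b, y_bench):.3f}")
print(f"vs. corrected labels: A = {acc(pred_a, y_true):.3f}  B = {acc(pred_b, y_true):.3f}")
```

Against the noisy benchmark, model B appears stronger; once the labels are corrected, the simpler model A comes out ahead, echoing the kind of ranking flip the researchers describe.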
From elsewhere on the web:
The Business Case For AI Ethics: Moving From Theory To Action (All Tech Is Human)
How do you enact change inside an organization to operationalize AI ethics? All Tech Is Human, an organization that is building the Responsible Tech pipeline, has assembled a diverse range of over 100 collaborators to tackle this complex question.
This first draft features an outline of their framework and direction, along with 28 interviews with leaders in the Responsible AI space, including our founder Abhishek Gupta, who wrote the foreword.
To delve deeper, read the full report here.
In case you missed it:
A Snapshot of the Frontiers of Fairness in Machine Learning
In this succinct review of the scholarship on Fair Machine Learning (ML), Chouldechova and Roth outline the major strides taken towards understanding algorithmic bias, discuss the merits and shortcomings of proposed approaches, and present salient open questions on the frontiers of Fair ML. These include statistical vs. individual notions of fairness, the dynamics of fairness in socio-technical systems, and the detection and correction of algorithmic bias. A short code sketch of one statistical fairness notion follows below.
To delve deeper, read the full summary here.
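To ground one of those open questions, here is a small, self-contained sketch of a statistical fairness notion, demographic parity (equal positive-prediction rates across groups); the data, scores, and threshold are all invented for illustration.

```python
# Minimal sketch of demographic parity, a statistical (group-level) fairness notion.
# The protected attribute, model scores, and decision threshold are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
group = rng.integers(0, 2, size=n)          # protected attribute: group 0 or group 1
scores = rng.random(n) + 0.15 * group       # a hypothetical model's scores, shifted for group 1
y_hat = (scores > 0.5).astype(int)          # positive decisions

rate_0 = y_hat[group == 0].mean()
rate_1 = y_hat[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```

Individual notions of fairness, by contrast, ask that similar individuals receive similar predictions, something a single group-level statistic like this cannot capture.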
Take Action:
Help us improve this newsletter — give us feedback!
We just want to know 4 things:
Why do you read this newsletter?
What do you like most about it?
What would you like more of?
What should we do less of, or stop doing?
Events:
Self-disclosure for an AI product: A practical workshop and feedback session
We’re partnering with Open Ethics to host a discussion on self-disclosure, where participants can learn about, and companies can demonstrate, their AI product self-disclosure process. This includes looking at how an AI product was built and the downstream effects it may have on the people using it.
📅 May 13th (Thursday)
🕛 1:00PM–2:30PM EST
🎫 Get free tickets