AI Ethics Brief #75: Artificial Intellichef, open-source software shaping AI policy, and more ...
Can we have AI ethics without ethics?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~14-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?
🔬 Research summaries:
You cannot have AI ethics without ethics
📰 Article summaries:
How open-source software shapes AI policy
Minority Voices ‘Filtered’ Out of Google Natural Language Processing Models
AI industry, obsessed with speed, is loathe to consider the energy cost in latest MLPerf benchmark
Facebook Rolls Out News Feed Change That Blocks Watchdogs from Gathering Data
📖 Living Dictionary:
Reinforcement Learning
🌐 From elsewhere on the web:
The Imperative for Sustainable AI Systems
💡 ICYMI
Aging in an Era of Fake News
But first, our call-to-action this week:
The MAIEI Learning Community cohort report is live now!
The chapters titled
“Design and Techno-isolationism”,
“Facebook and the Digital Divide: Perspectives from Myanmar, Mexico, and India”,
“Future of Work”, and
“Media & Communications & Ethical Foresight”
will hopefully provide you with novel lenses for exploring this domain beyond the usual tropes covered in AI ethics.
✍️ What we’re thinking:
Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?
With various innovative solutions like Flippy popping up on the market, the benefits of AI in the restaurant industry all seem to stem from its efficiency and precision. When working properly, AI can increase savings and improve food safety, which is especially important as we navigate this COVID-19 era. Although these aspects are vital to the success of any restaurant, the true magic happens inside the kitchen, where chefs cook delicious meals that often reflect unique social, cultural, and environmental influences. With this in mind, does AI have a place in the kitchen to support the creative process of professional chefs who have dedicated their lives to learning the techniques and intricacies of high-quality cooking?
To delve deeper, read the full article here.
🔬 Research summaries:
You cannot have AI ethics without ethics
AI systems are often fixed by looking for the broken part rather than examining the system that allowed the error to occur. The paper advocates for a more systematic examination of the AI process, an approach that makes more sense the more you think about it.
To delve deeper, read the full summary here.
📰 Article summaries:
How open-source software shapes AI policy
What happened: This article dives into the details of how the open-source software (OSS) ecosystem operates and the implications it has for how policies are developed for the governance of AI systems. It identifies a gap in current policy initiatives: they rarely examine the role of OSS in the power dynamics of the AI ecosystem. Notably, most policy discussions today focus on technology, data, talent, and funding, but rarely do they look at how the OSS ecosystem impacts all of these factors. OSS provides benefits like speeding up AI adoption, bringing more transparency to the code bases used in products and services, and accelerating fundamental advances in many fields by making AI capabilities more accessible. But it also has negative impacts on competitiveness in the market for AI solutions, and it sets standards implicitly, operating outside the purview of the standards-setting bodies that would typically act as a counterweight in the development of tools and methodologies in the domain.
Why it matters: The article highlights how the current ecosystem for AI frameworks is dominated by Google and Facebook through TensorFlow and PyTorch respectively. This is not a new phenomenon: both companies have also published the popular Angular.js and React.js frameworks that dominate frontend web development. What is interesting on closer examination is that most of the core developers on TensorFlow and PyTorch still come from Google and Facebook, giving those companies a much stronger implicit say in how the code develops in the future, and thus potentially shaping the standards that follow, since we become locked into how these frameworks are structured and operate.
Between the lines: OSS contributors need to be paid, and the funding for that needs to come from somewhere. A lot of OSS projects end up abandoned, or suffer, when there isn’t adequate funding to compensate contributors for their efforts and they choose instead to work on things that help them pay their bills. If we are serious about true democratization of tooling in OSS, we need to consider whether we can reshape the structure of the ecosystem as it exists today, perhaps towards more widely available external grants that allow anyone to sustainably contribute to such projects and help bring more diversity to the contributor list. Until then, we at least have access to such tooling, benefitting from the investments made by corporate benefactors.
Minority Voices ‘Filtered’ Out of Google Natural Language Processing Models
What happened: The article spotlights findings from a recently published report that analyzed the filters that went into creating the C4 (Colossal Clean Crawled Corpus) dataset, a subset of the much larger Common Crawl (CC) dataset. C4 was used to train Google’s T5 and Switch Transformer, two massive language models that are used in downstream products and services. The essence of the findings was that, in creating a non-toxic dataset, the aggressive filtering excluded material related to LGBTQ+ communities in non-sexual and non-offensive contexts, along with heavy filtering of colloquial and ethnicity-aligned dialects like African-American English and Hispanic-aligned English.
Why it matters: One reason large language models perform poorly on non-political, non-offensive, non-sexual material that discusses LGBTQ+ communities is that such material is either absent from these curated datasets or heavily filtered. The result is that much stronger automated content moderation gets applied to that content on social media platforms compared to other content. Products and services that consume such pretrained models then inherit these biases, because data related to these communities and dialects was simply excluded rather than the builders developing better approaches to moderation that don’t just rely on a banned list of words.
Between the lines: One of the striking things about the research effort behind the report is that the authors have made the raw C4 data available, along with versions with different levels of filtering applied, for people to analyze further. Even though the original authors of C4 (from Google) made their scripts available, the computational costs are so high that recreating C4 from CC would be out of reach for many researchers. Not only do these minority communities suffer from biases against them in content moderation; because of such misdirected filtering, they also stand to miss out on legitimate benefits from ML like machine translation and search. As the authors of the study rightly point out, we need to do better in how we process data because it has significant downstream effects.
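To make the mechanism concrete, here is a minimal, purely illustrative sketch of the kind of context-blind blocklist filtering described above. The BANNED_WORDS set and the sample documents are hypothetical stand-ins, not the actual list or data used to build C4.

```python
# Illustrative only: a naive blocklist filter of the kind discussed above.
# BANNED_WORDS is a hypothetical stand-in for a real "bad words" list.
BANNED_WORDS = {"sex", "lesbian", "gay"}  # hypothetical entries for illustration

def is_clean(document: str) -> bool:
    """Drop any document containing a banned word, regardless of context."""
    tokens = {token.strip(".,!?").lower() for token in document.split()}
    return tokens.isdisjoint(BANNED_WORDS)

docs = [
    "Our lesbian book club meets every Tuesday at the public library.",  # benign, but dropped
    "The committee published its annual budget report today.",           # kept
]
cleaned = [d for d in docs if is_clean(d)]
# Only the second document survives: context-blind filtering erases
# non-offensive mentions of LGBTQ+ communities from the training corpus.
```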
AI industry, obsessed with speed, is loathe to consider the energy cost in latest MLPerf benchmark
What happened: In the latest MLPerf benchmark results, a benchmark that compares hardware performance on AI workloads, the performance capabilities of the submitted chips rose while reporting of energy consumption figures notably dropped. The article posits that manufacturers are more interested in selling the high performance of their systems and consider energy efficiency a secondary outcome; hence, when asked to make tradeoffs, they lean towards the former.
Why it matters: We’ve spoken about the environmental impact of AI in our State of AI Ethics Report and in “The Imperative for Sustainable AI Systems”. Information about the energy consumption of the physical infrastructure used to power AI applications is essential in guiding practitioners towards an appropriate solution, something we’ve highlighted in “The current state of affairs and a roadmap for effective carbon-accounting tooling in AI.” Without that information, assessing the energy efficiency of different systems requires trying them all out, instrumenting them, and reporting the results, an exercise that manufacturers themselves could do for a fraction of the cost.
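As a rough illustration of what “instrumenting and reporting” could look like in practice, here is a minimal sketch that wraps a workload with the open-source codecarbon package’s EmissionsTracker; this is one option among several, and model_training() is a hypothetical placeholder rather than anything from the benchmark itself.

```python
# A minimal sketch, assuming the open-source `codecarbon` package is installed
# (pip install codecarbon). model_training() is a hypothetical placeholder
# for an actual training or inference workload.
from codecarbon import EmissionsTracker

def model_training():
    # Stand-in for a real workload; replace with your training loop.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="benchmark-run")
tracker.start()
try:
    model_training()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for the run

print(f"Estimated emissions for this run: {emissions_kg} kg CO2eq")
```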
Between the lines: With an overheated market for the essential hardware that powers the booming AI industry, it is understandable that manufacturers want to emphasize the performance of their hardware rather than draw attention to the massive energy consumption of these chips. What lies in our control, though, is to demand that we be provided that information, and to use our purchasing power to shape the market by rewarding the manufacturers who do provide it, in essence setting a new status quo where reporting energy figures becomes the norm.
Facebook Rolls Out News Feed Change That Blocks Watchdogs from Gathering Data
What happened: In yet another blow to researchers who rely on data from Facebook to study its impact on society, the platform has rolled out code changes, injecting superfluous elements into its website, that make it even more difficult for research projects to gather the data that fuels their efforts to study, for example, how disinformation spreads on the platform and biases in the kinds of advertisements shown to people of different demographic groups. The change has impacted the Citizen Browser project from The Markup and the team at the NYU Ad Observatory, amongst others.
Why it matters: Researchers are not the only ones affected. Screen readers, which rely on the HTML tags that have now been injected with junk code to foil these scraping attempts, have seen performance problems, making the site much less accessible for visually impaired users who rely on them to navigate the website. This is not the first time that a change to the Facebook code base has had an impact on accessibility. It violates some of the tenets of accessible web design, all in the interest of reducing transparency around ad distribution on the platform and blunting the ad blockers that people use.
Between the lines: One of the points highlighted in the article aptly sums up the current state of affairs: Facebook is working against researchers rather than with them, and this is only going to make problems worse. As the article points out, there was another instance this year when Facebook corrected previously supplied data about misinformation on the platform only after someone noticed a discrepancy between a report published by Facebook and the open data it had made available; that correction potentially affects years of research efforts. Moving away from this adversarial dynamic will be essential if we want to achieve the goal of a healthier ecosystem.
From our Living Dictionary:
‘Reinforcement Learning’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
From elsewhere on the web:
The Imperative for Sustainable AI Systems
AI systems have a massive carbon footprint. As we will explore in this article, that carbon footprint also has consequences for social justice. Here, we use sustainability to talk about not just environmental impact, but also implications for social justice and impacts on society. Though applying AI to solve environmental issues is an important area, that is not what we mean by sustainable AI here; instead, the focus of our discussion is a critical examination of the impacts of AI on the physical and social environment.
To delve deeper, read the full article here.
FRONT OFFICE DIRECTORATE: Chief Responsible AI Program, GG 15
The JAIC is looking for a professional to join the Front Office to lead the DoD Responsible Artificial Intelligence (RAI) Program. In this role, you will lead the development and promulgation of policies, practices, guidance, and metrics for the U.S. Department of Defense for the responsible development, procurement, and use of AI capabilities.
To know more, please view details on the position here.
In case you missed it:
With the release of Netflix’s The Social Dilemma, the upcoming U.S. elections, and persistent COVID-19 conspiracy theorists and deniers, online misinformation has resurfaced in the public debate as a serious threat to public safety and democracy. Lessons learned from the 2016 U.S. elections showed that older adults were the most prone to sharing fake news, with cognitive decline being the most commonly cited explanation for this behaviour.
Brashier and Schacter argue that other factors, such as greater trust, difficulty detecting lies, a lower emphasis on accuracy when communicating, and unfamiliarity with social media, should also be considered when accounting for how older generations evaluate news. Reducing fake news sharing and increasing digital literacy among older adults is key to maintaining a healthy and informed civic society. Older adults had a 70.9% turnout in the last election, compared to 46.1% among millennials, and they merit more targeted strategies to effectively reduce the sharing of fake news online.
To delve deeper, read the full summary here.
Take Action:
The MAIEI Learning Community cohort report is live now!
The chapters titled
“Design and Techno-isolationism”,
“Facebook and the Digital Divide: Perspectives from Myanmar, Mexico, and India”,
“Future of Work”, and
“Media & Communications & Ethical Foresight”
will hopefully provide you with novel lenses for exploring this domain beyond the usual tropes covered in AI ethics.