AI Ethics Brief #95: Carative AI, algorithmic domination, AI carbon footprint, cascaded debiasing, and more ...
What are employee perceptions of the effective adoption of AI principles?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~23-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The Watsons Meet Watson: A Call for Carative AI
🔬 Research summaries:
Employee Perceptions of the Effective Adoption of AI Principles
Algorithmic Domination in the Gig Economy
The AI Carbon Footprint and Responsibilities of AI Scientists
Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions
📰 Article summaries:
What Google Search Isn’t Showing You
Bellingcat's Eliot Higgins Explains Why Ukraine Is Winning the Information War
Europe Is in Danger of Using the Wrong Definition of AI
📖 Living Dictionary:
What does blockchain have to do with AI Ethics?
🌐 From elsewhere on the web:
AI Experts Warn of Potential Cyberwar Facing Banking Sector
💡 ICYMI
The Technologists are Not in Control: What the Internet Experience Can Teach us about AI Ethics and Responsibility
But first, our call-to-action this week:
What can we do as part of the AI Ethics community to help those who are suffering in the current war in Ukraine? We’re looking for suggestions, for example: tips (actions that we can all take) to prevent the spread of misinformation, resources for better understanding the deployment of lethal autonomous weapons systems (LAWS) on the battlefield, etc.
✍️ What we’re thinking:
The Watsons Meet Watson: A Call for Carative AI
According to a fifth Watson—the famous nursing theorist Jean Watson—curing alone is not enough. Caring is the key to unlocking health. Her theory of human caring emphasizes carative factors of kindness and equanimity (in contrast to technical curative factors) that begin with treating all patients as they are and respecting their values, even if they are different from your own. And such caring is what I believe is lacking in how we approach the development and application of machine learning systems and artificial intelligence (AI) more broadly today.
To delve deeper, read the full article here.
🔬 Research summaries:
Employee Perceptions of the Effective Adoption of AI Principles
The proliferation of ethical issues related to the use of artificial intelligence (AI) has led many organizations to turn to self-regulatory initiatives, most commonly AI principles. This study investigates what organizations can do to effectively adopt these AI principles, in hopes of reducing the ethical issues related to the use of AI technologies in the future.
To delve deeper, read the full summary here.
Algorithmic Domination in the Gig Economy
Despite being seen as the solution to human bias, algorithms foster a relationship of domination and a power structure between bosses and workers. We explore how this is expressed within the gig economy and the subsequent precarious situation for its on-demand workers.
To delve deeper, read the full summary here.
The AI Carbon Footprint and Responsibilities of AI Scientists
Tamburrini approaches the critical, yet underrepresented, problem of the environmental impact of AI research. He argues that shifting the metrics by which AI research and development (R&D) success is measured to encompass environmental impact could provide a means of distributing responsibility for our planet’s wellbeing.
To delve deeper, read the full summary here.
Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions
Recent years have seen a huge surge in fairness-enhancing interventions that focus on mitigating social biases at individual stages of the ML pipeline rather than across the entire pipeline. In this work, we undertake an extensive empirical study to investigate whether fairness across the ML pipeline can be enhanced by applying multiple interventions at different stages, and what the possible fallouts of doing so might be.
To delve deeper, read the full summary here.
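To make the idea of cascading interventions concrete, here is a minimal illustrative sketch (not the paper’s code or data): it chains a pre-processing intervention (group reweighing, in the spirit of Kamiran and Calders) with a post-processing one (group-specific decision thresholds) on synthetic data, then checks the resulting positive-prediction rates per group. The toy data, variable names, and fairness check are all assumptions for illustration only.

```python
# Illustrative sketch of "cascading" two fairness interventions (assumed toy setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                      # protected attribute: 0 = unprivileged, 1 = privileged
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5
y = (x[:, 0] + 0.7 * group + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

# Stage 1 (pre-processing): reweigh samples so each (group, label) cell carries
# weight proportional to what it would have if group and label were independent.
weights = np.ones(n)
for g in (0, 1):
    for lbl in (0, 1):
        cell = (group == g) & (y == lbl)
        weights[cell] = ((group == g).mean() * (y == lbl).mean()) / cell.mean()

clf = LogisticRegression().fit(x, y, sample_weight=weights)
scores = clf.predict_proba(x)[:, 1]

# Stage 2 (post-processing): pick per-group thresholds that equalize the
# positive-prediction rate (a crude demographic-parity adjustment).
target_rate = (scores > 0.5).mean()
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
y_hat = (scores > np.where(group == 1, thresholds[1], thresholds[0])).astype(int)

for g in (0, 1):
    print(f"group {g}: positive-prediction rate {y_hat[group == g].mean():.3f}")
```

The paper’s question is whether stacking interventions like these actually compounds their benefits, or whether later stages can undo or distort what earlier stages achieved; the summary above covers the empirical findings.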
📰 Article summaries:
What Google Search Isn’t Showing You
What happened: Drawing on commentary from an article that gained prominence on Hacker News about how Google Search is dying, this article exposes something that has faded into the background for many of us: search is the gateway to discovering much of what the Internet has to offer, Google is the default for most people’s experience of it, and that default may be reshaping our relationship with everything the Internet offers. The author laments that search results now seem far more commercially driven: each query inundates you with information yet leaves you disoriented, with none of the results feeling compelling.
Why it matters: Pointing to how we seek human recommendations when making purchasing decisions, the author found that sites like Reddit, with their curated communities, sometimes provided far superior results to the listicles and other pages that feature prominently on Google and are clearly trying to game the algorithm to dominate the first page of results (because, let’s be honest, how many of us have actually navigated to the second page, let alone the third or fourth?). So, when we have deals like Google reportedly paying Apple $15 billion a year to remain the default search engine on Apple’s offerings, it raises the question: what are we missing out on in terms of what the Internet has to offer when such a strong filter, driven by highly commercial interests, skews everything we see?
Between the lines: There are alternative search engines, like DuckDuckGo, that show similar results without the intrusive tracking that happens when you use Google properties, but few users are aware of those offerings. In particular, changing the default on devices and browsers requires a few steps that are often hidden behind arcane menus, and the less tech-savvy users among us never bother to go in and make those changes. If we’re talking about consuming a diversity of information, changing our search engine settings could be a meaningful first step in that direction.
Bellingcat's Eliot Higgins Explains Why Ukraine Is Winning the Information War
What happened: Bellingcat is a nonprofit, online collective dedicated to “a new field, one that connects journalism and rights advocacy and crime investigation,” founded by Higgins to gather together the open-source intelligence (OSINT) activities of volunteers and nonprofessionals online. Their work was pivotal in proving that a Russian surface-to-air missile was responsible for downing a Malaysia Airlines aircraft over Russian-controlled Ukraine in 2014; they used Google Maps, social media posts, and dashcam footage to piece together a compelling case. The OSINT community has grown in strength since Bellingcat was founded and today counts a few hundred people who contribute high-quality inputs that Bellingcat uses to drive accountability, both through quick, short-term journalistic pieces and longer-term analytical work.
Why it matters: Such OSINT work has been critical in disarming Russian disinformation tactics, which in previous conflicts proved quite effective. As Higgins points out, this is the first time they have seen their side winning. Part of that success comes from an increase in the number of participants, but it also stems from a change in the dynamics of how social media is used, the ease of capturing metadata from camera images, the ability to archive information before it gets erased online, and the way world leaders such as the Ukrainian President are using social media.
Between the lines: Creativity in using social media not just to spread disinformation but, in this case, to curb its spread, through systematic use of OSINT and coordination among many volunteers each bringing a different piece of the puzzle, has been so effective that Russian campaigns to spread problematic information (dis-, mis-, or malinformation) have mostly faltered. While those who still rely on state-sponsored TV for their information remain vulnerable, those who rely on social media platforms at least have the potential for reduced exposure to problematic information thanks to the heroic efforts of the OSINT community.
Europe Is in Danger of Using the Wrong Definition of AI
What happened: GDPR set a global standard for what privacy means. With the upcoming EU AI Act, the definition being used for AI might do the same, and the author of this article argues that it might well be the wrong one. Proposed amendments to the definition in the original version of the Act open a door to legally excluding several commonly used machine learning techniques from the remit of the Act, exempting them from more rigorous evaluation standards for public safety and welfare. Another problem with the new version of the definition is that developers and unscrupulous organizations could revert to “simpler” AI techniques or strictly rule-based systems that fall outside the remit of the Act and deliberately or inadvertently inflict harm on people who would have no recourse.
Why it matters: Regulations in the EU provide a single digital interface to a huge market of consumers. Getting the definition right for something like AI, which is becoming a fundamental part of many new technological solutions, is therefore essential, lest we set the wrong precedent. The current framing, language, and scope also ban the use of AI in only a very small set of areas (“identifying individuals in public spaces from their biometric data, creating social credit scores for governments, or producing toys that encourage dangerous behavior or self harm”), leaving many other areas that also require much more scrutiny, without which there is still potential for great harm; those cases are left to extant product safety and liability regulations.
Between the lines: The reason it is critical to get definitions right, and to make them future-proof, is that malicious actors in any ecosystem will try to game their way out of complying with regulations and onerous accounting and responsibility requirements. This ultimately harms people who may not be able to discern the subtle differences such definitional changes make: they believe the upcoming regulations will do them good, only to be stuck with something that consumed a lot of resources to pass through the legal system and ends up doing more harm than good. Mobilizing the political and rule-making machinery again for the same issue may be even more challenging if this doesn’t go right the first time.
📖 From our Living Dictionary:
“What does blockchain have to do with AI Ethics?”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
AI Experts Warn of Potential Cyberwar Facing Banking Sector
U.S. financial institutions’ machine-learning models are a potential avenue for attacks, experts said
Our founder, Abhishek Gupta, is featured in this piece by the Wall Street Journal talking about machine learning security and its implications for the financial services industry.
💡 In case you missed it:
Artificial intelligence has emerged from its most recent winter. Many technical researchers are now facing a moral dilemma as they watch their work find its way out of the lab and into our lives in ways they had not intended or imagined and, more importantly, in ways they find objectionable.
The atomic bomb is a classic example that many commentators on contemporary technologies refer to when discussing ethics and responsibility. But a more recent and relevant example that I would like to draw lessons from is the Internet: a foundational technology that has reached maturity and is fully embedded in society.
My focus is not on the specific social issues per se, e.g., net neutrality or universal access; rather, my goal is to provide a glimpse into some of the dynamics associated with the Internet’s transition from lab to market as experienced by one prominent member of the research community, Dr. David Clark, Senior Research Scientist at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).
Clark has been involved with the development of the Internet since the 1970s. He served as Chief Protocol Architect and chaired the Internet Activities Board throughout most of the 80s, and more recently worked on several NSF-sponsored projects on next generation Internet architecture. In his 2019 book, Designing an Internet, Clark looks at how multiple technical, economic, and social requirements shaped and continue to shape the character of the Internet.
In discussing his lifelong work, Clark makes an arresting statement: “The technologists are not in control of the future of technology.” In this interview, I explore the significance of those words and how they can inform today’s discussions on AI ethics and responsibility.
To delve deeper, read the full article here.
Take Action:
What can we do as part of the AI Ethics community to help those who are suffering in the current war in Ukraine? We’re looking for suggestions, for example: tips (actions that we can all take) to prevent the spread of misinformation, resources for better understanding the deployment of lethal autonomous weapons systems (LAWS) on the battlefield, etc.