AI Ethics Brief #98: ML and remote patient monitoring, self-driving cars' ethics, research clusters in AI safety, and more ...
Does the use of AI in healthcare provide ease or ethical dilemmas?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~29-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may end up on the Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
How Machine Learning Can Enhance Remote Patient Monitoring
Artificial Intelligence in healthcare: providing ease or ethical dilemmas?
The Ethical Considerations of Self-Driving Cars
🔬 Research summaries:
Exploring Clusters of Research in Three Areas of AI Safety
Study of Competition Issues in Data-Driven Markets in Canada
The 28 Computer Vision Datasets Used in Algorithmic Fairness Research
The Social Metaverse: Battle for Privacy
📰 Article summaries:
U.S. cities are backing off banning facial recognition as crime rises
Facebook Promised to Remove “Sensitive” Ads. Here’s What It Left Behind
Magic Numbers
📖 Living Dictionary:
What is the relevance of anthropomorphism to AI ethics?
🌐 From elsewhere on the web:
The EU AI Act: What you need to know
💡 ICYMI
UK’s roadmap to AI supremacy: Is the ‘AI War’ heating up?
But first, our call-to-action this week:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.
✍️ What we’re thinking:
How Machine Learning Can Enhance Remote Patient Monitoring
Health has been top of mind for many of us, especially during the last two years. It is no surprise that fears of catching COVID-19 during in-person medical visits have led to greater interest in using technology to receive health care, with almost three-quarters of Americans stating that the pandemic has made them more eager to try virtual care. Increased consumer and provider willingness to use technology has meant that telehealth use has grown 38x from the pre-COVID baseline, with around 3x the level of venture capital investment in digital health in 2020 compared to 2017.
Before exploring the role of AI in healthcare, it is important to distinguish between the terms ‘telehealth’ and ‘telemedicine.’ Telehealth is the larger umbrella term encompassing the broad scope of “electronic and telecommunications technologies and services used to provide care and services at-a-distance.” Telemedicine is a subset of telehealth that refers specifically to the practice of medicine using technology to deliver care at a distance. Essentially, telemedicine pertains solely to remote clinical services, while telehealth also includes remote non-clinical services, such as provider training, patient education, and administrative meetings.
To delve deeper, read the full article here.
Artificial Intelligence in healthcare: providing ease or ethical dilemmas?
Artificial intelligence (AI) has been introduced into a number of industries that are integral to society. Healthcare, in particular, has benefited as AI has been used to handle more mundane tasks, such as automating personal medical histories, and to assist with diagnosis through medical imaging. [1] These tasks, which previously diverted experts’ attention from medical emergencies, highlight the constructive ways in which AI is being used in our society. Although AI in healthcare presents certain benefits, the use of machine learning carries ethical implications, particularly regarding the lack of transparency and the erosion of the patient-doctor connection that AI can represent. As AI becomes more prevalent within healthcare, these ethical dilemmas must be addressed to ensure the integrity of the industry.
To delve deeper, read the full article here.
The Ethical Considerations of Self-Driving Cars
Self-driving cars are closer to going mainstream than ever before. As autonomous vehicle technology advances, serious ethical concerns are surfacing.
Who is responsible when a self-driving car gets into an accident? How should engineers program autonomous vehicles for accident situations? The answers to these questions will shape the future of self-driving vehicles and AI in general.
To delve deeper, read the full article here.
🔬 Research summaries:
Exploring Clusters of Research in Three Areas of AI Safety
Problems of AI safety are the subject of increasing interest for engineers and policymakers alike. This brief uses the CSET Map of Science to investigate how research into three areas of AI safety — robustness, interpretability and reward learning — is progressing. It identifies eight research clusters that contain a significant amount of research relating to these three areas and describes trends and key papers for each of them.
To delve deeper, read the full summary here.
Study of Competition Issues in Data-Driven Markets in Canada
A new independent working paper urges the Ministry of Innovation, Science, and Economic Development to address data-driven dominance. Ahead of the report’s release, Minister François-Philippe Champagne said that he would “carefully evaluate potential ways to improve [the] operation” of the Competition Act, including “adapting the law to today’s digital reality to better tackle emerging forms of harmful behaviour in the digital economy.”
To delve deeper, read the full summary here.
The 28 Computer Vision Datasets Used in Algorithmic Fairness Research
Access to well-documented, high-quality datasets is crucial to effective algorithmic fairness research, yet in many sub-fields of AI/ML, dataset documentation is insufficient and scattered. Fabris et al. survey and clearly document over 200 datasets employed in algorithmic fairness research from 2014 to mid-2021. Here, we highlight the 28 computer vision datasets from this survey.
To delve deeper, read the full summary here.
The Social Metaverse: Battle for Privacy
Our lives now extend to both the physical and digital worlds. While we have given up some of our privacy for security reasons before, the metaverse asks for an even greater sacrifice.
To delve deeper, read the full summary here.
📰 Article summaries:
U.S. cities are backing off banning facial recognition as crime rises
What happened: As crime rises in the United States, cities are revisiting the debate around facial recognition. Over the last couple of years, about two dozen U.S. state and local governments passed laws restricting facial recognition over concerns about racial bias. Now, however, the argument is being made that the technology is needed to solve crimes and hold individuals accountable.
Why it matters: Ongoing research by the federal government's National Institute of Standards and Technology (NIST) has shown significant industry-wide progress in accuracy. Shifting sentiment toward addressing concerns about facial recognition technology, while ensuring it is used in a bounded, accurate, and nondiscriminatory way, will benefit companies like Clearview AI, Idemia, and Motorola Solutions. With state and local governments spending $124 billion on policing annually, the stakes are high.
Between the lines: As is the case with most debates, each side certainly points out valid arguments. However, one that particularly stood out in this article was that "addressing discriminatory policing by double-checking the algorithm is a bit like trying to solve police brutality by checking the gun isn't racist: strictly speaking it's better than the alternative, but the real problem is the person holding it."
Facebook Promised to Remove “Sensitive” Ads. Here’s What It Left Behind
What happened: To what extent is ad targeting still available on Facebook’s platform? Facebook’s announcement last year that companies buying ads would no longer be able to target people based on interest categories like race, religion, politics, or sexual orientation seemed to be a step in the right direction. However, the company hasn’t explained how it determines which advertising options are “sensitive.”
Why it matters: Facebook’s statements on this issue maintain that the removal of sensitive targeting options is an ongoing process that requires “constant review.” However, Facebook has a track record of promising meaningful changes only to fall short of the pledge. Research has documented problems with advertising “proxies” for years, yet the company has failed to take a clear stance.
Between the lines: Facebook has made a series of design decisions, such as promoting ads into our newsfeeds, making content more shareable, and relying on algorithmic content moderation, that have directly impacted the circulation of information. In particular, advertising revenue now fuels many of the "free" services we use, which has allowed digital platforms to monetize both our attention and our data.
Magic Numbers
What happened: There has been a rise in spiritual and prediction-related content on TikTok that reframes how people perceive their relationship with the platform and reshapes their interactions with content creators. Creators put out content with messages claiming that their video was “meant to find” the viewer, drawing on concepts like kismet. This framing obscures the invasions of privacy and coercive design that make “For You” algorithmic feeds feel so incredibly personalized. It draws on an (unfortunately) rich history of voodoo science that confounds people about how such systems actually work, discouraging them from finding out by suggesting they should have faith in the benefits the system can bring rather than questioning the nature of the systems and the reality surrounding them.
Why it matters: The pandemic accelerated this trend. Content creators, rewarded by algorithmic systems that encourage clickbait and measure engagement through superficial digital interactions, continued to leverage feelings of lost agency and fatalism to reinforce the idea that platforms and algorithms know us better than we know ourselves. It doesn’t help that this feeds right into the narrative the companies behind these systems want to encourage: that they have built something value-neutral that doesn’t exploit dark design patterns and other subversive methods to keep users hooked on their services.
Between the lines: Technological fatalism is an attractive idea, especially when people feel a loss of control over their lives (as the pandemic has upended the “normal,” for example). Yet it is exactly at such moments that we must stop and question whether we are serving the platform or it is serving us. Those who are already marginalized are made empty promises of technology as a liberator that will solve the toughest challenges they face, yet they end up bearing the brunt of discrimination and a further stripping of their ability to self-determine.
📖 From our Living Dictionary:
What is the relevance of anthropomorphism to AI ethics?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
The EU AI Act: What you need to know (ZDNet)
Our work was featured in this article by ZDNet on the EU AI Act.
As noted by the Montreal AI Ethics Institute, the European Commission has chosen a broad and neutral definition of AI systems, designating them as software "that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".
💡 In case you missed it:
UK’s roadmap to AI supremacy: Is the ‘AI War’ heating up?
The UK’s first National Artificial Intelligence Strategy was presented to Parliament by Nadine Dorries, Secretary of State for Digital, Culture, Media and Sport, by Command of Her Majesty on September 22, 2021. The highlight of the strategy is the highly ambitious ten-year plan ‘to make Britain a global AI superpower’. Further, according to Dorries, ‘this strategy will signal to the world UK’s intention to build the most pro-innovation regulatory environment in the world’.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.