AI Ethics Brief #88: GDPR & app tracking, FRT in India, ethics of BCI, conversational AI for social good, and more ...
Would you be willing to upload the likeness of your face to access your government tax account?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~23-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed into Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
🔬 Research summaries:
Before and after GDPR: tracking in mobile apps
Declaration on the ethics of brain-computer interfaces and augmented intelligence
Conversational AI Systems for Social Good: Opportunities and Challenges
The increasing footprint of facial recognition technology in Indian law enforcement – pitfalls and recommendations
📰 Article summaries:
IRS Will Require Facial Recognition Scans to Access Your Taxes
India’s tech sector has a caste problem
Apple AirTags - 'A perfect tool for stalking'
📖 Living Dictionary:
Open Source
🌐 From elsewhere on the web:
Seven AI ethics experts predict 2022’s opportunities and challenges for the field
💡 ICYMI
Hazard Contribution Modes of Machine Learning Components
But first, our call-to-action this week:
Keeping in line with our mission to democratize AI ethics literacy, we want to create new opportunities to feature writing from our community members!
If you would like to have your work featured on our website and included in this newsletter to reach thousands of technical and policy leaders in AI ethics, reach out to us!
We are currently looking for research summary writers and will open up more writing opportunities in the coming months for regular contributors.
🔬 Research summaries:
Before and after GDPR: tracking in mobile apps
The GDPR was introduced with great hopes in 2018, but has it actually changed the data practices of mobile apps? Our paper analyses tracking, a common and invasive data practice in mobile apps, and finds that change has been limited so far.
To delve deeper, read the full summary here.
Declaration on the ethics of brain-computer interfaces and augmented intelligence
Although brain-computer interface (BCI) technology has a promising future ahead of it, it is not free from risks and other ethical concerns. This paper offers suggestions on ways to develop human-centric and sustainable BCI that leads to overall progress.
To delve deeper, read the full summary here.
Conversational AI Systems for Social Good: Opportunities and Challenges
Conversational artificial intelligence, or ConvAI, has the potential to advance the United Nations’ Sustainable Development Goals (SDGs) given the technology’s current deployment across various industries and populations. This paper analyzes the challenges that existing and exploratory ConvAI systems may face while advancing social good initiatives.
To delve deeper, read the full summary here.
The increasing footprint of facial recognition technology in Indian law enforcement – pitfalls and recommendations
The Centre for Applied Law & Technology Research (ALTR), part of the Vidhi Centre for Legal Policy, recently published the third and final working paper in a three-part series. The objective of the series is to discuss how facial recognition technology (FRT) is being deployed at the state level by local law enforcement. India joins a growing number of countries aiming to integrate emerging technologies into state surveillance apparatuses, amid increasing concerns about the legal, ethical, and social ramifications.
To delve deeper, read the full summary here.
A message from our sponsor this week:
All datasets are biased.
This reduces your AI model's accuracy and exposes you to legal risk.
Fairgen's platform solves this by augmenting your dataset, rebalancing its classes, and removing discriminatory patterns.
Increase your revenues by improving model performance, avoid regulatory fines and become a pro-fairness company.
Want to be featured in next week's edition of the AI Ethics Brief? Reach out to masa@montrealethics.ai to learn about the available options!
📰 Article summaries:
IRS Will Require Facial Recognition Scans to Access Your Taxes
What happened: Starting in the summer of 2022, the IRS (the US tax revenue agency) will require biometric identification to access and use its online services; a username and password will no longer suffice. The IRS has hired a firm called ID.me to collect this biometric data (including a selfie) and run face matching against the user’s uploaded government identification document. The journalists covering this development tried to find alternative ways to access the website for those unable to provide the requisite biometric data, but were unsuccessful with both ID.me and the IRS: neither offered clearly documented, easy-to-follow steps for doing so.
Why it matters: For an essential service, especially one tied to an activity mandated by law, imposing barriers on people who are not highly tech-literate, or who face other impediments to providing biometrics, marks a troublesome trend. While the class of algorithms used by ID.me falls under face matching (one-to-one matching against a provided sample) rather than facial recognition (one-to-many searching against an entire database of faces), the risks of racial and gender bias in the underlying technology remain, despite ID.me’s claims of high accuracy rates.
Between the lines: There are also risks in storing vast amounts of immutable, lifelong biometric data with a third-party provider, even one on the US government’s vetted list. It creates a centralized repository that can become a lucrative target for malicious state and non-state cyber-adversaries. While the technology has been introduced (at least as explained by authorities) as a countermeasure against fraud, one must analyze how it might disproportionately impact those who are already marginalized.
India’s tech sector has a caste problem
What happened: Caste in India originates from socio-economic structures of yore in which caste identity was tied to profession. Work deemed less intellectual or less clean was relegated to “lower castes,” while the opposite end of the spectrum was accorded the moniker of “upper castes.” Unfortunately, that system persists in Indian society today, despite anti-discrimination laws and systems of affirmative action (called “reservations” in India) designed to uplift those relegated to these castes. The article details harrowing stories of people who face discrimination at work (with a specific focus on the supposedly meritocratic and liberal tech sector) because of their backgrounds, even when they have cleared similar hurdles (and often much more difficult ones) to study at prestigious universities and work at multinational corporations. Such practices have forced people to hide parts of their identities to continue to “fit in” with colleagues at places of study and work, and with the rest of society around them.
Why it matters: The rise of the tech sector in India was billed as a way of levelling the playing field by offering everyone an opportunity to ascend social and economic ladders based on merit. Yet familiar power structures have been replicated: companies recruit from elite universities and English-language programs, which excludes people from “lower castes” who may live far from these institutions and may not be fluent in English, yet are otherwise proficient in what the job requires. Moreover, the experiences documented in the article make it evident that interview processes and interviewers are heavily biased against people from such backgrounds once this fact is discovered, throwing up additional barriers and shunning them at the workplace through everyday actions. This engenders a sense of disenfranchisement and has caused many to leave the sector altogether, while others have chosen to conceal parts of their identities to cope with this systemic issue.
Between the lines: Similar to race and gender issues in recruitment and retention in the US technology sector, in India there is an additional layer of caste that hinders DEI (diversity, equity, and inclusion) efforts when they are merely imported from other parts of the world. Cultural adaptation is needed, along with unconscious bias training targeted at the specific ways these issues manifest in a given culture. This can be achieved by working with locals who have a deeper understanding of the historical and cultural makeup that gives these issues their specific forms, as is the case in India. If the technology sector is to be upheld as a meritocratic avenue for anyone to ascend the social and economic ladder, this issue must be addressed head-on.
Apple AirTags - 'A perfect tool for stalking'
What happened: AirTags are tracking devices in the shape of tiny 1.26-inch disks designed to help you find things you misplace easily, such as keys and other small items. They work via Bluetooth and integrate into Apple’s “Find My” suite of services. But recent reports in the US indicate that they are being used to stalk people: because they are so small, an AirTag can be dropped into a person’s bag or car, and that person’s movements can then be tracked by the malicious actor who owns it. Apple has tried to combat this by building in some security measures, such as having the AirTag beep at 60 dB when an iPhone detects that an unauthorized AirTag has been moving with its user for 8–24 hours. Apple has also created an Android app that provides similar functionality.
Why it matters: Yet there are gaps in how this protection is implemented, the first being the delay before the alert is triggered. In a stalking scenario, the first 8 hours may be all the stalker needs, after which they can unregister and deactivate the AirTag, never triggering the alert. Also, not everyone owns an iPhone, and the meagre number of downloads (~100k) of the Android app means a vast swathe of people remain vulnerable to such attacks. Finally, the alert sound itself can be muffled quite easily: as one interviewee notes, simply shrouding the AirTag tightly in one’s fist is enough to bypass this security measure. Law enforcement currently has limited ability to help those targeted by stalking attacks using this technology.
Between the lines: The interviewees in the article (though their accounts are anecdotal) make a strong case for Apple to stop selling these devices until it has figured out how to address the challenges that exist at the moment. The AirTags are a classic case of product-level rather than system-level thinking, which is essential as we deploy ever more connected and powerful technology around us. System-level thinking helps unearth second-order and unforeseen effects and nip some of these issues in the bud before they start to affect people’s lives.
📖 From our Living Dictionary:
“Open Source”
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
Seven AI ethics experts predict 2022’s opportunities and challenges for the field
From developing more human-centric AI to overcoming fragmented approaches to ethical AI development, here’s what experts told us.
Our founder, Abhishek Gupta, shares his view that the biggest advancement will be a formalization of bias audits and the biggest challenge will be reconciling different AI regulations across the world.
💡 In case you missed it:
Hazard Contribution Modes of Machine Learning Components
This paper provides a categorization framework for assessing the safety posture of a system that consists of embedded machine learning components. It additionally ties that in with a safety assurance reasoning scheme that helps to provide justifiable and demonstrable mechanisms for proving the safety of the system.
To delve deeper, read the full summary here.
Take Action:
Keeping in line with our mission to democratize AI ethics literacy, we want to create new opportunities to feature writing from our community members!
If you would like to have your work featured on our website and included in this newsletter to reach thousands of technical and policy leaders in AI ethics, reach out to us!
We are currently looking for research summary writers and will open up more writing opportunities in the coming months for regular contributors.