AI Ethics Brief #63: AI ethics team at your organization, encoded values in ML, sandbox for AI regulation, and more ...
Take a look at our top 10 takeaways on ethics in conversational AI
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is an ~11-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
How to build an AI ethics team at your organization?
🔬 Research summaries:
The Values Encoded in Machine Learning Research
📅 Event summaries:
Top 10 Takeaways from our Conversation with Salesforce about Conversational AI
📰 Article summaries:
DeepMind AGI paper adds urgency to ethical AI
Should Families’ Surveillance Cameras Be Allowed in Nursing Homes?
Judge Throws Out 2 Antitrust Cases Against Facebook
To regulate AI, try playing in a sandbox
But first, our call-to-action this week:
State of AI Ethics Report - Volume 4
In anticipation of the release of the next State of AI Ethics Report at the end of this month, we invite you to take a look at the previous editions here and the most recent edition here, which opens with AI and the Face: A Historian’s View, a long-form piece by Edward Higgs (Professor of History at the University of Essex) about the unscientific history of facial analysis, and how AI might be repeating some of those mistakes at scale.
To delve deeper, read the full report here.
✍️ What we’re thinking:
From the Founder’s Desk:
How to build an AI ethics team at your organization?
So you're working on AI systems and are interested in Responsible AI? Have you run into challenges in making this a reality? Many articles talk about moving from principles to practice but fall flat when you actually try to implement their advice. So what's missing? Here are some ideas that I think will help you take the first step toward making Responsible AI a reality at your organization.
To delve deeper, read the full article here.
🔬 Research summaries:
The Values Encoded in Machine Learning Research
Machine learning is often portrayed as a value-neutral endeavor; even when that is not the exact position taken, it is implicit in how the research is carried out and how the results are communicated. This paper undertakes a qualitative analysis of the top 100 most cited papers from NeurIPS and ICML to uncover some of the most prominent values these papers espouse and how they shape the path forward.
To delve deeper, read the full summary here.
📰 Article summaries:
DeepMind AGI paper adds urgency to ethical AI
What happened: Reinforcement learning (RL) is discussed much less often than other forms of machine learning, and when it is, the conversation usually centers on its achievements in beating humans at games like Go. Because RL takes a different approach, learning through a reward feedback loop with the environment, it could be a pathway to achieving artificial general intelligence (AGI) on an accelerated timeline. The recent work from DeepMind has made some researchers revise their estimates of when AGI might become a reality, if at all.
Why it matters: The most frequently discussed ethical aspects in the context of RL include value alignment, reward hacking, safe exploration, and avoiding adverse side effects. In the current ecosystem of AI ethics research, these are severely under-discussed aspects, with most of the focus on issues like fairness and privacy.
Between the lines: Deployed ML systems will be a mixture of different approaches, and keeping an eye on developments like these and the implications they will have on ethics, safety, and inclusion is an integral part of working in the field. We need to broaden the scope of the discussion of concerns as they arise and relate to different ML methodologies so that our proposed approaches don’t ignore essential facets of deployed ML systems.
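For readers less familiar with RL, here is a minimal, hypothetical sketch in Python of the reward feedback loop described above: an agent acts, the environment returns a reward, and the agent updates its estimates accordingly. The two-armed bandit environment and epsilon-greedy agent are illustrative stand-ins, not anything from the DeepMind paper.

```python
import random

class TwoArmedBandit:
    """Toy environment: two actions with different average payoffs."""
    def step(self, action: int) -> float:
        # Action 1 pays off more on average; the agent has to discover this.
        return random.gauss(1.0 if action == 1 else 0.2, 0.5)

def run(episodes: int = 1000, epsilon: float = 0.1) -> list:
    env = TwoArmedBandit()
    values = [0.0, 0.0]  # running estimates of each action's average reward
    counts = [0, 0]
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: values[a])
        reward = env.step(action)  # the feedback loop: act, observe a reward
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        values[action] += (reward - values[action]) / counts[action]
    return values

if __name__ == "__main__":
    print(run())  # the estimate for action 1 should settle near 1.0
```

Even this toy loop exhibits the exploration-versus-exploitation tension from which concerns like safe exploration and reward hacking arise.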
Should Families’ Surveillance Cameras Be Allowed in Nursing Homes?
What happened: A surveillance camera installed in a nursing home captured a resident’s death in gruesome detail, with the victim crying out for help while the nursing staff idled and, in some cases, laughed at the patient. The footage sparked a massive debate about whether mounting such cameras in nursing homes should be legal. Some argue there are privacy concerns and that the cameras showcase distrust of nursing home staff, while others argue they are a way to hold staff accountable. As is usually the case with technological solutions to sociological problems, such a solution fails to address the underlying issues of under-compensated and overworked nursing staff, among other problems with the healthcare system.
Why it matters: Particularly as we start to automate the processing of video captured by these cameras, the issues raised in this case will become even more pertinent to the overall discussion of surveillance. As the case made its way through the legal process, the courts settled on permitting visible cameras, so that the people being surveilled are aware of a camera’s presence, while prohibiting the use of hidden cameras.
Between the lines: We are seeing technology and society collide in ways that we couldn’t have anticipated. Cameras (which have become cheaper to deploy), coupled with AI that automatically processes their video feeds, can unburden staff from having to constantly monitor patients, for example by automatically detecting whether a patient has fallen. On the other hand, this begins to normalize automated surveillance as an accepted part of our society, which will have much more profound effects in the long term.
Judge Throws Out 2 Antitrust Cases Against Facebook
What happened: In a monumental setback to efforts to rein in Big Tech, a judge dismissed antitrust cases against Facebook, finding that more evidence was needed and that the regulators had filed their lawsuits too late, given that the acquisitions in question (WhatsApp and Instagram) happened 6 and 8 years ago, respectively. While the regulators have 30 days to file again, they face a stiff challenge, as the courts have narrowed their interpretation of antitrust law over the last few years. The court also took the position that if a monopoly emerged from Facebook’s acquisitions, regulators should have acted years ago rather than now.
Why it matters: As principles for technology use proliferate, this is a reminder that what is enshrined in regulation and law ultimately holds a significant amount of sway over whether we can generate the socially friendly outcomes we desire. The call from senators and lawmakers to broaden the scope of antitrust regulation is a step in the right direction, especially as it applies to Internet companies, which may not bear the hallmarks of traditional monopolies: on pricing, for example, many of these services are offered to users for free.
Between the lines: The call for breaking Instagram and WhatsApp out of Facebook addresses only a tiny part of a larger problem. Given the network effects and the structure of today’s social media platforms, such monopolies are bound to arise again and again. Breakups would help stem the tide with the current crop of companies, but they do little to change what will inevitably happen again in the future. What is perhaps needed is a more systematic overhaul of the regulatory ecosystem.
To regulate AI, try playing in a sandbox
What happened: Sandboxes have been proposed as part of EU regulation to help AI systems comply with requirements from the GDPR, among others. The article details some of the engagements with companies that Norway has embarked on to work through these challenges. In particular, complying with requirements like privacy-by-design, and reporting on that compliance in an understandable manner, requires cooperation between legal and technology stakeholders; the initiative in Norway is helping to facilitate that. Similar efforts are underway in the US, with Broussard from NYU and O’Neil from ORCAA attempting to create sandboxes that can help unearth concrete practices to address regulatory needs.
Why it matters: This movement toward figuring out tangible solutions through trial and error is a welcome change from the incessant merry-go-round we have today, in which problems and regulations are discussed in the abstract and untested solutions are put forward without empirical evidence of how they might work and in which contexts.
Between the lines: It will be interesting to see the lessons learned from both the Norway experiment and some of the ones being run in the US. The regulatory ecosystems of the two regions are vastly different, and I foresee that the approaches emerging from their sandbox efforts will be quite different as well. But there should be some common threads from both experiments that help practitioners put regulatory requirements into practice rather than running in circles trying to make their systems compliant.
📅 Event summaries:
Top 10 Takeaways from our Conversation with Salesforce about Conversational AI
Would you relate to a chatbot or voice assistant more if it were female? Would such conversational AI help you feel less lonely? Our summary of our event with Salesforce sets out to discuss just that.
To delve deeper, read the full summary here.
From our Living Dictionary:
‘Ethics washing’
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
From elsewhere on the web:
AI Has An Emission Problem: Is It Fixable?
This article discusses the substantial emissions associated with the use of large-scale AI models. It features the SECure framework, authored by our founder Abhishek Gupta and collaborators, which offers a practical way to address some of these emerging challenges.
To delve deeper, read the full article here.
In case you missed it:
To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide?
Understanding how algorithms shape our experiences is arguably a prerequisite for an effective digital life. In this paper, Gran, Booth, and Bucher determine whether different degrees of algorithm awareness among internet users in Norway correspond to “a new reinforced digital divide.”
To delve deeper, read the full report here.
Take Action:
State of AI Ethics Report - Volume 4
In anticipation of the release of the next State of AI Ethics Report at the end of this month, we invite you to take a look at the previous editions here and the most recent edition here, which opens with AI and the Face: A Historian’s View, a long-form piece by Edward Higgs (Professor of History at the University of Essex) about the unscientific history of facial analysis, and how AI might be repeating some of those mistakes at scale.
To delve deeper, read the full report here.