AI Ethics #2: Toxic feedback loops, AI regulations, ethics checklists, disinformation, and more ...
Our second weekly edition covering research and news in the world of AI Ethics
Welcome to the second edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing a summary with you and presenting our thoughts on how each links with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us at: https://montrealethics.ai/about/
If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below
It has been a rough week around the world with the spread of COVID-19, which the WHO has now declared a pandemic. We all have to do our part to help curb the spread of this disease and, most importantly, rely on authentic sources of news and information that can help us prepare as best we can. We thought it appropriate to talk about disinformation this week, along with a host of other areas of discussion. And now onto the content …
Research:
Let's look at some highlights of research papers that caught our attention at MAIEI:
What’s Next for AI Ethics, Policy, and Governance? A Global Overview by Daniel Schiff, Justin Biddle, Jason Borenstein and Kelly Laas
With a plethora of AI ethics documents now in circulation, the landscape has become very crowded, with a significant degree of overlap between them. The variations in content are slight, and it can be hard to determine which documents are most applicable to your scenario of AI use. The paper attempts to discern the underlying motivations for creating these documents, the composition of the people behind them, and the factors that might determine whether the documents achieve their goals. It provides a useful typology to partition the landscape so that consumers of the documents can make better choices about which ones align with their needs. The largest fraction of the documents has come from governmental bodies, followed by private organizations and then NGOs. While there is a certain degree of homogeneity in the documents from Western and Global North countries, documents coming from NGOs exhibited the highest diversity in terms of scope, detail, and the participatory process of creation. Most importantly, the issues tackled were a function of geographical origin, which left gaps in areas that might bring social benefits in the Global South. Motivations undergirding these pieces of work ranged from social and national signalling to attempts to pre-empt stringent regulations from coming into force. The eventual success of the documents ultimately rested on the specificity, enforceability, and monitoring mechanisms they proposed, and on how closely they were integrated with law- and policy-making agencies. While high-level principles can lead to more specific recommendations over time, documents with concrete technical and policy recommendations offered an immediate benefit in terms of actionability.
To delve deeper, read our full summary here.
The Toxic Potential of YouTube's Feedback Loop by Guillaume Chaslot
This was a talk at the CADE Tech Policy Workshop: New Challenges for Regulation in late 2019 by Guillaume Chaslot, who previously worked at YouTube and had first-hand experience with the design of the algorithms driving the platform and their unintended negative consequences. MAIEI included this as part of the required readings for the session we hosted on disinformation and how it spreads. The talk details Guillaume's experience working on algorithms that were optimized for watch time and how that focus on a single metric led to gaming of the platform, with extreme and polarizing content hijacking the recommendations provided to users. He also talks about how bringing up these issues internally didn't lead to much action because of a misalignment between business incentives and societal welfare. This led him to create AlgoTransparency, a tool that highlights filter bubbles and radical content on platforms like YouTube. He concludes the talk by sharing some potential ideas to combat harm that might arise from increasingly capable AI systems.
To delve deeper, read our full summary here.
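To make the feedback-loop mechanism from the talk concrete, here is a minimal toy simulation of our own (a hypothetical illustration, not Chaslot's analysis or YouTube's actual system). It assumes only one thing: that more extreme content holds attention slightly longer. A recommender that greedily optimizes average watch time then converges on serving the most extreme items in the catalog.

```python
import random

random.seed(0)

# Hypothetical catalog: each item has an "extremeness" score in [0, 1].
catalog = [{"id": i, "extremeness": random.random()} for i in range(1000)]

def watch_time(item):
    # Assumption for illustration only: more extreme content holds
    # attention slightly longer, so it yields higher watch time (plus noise).
    return 1.0 + 2.0 * item["extremeness"] + random.gauss(0, 0.3)

# id -> [total watch time, number of views]
stats = {item["id"]: [0.0, 0] for item in catalog}

def avg(i):
    total, views = stats[i]
    return total / views if views else 0.0

for step in range(20_000):
    if step < 5_000:
        # Exploration phase: show random items to gather data.
        item = random.choice(catalog)
    else:
        # Exploitation: greedily recommend whatever maximizes average
        # watch time -- the single metric the system is optimized for.
        item = catalog[max(stats, key=avg)]
    t = watch_time(item)
    stats[item["id"]][0] += t
    stats[item["id"]][1] += 1

winner = catalog[max(stats, key=avg)]
print(f"Extremeness of the most-recommended item: {winner['extremeness']:.2f}")
# Typically prints a value close to 1.0: the loop has locked onto the
# most polarizing content, even though nobody explicitly asked for it.
```

Even this crude sketch exhibits the dynamic the talk describes: no one chooses extremism, it simply wins under a single-metric objective.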
The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand by Daron Acemoglu and Pascual Restrepo
With terrifying headlines and figures perpetuated in popular media about how automation is going to take away all our jobs, and suggested alternatives ranging from submitting meekly to our robot overlords to everyone becoming an artist, the truth lies somewhere in the middle. This work from two economists grounds these discussions in terms of how to develop the right kind of AI systems, ones that can create social and economic benefits. It lays bare how to differentiate the automation wave that swept through during the industrial revolution from the one sweeping through now. The key difference is that simultaneous technological advances previously created adjacent tasks that were suited to humans, allowing both productivity gains and wage growth. Today's wave has been more squarely focused on labor replacement, undermining the potential societal gains to be had by creating the "right" kind of AI systems, systems that encourage the use of skills humans are better at and combine them effectively with skills that machines have mastered. An additional lens of market failures, applied from an economic theory standpoint, emphasizes how the incentives in the current ecosystem encourage the development of the "wrong" kind of AI and make it hard to switch to paradigms that would have greater benefits for society at large. While there are no definite answers on how we might resolve this discord, the paper does provide a framework within which to reasonably start addressing the automation and job-loss debate.
To delve deeper, read our full summary here.
Articles:
Let’s look at highlights of some recent articles that we found interesting at MAIEI:
Microsoft researchers create AI ethics checklist with ML practitioners from a dozen tech companies
The article summarizes recent work from several Microsoft researchers on making AI ethics checklists that are effective. One of the most common problems identified relates to the lack of practical applicability of AI ethics principles, which sound great and comprehensive in the abstract but do very little to help engineers and practitioners apply them in their day-to-day work. The work was done by interviewing several practitioners, and it advocates for a co-design process that brings in lessons on how to make these tools effective from other disciplines like healthcare and aviation. One thing emerging from the interviews is that engineers willing to raise concerns are few and far between, and there is a lack of top-down support for enforcing these practices across the company. Additionally, there can be social costs to bringing up issues, which discourages engineers from implementing such measures. Creating checklists that reduce friction and fit well into existing workflows will be key to their uptake; we sketch below what such a workflow-integrated checklist could look like. At MAIEI, we have also worked on a checklist for applying ethics, safety, and inclusivity principles to AI in mental health applications.
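As a thought experiment on what "fitting into existing workflows" might mean in practice, here is a minimal hypothetical sketch of our own (not the checklist from the Microsoft study): checklist items encoded as data and evaluated as a gate in a model-release pipeline, so a release is blocked until each item has a named owner's sign-off.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChecklistItem:
    question: str                        # an item drawn from a team's own checklist
    stage: str                           # lifecycle stage where the item applies
    signed_off_by: Optional[str] = None  # None until a named owner signs off

@dataclass
class EthicsChecklist:
    items: List[ChecklistItem] = field(default_factory=list)

    def outstanding(self, stage: str) -> List[ChecklistItem]:
        """Items for this stage that nobody has signed off on yet."""
        return [i for i in self.items
                if i.stage == stage and i.signed_off_by is None]

    def gate(self, stage: str) -> None:
        """Raise (failing the pipeline) if any item is still outstanding."""
        missing = self.outstanding(stage)
        if missing:
            raise RuntimeError(
                f"Release blocked at '{stage}': "
                + "; ".join(i.question for i in missing))

# Hypothetical usage inside a deployment script or CI job:
checklist = EthicsChecklist(items=[
    ChecklistItem("Has the training data been audited for bias?", "training"),
    ChecklistItem("Was performance measured across groups?", "evaluation"),
])
checklist.items[0].signed_off_by = "data-team"
checklist.gate("training")        # passes: the training item is signed off

try:
    checklist.gate("evaluation")  # blocked: evaluation item is outstanding
except RuntimeError as err:
    print(err)
```

The design intuition is friction reduction: the checklist lives where engineers already work (version control, CI) rather than in a separate document that is easy to ignore.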
Who’s Allowed to Track My Kids Online?
Under the Children's Online Privacy Protection Act, the FTC last year levied its largest fine yet, $170 million, on YouTube for failing to meet the requirements limiting the collection of personal data from children under the age of 13. Yet, as many advocates of youth privacy point out, the fines, though they appear large, don't do enough to deter such data collection. They advocate for a stronger version of the Act, along with more stringent enforcement from the FTC, which has been criticized for slow responses and a lack of sufficient resources. While the current Act requires parental consent for children under 13 to use a service that might collect personal data, no verification is performed on the self-declared age provided at sign-up, which weakens the efficacy of this requirement. Additionally, the sharp threshold of 13 years old immediately thrusts children into an adult world once they cross that age, and some people are advocating for a more graduated approach to the application of privacy laws.
Chinese citizens are racing against censors to preserve coronavirus memories on GitHub
Given that such a large part of the news cycle is dominated by the coronavirus, we tend to forget that there might be censors at work systematically suppressing information in an attempt to diminish the seriousness of the situation. Some people are calling GitHub the last piece of free land in China and have used it to document news stories and people's first-hand experiences fighting the virus before they are scrubbed from local platforms like WeChat and Weibo. They hope that such documentation efforts will not only shed light on the on-the-ground reality as it unfolds but also give everyone a voice and provide data that others could use to track the movement of the virus across the country. Such times of crisis bring out creativity, and this attempt highlights our ability as a species to thrive even in a severely hostile environment.
A Crisis of Ethics in Technology Innovation
Building on theory from management studies by Christensen et al., the authors of this article dive into how leaders of tech organizations, especially upstarts that rapidly disrupt incumbents, should approach the responsibilities that come with displacing existing paradigms of how an industry works. When different parts of the value chain in how a service is delivered become decoupled, the protections that applied to the entire pipeline often fall by the wayside because of distancing from the end user and a diffusion of responsibility across the multiple stakeholders in the value chain. While end-user-driven innovation will reinforce such models, regulations and protections are never at the top of those demands, and the burden falls on consumers once they realize that things can go wrong and negatively affect them. The authors advocate that company leaders proactively employ a systems-thinking approach: identify the different parts of the industry they are disrupting, how that might affect users, and what would happen if they became the dominant player, and then apply the lessons from such an exercise to pre-emptively design safeguards into the system to mitigate unintended consequences.
Why lifelong learning is the international passport to success
In a world where increasing automation of cognitive labor by AI-enabled systems will dramatically change the future of work, it is now more important than ever that we move away from a traditional mindset when it comes to education. Universities in the previous century rightly provided great value in preparing students for jobs; but jobs are bundles of tasks, and those tasks are changing rapidly, with some being automated away. We need to focus more on training students for the things that will take much longer to automate, for example working with other humans, creative and critical thinking, and driving innovation by aggregating insights and knowledge across a diversity of fields. Lifelong learning serves as a useful model that can impart some of these skills by breaking education into modules that can be taken on an "at will" basis, allowing people to continuously update their skills as the landscape changes. Students will go in and out of universities over many years, which will bring a diversity of experiences to the student body and encourage closer alignment with the skills actually needed in the market. While this will pose significant challenges to the university system, innovations like online learning and certifications based on periodic skill renewal, as in medicine, could overcome some of those challenges for the education ecosystem.
This Is The Year Of AI Regulations
Given the public awareness and momentum that have built up around ethics, safety, and inclusion issues in AI, we will certainly see a lot more concrete action in 2020. The article gives a few examples of Congressional hearings on these topics and advocates for the industry to come up with standards and definitions to aid the development of meaningful regulations. Currently there is no consensus on these definitions, which leads to approaches that address the issues at different levels of granularity and from different angles. This creates a patchwork of incoherent regulations across domains and geographies that will ultimately leave gaps in effectively mitigating potential harms from AI systems, harms that can span international borders. While there are efforts underway to map the many attempts at defining principle sets, we need a more coordinated approach to bring forth regulations that will ultimately protect consumer safety.
From the archives:
Here’s an article from our blogs that we think is worth another look:
Probing Networked Agency: Where is the Locus of Moral Responsibility?
This paper by Audrey Balogh from the Philosophy Department at McGill University problematizes the case for autonomous robots as loci of moral responsibility in circuits of networked agency, namely by troubling an analogy drawn between canine and machine in John P. Sullins' paper "When Is a Robot a Moral Agent?". It also explores the pragmatic implications of affording these machines a morally responsible designation in contexts of law and policy.
Guest contributions:
We invite researchers and practitioners working in different domains studying the impacts of AI-enabled systems to share their work with the larger AI ethics community. Here's this week's featured post:
In this week’s guest post, Jimmy Huang (Subject Matter Expert for Data Pooling at TickSmith) explains the origin story of data pooling in the banking sector, and its ethical implications in an increasingly AI-driven world. Read it here: https://montrealethics.ai/data-pooling-in-capital-markets-and-its-implications/
Complement this with our recent piece in the RE-WORK blog here: http://blog.re-work.co/ethics-in-ai-finance-industry/
If you’re working on something interesting and would like to share that with our community, please email us at support@montrealethics.ai
Events:
As part of our public competence-building efforts, we frequently host events spanning different subjects related to building responsible AI systems. You can see a complete list here: https://montrealethics.ai/meetup
Given the advice from various health agencies, we’re avoiding physical events to curb the spread of COVID-19. Stay tuned for updates!
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai
If someone has forwarded this to you and you like what you read, you can subscribe to receive this weekly newsletter by clicking below