AI Ethics #27: The Guest Edition
We've rounded up 7 of our top guest posts and lined them up in one place for your reading pleasure!
Welcome to the 27th edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Usually we dive into research papers and articles, but this time we’ve got a special guest edition for you, rounding up our top 7 guest contributions from previous newsletters.
If someone has forwarded this to you and you'd like it delivered to your inbox every week, you can subscribe to this newsletter by clicking below:
Guest contributions:
AI Economist: Reinforcement Learning is the Future for Equitable Economic Policy by Richard Socher, Chief Scientist and EVP, Salesforce, and Head of Salesforce Research; and Stephan Zheng, Senior Research Scientist, Salesforce Research
Long before pandemic-related lockdowns, economic inequality was already one of the most significant issues affecting humanity. A report from the United Nations in January 2020 found that inequality is rising in most of the developed world. With so much to lose or gain, it’s no surprise that bias can influence policymaking, often to the detriment of those who need the most help. This underscores the potential that AI can have for good, and why it’s important to develop tools and solutions that are simulation- and data-driven to yield more equitable policies.
The new AI Economist model from Salesforce Research is designed to address this kind of inequality by identifying an optimal tax policy. Using a two-level reinforcement learning (RL) framework that trains both (1) AI agents and (2) tax policies, it simulates and helps identify dynamic tax policies that best accomplish a given objective. This RL framework is model-free in that it uses zero prior world knowledge or modeling assumptions, and learns from observable data alone.
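To make the two-level idea a little more concrete, here is a minimal, purely illustrative sketch in Python of how an outer tax-policy "planner" and inner worker agents might be trained against each other with simple bandit-style RL updates. Everything here (the labor and tax choices, the utility and welfare functions, the update rule) is a hypothetical simplification for intuition only, not the actual AI Economist code or API.

```python
import random

# Toy two-level setup: an outer "planner" learns a flat tax rate while
# inner worker agents learn how much labor to supply. All quantities and
# dynamics are hypothetical stand-ins, not the Salesforce implementation.

LABOR_CHOICES = [0, 1, 2, 3]              # units of labor an agent can supply
TAX_CHOICES = [0.0, 0.1, 0.2, 0.3, 0.4]   # flat tax rates the planner can pick
SKILLS = [1.0, 2.0, 4.0]                  # heterogeneous agent productivities


def agent_utility(income, labor):
    # Utility = after-tax-and-transfer income minus a disutility of labor.
    return income - 0.5 * labor ** 2


def social_welfare(incomes):
    # Planner objective: total income scaled by a crude equality measure,
    # loosely echoing an "equality times productivity" style objective.
    total = sum(incomes)
    gap = (max(incomes) - min(incomes)) / (total + 1e-8)
    return total * (1.0 - gap)


# Simple epsilon-greedy bandit "policies" for agents and planner.
agent_q = [{a: 0.0 for a in LABOR_CHOICES} for _ in SKILLS]
planner_q = {t: 0.0 for t in TAX_CHOICES}
EPS, LR = 0.1, 0.05

for step in range(5000):
    # Outer level: the planner picks a tax rate.
    tax = random.choice(TAX_CHOICES) if random.random() < EPS else max(planner_q, key=planner_q.get)

    # Inner level: each agent picks its labor given its learned values.
    labors, incomes = [], []
    for i, skill in enumerate(SKILLS):
        q = agent_q[i]
        labor = random.choice(LABOR_CHOICES) if random.random() < EPS else max(q, key=q.get)
        labors.append(labor)
        incomes.append(skill * labor * (1.0 - tax))

    # Redistribute tax revenue equally, then update both levels.
    revenue = sum(s * l * tax for s, l in zip(SKILLS, labors))
    incomes = [inc + revenue / len(SKILLS) for inc in incomes]
    for i, (labor, inc) in enumerate(zip(labors, incomes)):
        reward = agent_utility(inc, labor)
        agent_q[i][labor] += LR * (reward - agent_q[i][labor])
    planner_q[tax] += LR * (social_welfare(incomes) - planner_q[tax])

print("learned tax rate:", max(planner_q, key=planner_q.get))
```

The point of the sketch is only the structure: the agents adapt to whatever tax policy the planner sets, and the planner in turn adapts to how the agents respond, with no hand-built economic model in between.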
Why was your job application rejected: Bias in Recruitment Algorithms? (Part 1) by Merve Hickok (SHRM-SCP), founder of AIethicist.org
Humans are biased. The algorithms they develop and the data they use can be too, but what does that mean for you as a job applicant coming out of school, looking to move up to the next step on the career ladder, or considering a change of role or industry?
In this two-part article, we walk through each stage of recruitment (targeting, sourcing/matching, screening, assessment, and social background checks) and explore how some of the AI-powered commercial software used in each stage can lead to unfair and biased decisions.
Why was your job application rejected: Bias in Recruitment Algorithms? (Part 2) by Merve Hickok (SHRM-SCP), founder of AIethicist.org
So let’s assume your application was one of the ones ranked highly by the matching and sourcing platform, and the recruiter clicked your name to move you into the next stage, where you are screened against the company’s preferred criteria. Whether through hard-coded questions and filters built into the system or machine learning algorithms that make decisions, the screening process helps reduce the number of applications as it goes through your resume and picks up skills and information (degree, GPA, years of experience, fluency in spoken or technical languages, etc.).
Whatever the software was able to read (or parse) from your CV is then matched against the desired data points for the specific role. The candidates with matching points may then be ranked again according to the degree or percentage of the match. However, the bigger bias issues at this stage have to do with the data used to build the algorithm and the kind of model making the predictions.
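As a rough illustration of the match-and-rank logic described above, a screening step might look something like the following Python sketch. The role criteria, candidate fields, and scoring rule are hypothetical simplifications, not any particular vendor's software.

```python
# Hypothetical screening step: parsed resume data points are matched
# against a role's desired criteria and candidates are ranked by their
# percentage match. Real commercial tools are far more complex (and opaque).

ROLE_CRITERIA = {
    "degree": "computer science",
    "min_gpa": 3.0,
    "min_years_experience": 2,
    "languages": {"python", "sql"},
}


def match_score(candidate):
    # Each criterion contributes equally; anything the parser failed to
    # extract simply counts as a non-match.
    checks = [
        candidate.get("degree", "").lower() == ROLE_CRITERIA["degree"],
        candidate.get("gpa", 0) >= ROLE_CRITERIA["min_gpa"],
        candidate.get("years_experience", 0) >= ROLE_CRITERIA["min_years_experience"],
        ROLE_CRITERIA["languages"] <= set(candidate.get("languages", [])),
    ]
    return 100 * sum(checks) / len(checks)


candidates = [
    {"name": "A", "degree": "Computer Science", "gpa": 3.4,
     "years_experience": 3, "languages": ["python", "sql"]},
    {"name": "B", "degree": "Physics", "gpa": 3.8,
     "years_experience": 1, "languages": ["python"]},
]

# Rank candidates by percentage match, as a screening tool might.
for c in sorted(candidates, key=match_score, reverse=True):
    print(c["name"], f"{match_score(c):.0f}% match")
```

Note how, in this toy version, anything the parser fails to read from your CV simply counts against you, which hints at how such systems can disadvantage applicants before any human ever looks at the file.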
Is the Human Being Lost in the Hiring Process? by Connor Wright, research intern at FairlyAI, a software-as-a-service AI audit platform that provides AI quality assurance for automated decision-making systems, with a focus on AI equality.
The hiring process is becoming ever more automated, with companies implementing algorithms to conduct keyword searches as well as video analysis. So, where is humanity’s place in this ever-more automated system? This article answers the question by drawing on both federal and state efforts to preserve that place, with individual city initiatives also featuring prominently. It then considers how businesses can navigate this field and ensure they too maintain humanity’s place in the process. Despite control being ceded ever more to automation, state and federal efforts have ensured that some of it has gone back to the candidates involved.
Can We Teach AI Robots How to Be Human? by Jen Brige, a blogger who believes in our ability as a species to create a better future through advancements in technology
Artificial intelligence and robots have gotten steadily more advanced in recent years. It’s been a long road to that point, but as Wired’s history of robotics puts it, the technology “seems to be reaching an inflection point” at which processing power and AI can produce truly smart machines. This is something most people who are interested in the topic have come to understand. What comes next though might be the question of how human we can make modern robots — and whether we really need or want to.
Algorithms Deciding the Future of Legal Decisions by Brooke Criswell, who is pursuing a PhD in media psychology
Artificial intelligence (AI) is everywhere and in every industry. Technological advances can enhance people’s everyday lives and produce some amazing outcomes at rapid speed. However, AI also has the potential to be biased and harm individuals, depending on how the algorithms are used and designed. Many industries, including the judicial system, are now incorporating AI into their decision making. The claim is that using machines takes the biases that humans have out of the equation, so the decisions must be objective. However, it has been shown time and time again that this is not true (O’Neil, 2017). This paper explores how artificial intelligence is being used in the courtroom to predict criminal behavior, determine the length of sentences, and assess who is likely to reoffend. Data scientists are also being hired within the judicial system to manage these machines; however, media psychologists also need to be involved in the process.
Why We Need to Audit Government AI by Alayna Kennedy, Public Sector Consultant and AI Ethics Researcher at IBM
Artificial Intelligence (AI) technology has exploded in popularity over the last 10 years, with each wave of technical breakthroughs ushering in more and more speculation about the potential impacts of AI on our society, businesses, and governments. First, the Big Data revolution promised to forever change the way we understood analytics; then deep learning promised human-level AI performance; and today AI offers huge business returns to investors. AI has long been a buzzword in businesses across the world, but for many government agencies and larger organizations, earlier applications of commercial AI proved to be overhyped and underwhelming. Only now are large-scale organizations, including governments, beginning to implement AI technology at scale, as the technology has moved from the research lab to the office.
If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community. Just send your pitch to support@montrealethics.ai; you can pitch us an idea before you write, or send us a completed draft.
Signing off for this week; we look forward to seeing you again next week! If you enjoyed this and know someone else who could benefit from this newsletter, please share it with them!
If you have feedback for this newsletter or think there’s something interesting we missed, email us at support@montrealethics.ai