AI Ethics Brief #99: Analog computers, evolution of war, reward reports, government AI readiness, and more ...
What problems do install-incentivizing apps create on the Google Play Store?
Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.
⏰ This week’s Brief is a ~25-minute read.
Support our work through Substack
💖 To keep our content free for everyone, we ask those who can to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.
*NOTE: When you hit the subscribe button, you may be taken to a Substack page where, if you’re not already signed in to Substack, you’ll be asked to enter your email address again. Please do so, and you’ll be directed to the page where you can purchase the subscription.
This week’s overview:
✍️ What we’re thinking:
The Next Frontier of AI: Lower Emission Processing Using Analog Computers
The Evolution of War: How AI has Changed Military Weaponry and Technology
🔬 Research summaries:
Beyond the Frontier: Fairness Without Accuracy Loss
Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems
Government AI Readiness 2021 Index
Labor and Fraud on the Google Play Store: The Case of Install-Incentivizing Apps
📰 Article summaries:
The AI Ethics Boom: 150 Ethical AI Startups and Industry Trends
Where anonymity on Twitter is a matter of life or death
In Human-Centered AI, the Boundaries Between UX and Software Roles Are Evolving
📖 Living Dictionary:
What is an example of anthropomorphism?
🌐 From elsewhere on the web:
AI for Science and Engineering - Canadian Council of Academies (CCA)
💡 ICYMI
Foundations for the future: institution building for the purpose of artificial intelligence governance
But first, our call-to-action this week:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.
✍️ What we’re thinking:
The Next Frontier of AI: Lower Emission Processing Using Analog Computers
Deep learning programs running on digital computers may undergo profound disruption in the coming years. Analog computers, once unwieldy behemoths covered in glass dials and criss-crossing wires, could be making a comeback as a compute- and energy-efficient option.
Analog computers began to fall out of fashion from the 1970s onwards with the rise of general-purpose digital technology that allowed for flexible programming. Now, however, as AI algorithms push digital computers to their physical limits and increasingly intensive computations are required to produce results, there is a buzz in the tech world about returning to analog.
To delve deeper, read the full article here.
The Evolution of War: How AI has Changed Military Weaponry and Technology
War has historically been thought of as a form of direct armed conflict characterized by violence and physical aggression. Today, however, the introduction of new forms of military weaponry and technology has morphed the traditional definition of war to include more passive approaches. Artificial Intelligence (AI) has been incorporated into warfare through the application of lethal autonomous systems, small arms and light weapons, and three-dimensional (3D) printing. These current uses of AI in military weaponry facilitate conversations about the ethical dimensions of the role of AI in war. The militarization and securitization of AI highlight the ever-changing nature of, and the fears associated with, technological advancements.
To delve deeper, read the full article here.
🔬 Research summaries:
Beyond the Frontier: Fairness Without Accuracy Loss
In this paper, the authors propose a new framework for “bias bounties”, a method for auditing automated systems where system users are incentivized to identify and report instances of unfairness. The authors’ proposed framework takes bias bounties a step further, so that feedback not only highlights possible discrimination but also automatically improves the model.
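One way to picture that automatic improvement step, as we read it, is a pointer decision list: when a bounty hunter submits a group and a model that outperforms the deployed model on that group, the deployed model is wrapped so that members of the group are routed to the submitted model. The sketch below is only illustrative, and the function names (accuracy_on, accept_bounty, patch_model) are ours rather than the authors’.

```python
# Illustrative sketch of a bias-bounty update (not the authors' exact algorithm).
# A submitted (group, model) pair is accepted only if the proposed model is more
# accurate on that group than the currently deployed model; accepted pairs are
# layered onto a decision list that routes group members to the specialist model.

def accuracy_on(model, X, y, group):
    """Accuracy of `model` restricted to examples where `group(x)` is True."""
    pairs = [(x, label) for x, label in zip(X, y) if group(x)]
    if not pairs:
        return 0.0
    return sum(model(x) == label for x, label in pairs) / len(pairs)

def accept_bounty(current_model, group, proposed_model, X_val, y_val, margin=0.0):
    """Accept the submission if it beats the current model on the flagged group."""
    return (accuracy_on(proposed_model, X_val, y_val, group)
            > accuracy_on(current_model, X_val, y_val, group) + margin)

def patch_model(current_model, group, proposed_model):
    """Return an updated predictor: use `proposed_model` on the group, else fall back."""
    def patched(x):
        return proposed_model(x) if group(x) else current_model(x)
    return patched
```

By construction, each accepted patch improves accuracy on the flagged group while leaving predictions outside that group untouched, which is how this style of update sidesteps the usual trade-off between fairness fixes and overall accuracy.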
To delve deeper, read the full summary here.
Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems
This white paper introduces “Reward Reports,” a new form of documentation that could help improve the ability to analyze and monitor AI-based systems over time. Reward Reports could be particularly useful for trade and commerce regulators; standards-setting agencies and departments; and civil society organizations that seek to evaluate unanticipated effects of AI systems.
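To make the idea concrete, one can imagine a Reward Report as a structured, versioned document recording what a reinforcement learning system is optimizing for and how that choice is revisited after deployment. The field names in the sketch below are hypothetical and do not reproduce the white paper’s template.

```python
# Hypothetical, machine-readable sketch of a "reward report"; the fields are
# illustrative and are not the template proposed in the white paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RewardRevision:
    date: str               # when this revision was recorded
    reward_signal: str      # what the system is optimizing at this point in time
    rationale: str          # why the objective was chosen or changed
    known_risks: List[str]  # anticipated side effects of optimizing this signal

@dataclass
class RewardReport:
    system_name: str
    deployment_domain: str            # e.g., content recommendation, traffic control
    responsible_parties: List[str]    # who maintains and audits the system
    monitoring_plan: str              # how post-deployment behavior is tracked
    changelog: List[RewardRevision] = field(default_factory=list)

    def add_revision(self, revision: RewardRevision) -> None:
        """Append a revision so the document tracks the system over time."""
        self.changelog.append(revision)
```

In this sketch, the changelog is the part that does the work for regulators: it turns static documentation into a living record that can be audited as the system and its objectives drift.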
To delve deeper, read the full summary here.
Government AI Readiness 2021 Index
Artificial Intelligence (AI) represents an important acceleration in the digitalisation of our professional, personal, and economic lives. We can collect, store, and marshal data on previously unimaginable scales, with applications spanning sectors from healthcare to transport to energy. As more and more public services become digitalised, governments are turning to AI to improve their citizens’ experience and the functioning of these services. To do this effectively, there are myriad factors governments must consider. The Government AI Readiness Index seeks to understand how this potential AI uptake is playing out across the globe.
Since its inception in 2017, we have seen huge growth in the number of countries adopting AI strategies and putting digital transformation at the forefront of their policy aims. Our primary research question for the 2021 Index remains the same: how ready is a given government to implement AI in the delivery of public services to its citizens?
To delve deeper, read the full summary here.
Labor and Fraud on the Google Play Store: The Case of Install-Incentivizing Apps
Install-incentivizing apps violate Google Play Store’s policy by providing monetary incentives to users for inflating the installs of other apps on the platform. But do these incentives actually work? Who do these apps benefit? Our work uncovers how developers incorporate dark patterns in these apps to extract profits and build a market at the expense of users.
To delve deeper, read the full summary here.
📰 Article summaries:
The AI Ethics Boom: 150 Ethical AI Startups and Industry Trends
What happened: The number of ethical AI companies, defined in this article as companies that either provide tools to make existing AI systems ethical or build products that remediate elements of bias or unfairness, has grown significantly over the past few years. The five key subcategories of ethical AI companies discussed in this article are: (1) Data for AI; (2) MLOps, Monitoring, and Observability; (3) AI Audits, Governance, Risk, and Compliance; (4) Targeted AI Solutions and Technologies; and (5) Open-Sourced Solutions.
Why it matters: The Ethical AI Database project (EAIDB), developed in partnership with the Ethical AI Governance Group, aims to shift the conversation from awareness of the challenges to education about potential solutions by highlighting a nascent ecosystem of ethical AI startups that prioritize ethical best practices, transparency, and accountability. The subcategories mentioned in this article are particularly important because they point to trends that will influence investors seeking to assess AI risk and regulators attempting to concretize policy around ethical AI practices.
Between the lines: It will be interesting to track the success of incumbents in the future. Will they incorporate less effective versions of bias-related technology in an effort to keep their platforms viable? Will they lack the expertise and first-mover advantage of these ethical AI startups? On the other hand, they will most likely have a well-established client base to draw from, which may prove to be useful if they decide to tap into new markets.
Where anonymity on Twitter is a matter of life or death
What happened: With the imminent purchase of Twitter by Elon Musk, one of the things he has pushed for is “authentication” of “real users” to root out the problem of bots and spam accounts on the platform. The dark side of this approach to problematic users and content is that it endangers those who rely on “alt” identities to express themselves and build communities online. This is especially true under regimes where those users might face oppression, for example, LGBTQIA+ users across the Gulf States.
Why it matters: Collecting more demographic data on users, especially sensitive attributes like sexual orientation that can be grounds for persecution, increases risk even when companies promise to keep the data safe. The rationale, as articulated by Musk, is that the platform should carry out such “authentication” in line with local laws. But this ignores the fact that local laws in some places are themselves oppressive towards certain groups. The problem also extends to political oppression, for example, the clampdown on dissidents in Myanmar. Similar issues have arisen in Colombia, where linking online political activity to real-world identities harms not only the individual but also their loved ones.
Between the lines: Technology ethicists have pointed out that authenticating identities to weed out bots and spam accounts isn’t necessarily the best approach; instead, platforms should focus on behavioral signals. This is easier said than done, since social media is an adversarial domain where malicious actors constantly adapt their behavior in response to measures taken by platforms. The point that jumps out here is that any policy for managing ethical issues has second-order effects, and without deep consideration of those, platforms can shift harms from one set of users to another.
In Human-Centered AI, the Boundaries Between UX and Software Roles Are Evolving
What happened: Traditionally, there have been clear boundaries between the various technical stakeholders involved in the design, development, and deployment of software solutions, including AI systems. In research from Stanford, the authors point out that formalized design specifications handed between UX designers and engineers are on their way out as AI changes what interfaces are and how they affect users. They propose deferring design specifications to the last stage so that novel approaches, such as involving engineers in user focus groups or exposing “knobs” in the technical design of AI systems to UX designers (illustrated in the sketch after this summary), can be used to better address emergent situations. For example, using facial recognition to unlock a phone does away with passphrases and patterns, but it brings design considerations, such as biases in training datasets, that can undermine end goals which would have been pre-specified in the design documents under the traditional development model.
Why it matters: What the authors call “leaky abstractions” offer a way to share these novel considerations across disciplinary boundaries. Each group’s “jigsaw puzzle pieces” can then be structured and adapted over multiple iterations, with design specifications finalized towards the end of the project development lifecycle rather than fixed at the outset.
Between the lines: As more complex products and services are designed, especially ones that incorporate learning components whose behavior evolves over time, in this case an AI subsystem, adapting traditional practices to lean more heavily on cross-disciplinary feedback will be essential to achieving the intended design goals. Without that flexibility, existing development approaches, built around the more deterministic behavior of traditional software, risk producing suboptimal outcomes because they cannot account for these novel needs.
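As a thought experiment of what exposing such “knobs” might look like in practice, imagine an engineering team surfacing a few tunable parameters from a face-unlock pipeline so that UX designers can adjust them during iteration. The parameters, defaults, and function names below are hypothetical and are not drawn from the Stanford work.

```python
# Hypothetical sketch of engineer-exposed "knobs" for a face-unlock feature;
# the parameters and defaults are illustrative, not taken from the cited research.
from dataclasses import dataclass

@dataclass
class FaceUnlockKnobs:
    match_threshold: float = 0.85           # confidence required to unlock
    max_attempts_before_fallback: int = 3   # when to fall back to a passcode
    show_confidence_feedback: bool = False  # whether the UI explains a failed match

def decide_unlock(match_score: float, attempt: int, knobs: FaceUnlockKnobs) -> str:
    """Return the UX outcome of a single unlock attempt given the current knobs."""
    if match_score >= knobs.match_threshold:
        return "unlock"
    if attempt >= knobs.max_attempts_before_fallback:
        return "fallback_to_passcode"
    return "retry_with_feedback" if knobs.show_confidence_feedback else "retry"
```

With knobs like these in hand, designers can explore, for instance, how a lower threshold trades convenience against false accepts for under-represented users, without waiting on a fixed specification from the engineering team.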
📖 From our Living Dictionary:
What is an example of anthropomorphism?
👇 Learn more about why it matters in AI Ethics via our Living Dictionary.
🌐 From elsewhere on the web:
AI for Science and Engineering - Canadian Council of Academies (CCA)
Our founder, Abhishek Gupta, contributed as a technical expert for this report from the CCA on AI for Science and Engineering.
Summary:
Advances in artificial intelligence (AI) have the potential to transform the nature of scientific inquiry and lead to significant innovations in engineering. To date, AI has primarily been used alongside existing design and discovery practices to help researchers analyze or interpret data, e.g., predict the structure of proteins, track insect biodiversity, etc. However, AI will play a much bigger role in design and discovery in the near future ― developing novel scientific hypotheses and experiments and creating new engineering design processes ― all with minimal human involvement.
While AI has the potential to spur innovation and further scientific understanding beyond the limits of the human mind and abilities, it could also exacerbate inequities, perpetuate human biases, and even create new ones. Maximizing the benefits of AI and avoiding its pitfalls will require addressing real and imminent challenges.
Leaps and Boundaries explores the opportunities, challenges, and implications of deploying AI technologies to enable scientific and engineering research design and discovery in Canada.
The Question:
What are the legal/regulatory, ethical, social, and policy (LESP) challenges associated with deploying AI technologies to enable scientific/engineering research design and discovery in Canada?
The Sponsor:
The National Research Council of Canada (NRC), with support from the Canadian Institute for Advanced Research (CIFAR), Canadian Institutes of Health Research (CIHR), Natural Sciences and Engineering Research Council (NSERC), and Social Sciences and Humanities Research Council (SSHRC).
💡 In case you missed it:
To implement governance efforts for artificial intelligence (AI), new institutions need to be established at both the national and international levels. This paper outlines a scheme of such institutions and conducts an in-depth investigation of three key components of any future AI governance institution, exploring their benefits and associated drawbacks. The paper then highlights significant aspects of various institutional roles, particularly questions of institutional purpose, and frames what these could look like in practice by placing the debates in a European context and proposing different iterations of a European AI Agency. Finally, it offers conclusions and directions for future research.
To delve deeper, read the full article here.
Take Action:
We’d love to hear from you, our readers, on what recent research papers caught your attention. We’re looking for ones that have been published in journals or as a part of conference proceedings.