The AI Ethics Brief #49: AI in finance, data capitalism, Chinese approach to AI, the FOSS future, and more ...

Epistemology can inform data ethics, privacy, and trust on digital platforms

Welcome to another edition of the Montreal AI Ethics Institute’s weekly AI Ethics Brief that will help you keep up with the fast-changing world of AI Ethics! Every week we summarize the best of AI Ethics research and reporting, along with some commentary. More about us at montrealethics.ai/about.

This week’s Brief is a ~12-minute read.


Support our work through Substack

💖 To keep our content free for everyone, we ask those who can, to support us: become a paying subscriber for as little as $5/month. If you’d prefer to make a one-time donation, visit our donation page. We use this Wikipedia-style tipping model to support the democratization of AI ethics literacy and to ensure we can continue to serve our community.

*NOTE: When you hit the subscribe button, you may land on a Substack page where, if you’re not already signed in to Substack, you’ll be asked to enter your email address again. Please do so and you’ll be directed to the page where you can purchase the subscription.


This week’s overview:

✍️ What we’re thinking:

  • From the Founder’s Desk: Why free and open source software (FOSS) should be the future of Responsible AI

🔬 Research summaries:

  • Survey of EU Ethical Guidelines for Commercial AI: Case Studies in Financial Services

  • Data Capitalism and the User: An Exploration of Privacy Cynicism in Germany

  • The Epistemological View: Data Ethics, Privacy & Trust on Digital Platform

  • The Chinese Approach to AI: An Analysis of Policy, Ethics, and Regulation

📰 Article summaries:

  • Artificial intelligence research continues to grow as China overtakes US in AI journal citations (The Verge)

  • As China rises, the US builds towards a bigger role in AI (Wired)

  • Maths nerds, get ready: an AI is about to write its own proofs (Wired UK)

  • Climate fight “is undermined by social media’s toxic reports” (The Guardian)


But first, our call-to-action this week:

Register for our next meetup: AI Ethics in the APAC Region

We’re partnering with Women in AI and the University of New South Wales (UNSW) in Australia to host a discussion about democratising AI, disinformation, content moderation, AI in the APAC region and the region's responsibility in the AI debate.

📅 April 14th (Wednesday)
🕛 4:30PM–6:00PM EST
🎫 Get free tickets

Register now


✍️ What we’re thinking:

From the Founder’s Desk:

Why free and open source software (FOSS) should be the future of Responsible AI by Abhishek Gupta

There are several benefits to supporting the open-source model. Open-source software is typically more transparent, cost-effective, and robust, and it allows for quick improvements through community collaboration. We can support this model with both our attention and our financial resources, which will create better overall outcomes for the field of Responsible AI.

Finally, responsible AI is meant for the public good, and the public good cannot and should not be created without involving the public actively. FOSS offers a pathway to do that effectively.

To delve deeper, read the full article here.


🔬 Research summaries:

Survey of EU Ethical Guidelines for Commercial AI: Case Studies in Financial Services

This paper evaluates the efficacy of current EU ethical guidelines for commercial AI, specifically the “European framework on ethical aspects of artificial intelligence, robotics and related technologies”, and provides regulatory recommendations where there are gaps. We delve into three use cases in the financial services space to highlight the spectrum of ethical risks that arise from each implementation.

To delve deeper, read the full summary here.

Data Capitalism and the User: An Exploration of Privacy Cynicism in Germany

This study of Internet users in Germany dives deeply into the many dimensions of privacy cynicism. The researchers examine attitudes of uncertainty, powerlessness, and resignation towards data handling by Internet companies, and find that people do not consider privacy protections to be entirely futile.

To delve deeper, read the full summary here.

The Epistemological View: Data Ethics, Privacy & Trust on Digital Platform

Understanding the implications of employing data ethics in the design and practice of algorithms is one mechanism for tackling privacy issues. This paper addresses privacy, or a lack thereof, as a breach of trust for consumers. The authors explore how data ethics can be applied and understood differently depending on whom an application serves, and how it can build different variations of trust.

To delve deeper, read the full summary here.

The Chinese Approach to AI: An Analysis of Policy, Ethics, and Regulation

This paper explores China’s current AI policies, its future plans, and the ethical standards it is working on. The authors zoom in on China’s country-wide strategic effort, the ‘New Generation Artificial Intelligence Development Plan’ (AIDP). The plan’s strategic aims can be divided into three main goals: international competition, economic development, and social governance.

To delve deeper, read the full summary here.


📰 Article summaries:

Artificial intelligence research continues to grow as China overtakes US in AI journal citations (The Verge)

  1. What happened: Capturing some of the major trends in the AI domain in 2020, the article describes findings from the Stanford HAI AI Index report, which provides benchmarks and statistics describing the configuration of the AI ecosystem. One interesting trend to note is the meteoric rise of China in journal citations. Also noteworthy is that the field of AI ethics still poses challenges to the wider community and requires a lot more work before responsible AI is more commonly practised.

  2. Why it matters: The rise of China in the academic world is now popping up on the radar of the rest of the world as Chinese researchers begin publishing at more Western conferences. China already has a long history of publishing technical work, but perhaps language has posed a barrier to freer exchange.

  3. Between the lines: It will be interesting to observe over the next few years the extent to which other nations like India, given their tremendous technical talent, start to figure more prominently in these reports. It would also be great to see how regions outside the typical AI powerhouses, like the US, Canada, the UK, and China, showcase local innovation.

As China rises, the US builds towards a bigger role in AI (Wired)

  1. What happened: The article covers key trends from the recently published report by the National Security Commission on AI (MAIEI contributed to the ethics sections of the report). Key insights include boosting investments in non-defense AI applications, leveraging domestic semiconductor manufacturing expertise to become more competitive with China, and heightening government involvement in AI R&D efforts to boost global competitiveness.

  2. Why it matters: Continuing on the trend of an AI arms race (which is an unfortunate adversarial dynamic), nations now have even more incentives to shore up AI capabilities as geopolitics might harm their competitiveness in harnessing the power of AI, especially when it is in national security interests. The fragmentation of the AI landscape will lead to more problems than it solves. 

  3. Between the lines: More cross-collaboration and intercultural exchanges will be necessary to prevent escalations in the field of AI, for example in the use of AI in warfighting. A more accurate understanding of the real capabilities and limitations of AI systems will help nations be proportionate in their responses and perhaps also more harmonious towards each other as AI deployment becomes more widespread.

Maths nerds, get ready: an AI is about to write its own proofs (Wired UK)

  1. What happened: Written by Marcus du Sautoy, author of a book on how machines might start to write, paint, and think, this article discusses how AI systems might be closer than ever to generating mathematical proofs. Yes, computers have already been used to aid with proofs, notably the 1976 proof of the four colour theorem, but du Sautoy delineates the mathematician’s role as providing a narrative that ties together different ideas to yield interesting results, rather than a dry logical linking of equations. He sees an exciting future for mathematics and proofs, with computers serving as trusty companions along the journey of mathematical discovery.

  2. Why it matters: Proof writing serves as a foundation for developing more advanced theories and building on existing knowledge. In a fast-growing field where discovery of work from other subdomains becomes harder, the utilization of AI systems to link together hidden pieces of knowledge across a large body of literature will only help to accelerate the progress in the field of mathematics. 

  3. Between the lines: The framing of machines’ entry into the field is important because of the impacts they’ll have on people’s lives and livelihoods. If we position AI systems as supplementing our capabilities rather than supplanting them, we benefit from augmenting our impact rather than worrying about scenarios that may never come to pass.

Climate fight “is undermined by social media’s toxic reports” (The Guardian)

  1. What happened: Disinformation on social media has the potential to undermine fundamental tenets of a well-functioning society. In a report from Sweden, researchers found that, especially for long-run issues like climate change, polarization on social media negatively affects the efforts of scientists and others working to reverse the harm from climate change.

  2. Why it matters: While short-run issues manifest in more direct ways when disinformation infects the information ecosystem, as in the US Presidential elections, the long-run effects are less visible and hence more insidious. Climate change is already a hard challenge; burdened with online polarization, an already uphill battle becomes steeper.

  3. Between the lines: The ability to tell fact from fiction online is going to be an increasingly important skill. Perhaps we should consider teaching these skills in schools, so that the generations growing up in a disinformation ecosystem can learn to navigate it better.


From our Living Dictionary:

‘Techno-solutionism’

Techno-solutionism is the mistaken idea that all our problems can be solved using technological solutions.

👇 Learn why it’s a problem via our Living Dictionary.

Explore the Living Dictionary!

From elsewhere on the web:

AI: Ghost workers demand to be seen and heard (BBC News)

Alexandrine Royer from our team wrote an op-ed about the urgent need to regulate global ghost work for Brookings, which was cited in this BBC News article.

Civic Competence Against the Invisible Hand of AI (TEDx)

Abhishek Gupta (our founder) gave a TEDx talk about the role that civic competence will play in shaping our AI-enabled future.

Digital Privacy, Security and Society: Today's Challenges and Opportunities (Canadian International Council)

Professor Marianna Ganapini from our team recently spoke to the Canadian International Council about cybersecurity incidents, AI in social media, and a shift in the public opinion on digital privacy and security.

Guest post:

If you’ve got an informed opinion on the impact of AI on society, consider writing a guest post for our community — just send your pitch to support@montrealethics.ai. You can pitch us an idea before you write, or a completed draft.

In case you missed it:

Using Multimodal Sensing to Improve Awareness in Human-AI Interaction

With the increasing capabilities of AI systems, and established research demonstrating that human-machine combinations operate better than either in isolation, this paper presents a timely discussion on how we can craft better coordination between human and machine agents, with the aim of arriving at the best possible understanding between them. Doing so will enhance trust between the agents, and it starts with effective communication. This paper by Joshua Newn discusses how a human-computer interaction (HCI) approach can achieve this goal, framing intention-, context-, and cognition-awareness as the critical elements for successful communication between human and machine agents.

To delve deeper, read the full summary here.


Take Action:

Sign up for TheSequence.AI

It's hard to follow all the developments in machine learning. One way to keep up is to sign up for TheSequence.AI, a weekly ML newsletter with over 80K readers.

Events:

AI Ethics in the APAC Region

We’re partnering with Women in AI and the University of New South Wales (UNSW) in Australia to host a discussion about democratising AI, disinformation, content moderation, AI in the APAC region and the region's responsibility in the AI debate.

📅 April 14th (Wednesday)
🕛 4:30PM–6:00PM EST
🎫 Get free tickets