20 Comments
Jun 7, 2023Liked by Montreal AI Ethics Institute

Your readers may be interested in the following paper that discusses the impact of generative AI on the justice system. See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4460184. Thanks.

Aug 2, 2023Liked by Montreal AI Ethics Institute

Q: Are there any good resources that distill the system-level view of LLM-based AI systems that you are aware of?

A: A useful source is this explainer from the Ada Lovelace Institute; see the 'foundation model supply chain' graphic: https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/

Jul 21, 2023Liked by Montreal AI Ethics Institute

Earlier in the year I made a video from my point of view as an artist and designer in a third world country. I talk about how I've seen my industry negatively affected by automation even before generative AI art raised its ugly head. I hope you will give it a watch. I also mention Abhishek Gupta in it. I appreciate what you're doing.

https://www.youtube.com/watch?v=lKMRbiz9mNE

Jul 5, 2023Liked by Montreal AI Ethics Institute

(See my previous comment) Also see The African Observatory on Responsible Artificial Intelligence https://mailchi.mp/b70b7e611eb9/african-observatory-on-responsible-ai-newsletter-9341620?e=6322c7b80f

Jun 28, 2023Liked by Montreal AI Ethics Institute

"With great power comes great responsibility" β€”Β easy to say, but not easy to implement. Open source AI will allow a great many more people to have more power. There are dangers in this, just as there are with concentrating the power in the hands of a few multi-nationals or government agencies.

I do think that we need to be pushing ethics much more clearly as part of computer science in general and AI/ML in particular. It is clear that we are at a stage where this industry needs a code of conduct and a focus on more than just technical understanding.

Jun 21, 2023Liked by Montreal AI Ethics Institute

As with any complex regulation, the AI Act will naturally present a smaller marginal compliance cost for companies that can more easily master its complexities, raising the relative regulatory costs for smaller competitors. While this is not the goal of the EU in devising the AI Act, the EC and member states do need to act to redress this imbalance. For example, they could minimise the portion of technical documentation for AIA compliance that is not made publicly available by companies demonstrating compliance, i.e. move from a blanket assumption of commercial confidence for all such documentation to a justified redaction process for the public version. This would allow the more rapid spread of best practice, adhering to the principle that effective means for protecting health, safety, and fundamental rights in high-risk AI categories (including risk templates, evaluation scripts, and synthetic test sets for repeatable assessment of rights violations) should not be treated as a commercially confidential competitive advantage.

The shift to LLMs and a more accessible market in model adaptation also motivates such open approaches to technical documentation and testing resources, as these will be needed by clients anyway. EU-wide government coordination and investment in open, interoperable schemas for such testing and documentation resources, and support for their development and acceptance through sandbox trials, will be required.

Jun 7, 2023Liked by Montreal AI Ethics Institute

I find it difficult to accept assumption #1, i.e. that CEOs would divulge the "latest developments and capabilities of AI systems that their firms are working on." However, I totally agree with the suggestion of including them in the discussion.


Big Tech CEOs...hmmm, in an industry built almost entirely on competition, wouldn't that be a bit like asking fossil fuel executives to develop our environmental protection regulations?
