Your readers may be interested in the following paper that discusses the impact of generative AI on the justice system. See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4460184. Thanks.
Hi Maura, thank you so much for sharing this paper with us - this looks very interesting and we'd love to feature it as a research summary. Can you send us a quick email and we can discuss? You can hit reply to the newsletter and we'll connect you to our staff. Thanks!
Q: Are there any good resources that distill the system-level view of LLM-based AI systems that you are aware of?
A: A useful source is this explainer from the Ada Lovelace Institute; see the 'foundation model supply chain' graphic: https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/
Thank you for sharing, Yolanda! This is indeed a great resource.
Earlier in the year I made a video from my point of view as an artist and designer in a third-world country. I talk about how I've seen my industry negatively affected by automation even before generative art reared its ugly head. I hope you will give it a watch. I also mention Abhishek Gupta in it. I appreciate what you're doing.
https://www.youtube.com/watch?v=lKMRbiz9mNE
Very interesting and well-articulated video, Roxane. Would you be interested in sharing a write-up / summary of the points you make in the video as a guest post? We'd love to feature it on the MAIEI website. You can reach us by hitting reply on the newsletter or via our contact form here: https://montrealethics.ai/contact/
Hi there
Sure, I'd be honoured to provide a guest post. Please let me know your requirements in terms of length, format, etc., and I'll make some time for it. Also, if there was any specific thing in the video that resonated with you, let me know so I can be sure to include that.
Perfect! We can send over the template - can you let us know your email address? (You can hit reply to this newsletter in your inbox and we can send it over.) Thanks!
Hi, I replied to the newsletter, but got an auto-responder saying
"Your comment didn't go through. Click below to post your comment on the website".
Naturally with all the spam harvesters these days, I'd rather not post my email in the comments, so please may I ask you to get my email address off my website? It's obfuscated in the code there at least: https://www.roxanelapa.com/contact/
Absolutely, I've just submitted an entry on your form. Will also investigate the issue you mentioned about the auto-responder. Not sure why that is happening. Thanks!
(See my previous comment) Also see The African Observatory on Responsible Artificial Intelligence https://mailchi.mp/b70b7e611eb9/african-observatory-on-responsible-ai-newsletter-9341620?e=6322c7b80f
Thank you for sharing this with us, Yolanda!
"With great power comes great responsibility" — easy to say, but not easy to implement. Open source AI will allow a great many more people to have more power. There are dangers in this, just as there are with concentrating the power in the hands of a few multi-nationals or government agencies.
I do think that we need to be pushing ethics much more clearly as part of both computer science in general and AI/ML in particular. It is clear that we are at a stage where this industry needs a code of conduct and a focus on more than just technical understanding.
Good point, Mike. I think this is a very interesting point in the history of software, where there isn't a clear-cut argument to be made for open-sourcing AI systems, given some of the dangers of misuse.
As for incorporating ethics into educational programs, we're absolutely with you on that - the work being done by the Embedded Ethics program at Harvard is interesting: https://embeddedethics.seas.harvard.edu/
As with any complex regulation, the AI Act will naturally present smaller marginal compliance costs for companies that can more easily master its complexities, raising the relative regulatory costs for smaller competitors. While this is not the goal of the EU in devising the AI Act, the EC and member states do need to act to redress this imbalance.

For example, they could minimise the portion of technical documentation for AIA compliance that is not made publicly available by companies demonstrating compliance, i.e. moving from a blanket assumption of commercial confidence for all such documentation to a justified redaction process for the public version. This would allow the more rapid spread of best practice, adhering to a principle that effective means for protecting health, safety, and fundamental rights in high-risk AI categories (including risk templates, evaluation scripts, and synthetic test sets for repeatable assessment of rights violations) should not be treated as a commercially confidential competitive advantage.

The shift to LLMs and a more accessible market in model adaptation also motivates such open approaches to technical documentation and testing resources, as these will be needed by clients anyway. EU-wide government coordination and investment in open, interoperable schemas for such testing and documentation resources, and support for their development and acceptance through sandbox trials, will be required.
That's a very nuanced and accurate view of the EU AIA; we definitely agree that this would need to be addressed!
I find it difficult to accept assumption #1, i.e. that CEOs would divulge the "latest developments and capabilities of AI systems that their firms are working on." However I totally agree with the suggestion of including them in the discussion.
Thank you for sharing that, Louise. That's a fair point; #1 might not always be the case, though the meaningful and (more importantly) non-confrontational framing might make them more forthcoming (hopefully!), and we'd all stand to benefit in the AI ecosystem.
"Non-confrontational," as you write, is key and fundamental. Sadly, it sounds foreign; hopefully, it gets domesticated.
Big Tech CEOs... hmmm, in an industry built almost entirely on competition, wouldn't that be a bit like asking fossil fuel executives to develop our environmental protection regulations?