Dave Lewis:

As with any complex regulation, the AI Act will naturally present smaller marginal compliance costs for companies that can more easily master its complexities, raising the relative regulatory costs for smaller competitors. While this is not the EU's goal in devising the AI Act, the EC and member states do need to act to redress this imbalance. One option is to minimise the portion of technical documentation for AIA compliance that is withheld from the public by companies demonstrating compliance, i.e. moving from a blanket assumption of commercial confidence for all such documentation to a justified redaction process for the public version. This would allow best practice to spread more rapidly, in line with the principle that effective means of protecting health, safety and fundamental rights in high-risk AI categories (including risk templates, evaluation scripts and synthetic test sets for repeatable assessment of rights violations) should not be treated as a commercially confidential competitive advantage.

The shift to LLMs and a more accessible market in model adaptation also motivates such open approaches to technical documentation and testing resources, as clients will need them anyway. EU-wide government coordination and investment in open, interoperable schemas for these testing and documentation resources, together with support for their development and acceptance through sandbox trials, will be required.
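To make the idea of an open, interoperable schema a little more concrete, here is a minimal sketch of what a machine-readable record for publishable high-risk AI testing documentation could look like, with a justified-redaction field rather than blanket confidentiality. All field names, identifiers and URLs are illustrative assumptions, not drawn from the AIA or any existing standard; an actual schema would have to emerge from EU-wide standardisation and sandbox trials.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RedactionEntry:
    """A field withheld from the public version, with a stated justification."""
    field_name: str
    justification: str

@dataclass
class ComplianceTestRecord:
    """Hypothetical record for one repeatable assessment of a high-risk AI system."""
    system_id: str
    risk_template: str               # reference to a shared, openly published risk template
    evaluation_script: str           # URL or identifier of an open evaluation script
    synthetic_test_set: str          # identifier of a reusable synthetic test set
    rights_assessed: list            # e.g. ["non-discrimination", "privacy"]
    result_summary: str
    redactions: list = field(default_factory=list)  # justified redactions, not blanket confidence

# Illustrative public record: everything open except one justified redaction.
record = ComplianceTestRecord(
    system_id="resume-screener-v2",
    risk_template="eu-sandbox/employment-screening/v0.1",
    evaluation_script="https://example.org/evals/bias_audit.py",
    synthetic_test_set="synthetic-cv-corpus-2024",
    rights_assessed=["non-discrimination"],
    result_summary="Selection-rate ratio within agreed threshold across protected groups.",
    redactions=[RedactionEntry("training_data_sources", "commercial confidence, justification reviewed")],
)
print(json.dumps(asdict(record), indent=2))
```

The point of such a structure is that the risk template, evaluation script and synthetic test set are referenced as shared, reusable resources, while anything kept back must be itemised and justified in the public version.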

Montreal AI Ethics Institute:

That's a very nuanced and accurate view of the EU AIA; we definitely agree that this would need to be addressed!
