11 May 2023 marked the day the European Parliament’s Committees gave the green light to the proposed European Artificial Intelligence Act (“EU AI Act”). It is now scheduled for a final debate in Parliament on 13 June 2023. Some Members of the European Parliament are heralding it as the “landmark legislation for the digital landscape, not just for Europe but also for the entire world” (Brando Benifei, Italian Member of the European Parliament and AI Act co-rapporteur).

However, not all countries see Europe’s “pro-consumer” approach as the only way to regulate the fast-evolving technology. The UK revealed its approach to regulating AI in its AI White Paper on 29 March 2023, which is not dissimilar to the US Blueprint for an AI Bill of Rights: both favor decentralized, context-specific, sector-based, and industry-driven frameworks. Similarly, China has revealed draft measures for managing generative artificial intelligence services, which include delegating penalties to existing laws and administrative regulations.

Impact of divergence of approach

Given how broadly AI products are deployed, creating a ‘one-size-fits-all’ regulatory framework for the technology is difficult, as evidenced by the geopolitical disagreements on the best approach to govern it. Technology, however, moves faster than regulation. Strict regulatory frameworks such as the EU’s proposed approach could therefore end up being an obstacle to progress and innovation, while approaches such as the UK’s may be looked upon more favorably.

Nonetheless, at the heart of all proposed AI regulation is a consistent view that some form of control over AI is needed, as demonstrated by the various reactions globally to restrict ChatGPT™. Yet the lack of a universal regulatory framework is where challenges could arise for organizations that develop and use AI across multiple jurisdictions. Although some countries appear aligned, such as the UK and US with their five-principles approach to guide the design, use, and deployment of automated systems, international companies cannot ignore differing obligations such as those under the EU AI Act, especially given the potentially eye-watering fines for breach. It may therefore be that companies comply with the most onerous regulations, even where other territories are more practical, to avoid their technology falling foul of the stricter territories’ obligations and being fined.

Liability is also complicated by complex AI value chains that can incorporate many different actors in different roles. The UK Government recognizes this and has therefore delegated the role of defining liability to sector-specific regulators. But while UK businesses wait for the regulators to issue such guidance, providers and distributors of AI are left in limbo about what non-compliance means in terms of penalties and whether they are, or will be, compliant. Conversely, the EU approach sets out clear thresholds of risk categories along with the obligations companies must comply with, and failure to do so results in a blanket fine, making it potentially easier for companies simply to align with the EU AI Act requirements from the outset of development.

Ultimately it will take a little time to see how the EU AI Act works in practice, and countries outside the EU will be observing closely how businesses react to the new rules and how they are enforced.

What should companies do now?

  • Assess risk management strategy and impact of regulations: It is critical that businesses take immediate steps to map what AI technology they have and assess the potential impact of the regulatory frameworks on their business. Any companies currently developing or using in-house AI tools, or licensing AI, should consider whether their existing governance measures are adequate.
  • Determine who is responsible for AI governance: Senior leaders should identify who is responsible for AI governance and risk management within the organization and consider setting up a dedicated AI governance team. Experience of cyber security issues arising in the context of data protection regulation suggests that this should be considered at board level.
  • Consider reviewing your liability provisions: With new guidance and legislation on AI fast approaching, and with potentially eye-watering fines under the EU AI Act, businesses should consider the flow of possible liabilities through their licenses and other commercial agreements. Businesses should assess whether it is appropriate or necessary to update existing contracts with third parties to mitigate the risk of liability, through the allocation of responsibilities under the contract, liability caps, or indemnity provisions.

Written by Ellen Keenan-O’Malley, EIP Solicitor and Codiphy Team Member
