What does the EU AI Act mean for the future of AI regulation?

What is the EU AI Act?

Artificial intelligence (AI) is rapidly evolving, and we are seeing an increase in AI technologies, from ChatGPT to deepfakes. Whilst the development of AI could bring positive advancements in various fields – especially healthcare – we must ensure that the most important aspect of utilising this technology is covered: regulation.

In reaction to the current landscape of AI and its pace of progression, the European Commission is setting a standard with its new approach to AI regulation through the introduction of the EU Artificial Intelligence Act.

Ahead of the UK, the US and Asia, the EU AI Act is the first pan-jurisdictional, pan-governmental attempt to regulate AI that we've seen globally. It aims to establish clear requirements and obligations for the development, marketing and use of AI in the European Union, especially high-risk AI such as medical AI.

The regulatory framework draws a line in the sand, signalling that whilst AI systems are welcome in the European market, stringent safety measures must be in place. After all, regulation exists for one very important reason: to reduce any potential harm from a product.

A regulatory framework for artificial intelligence

For those of you developing medical AI systems, don’t panic just yet: the Act is still in draft form and is subject to review and a final vote before it is written into law. As well as this, each EU member state will have the right to decide whether or not to adopt the Act – meaning this regulation could apply in some EU countries and not others. Confusing? Maybe, but let me explain.

My bet is that most countries will adopt it in some form or another, but it’s not guaranteed, and its implementation across Europe won’t be automatic. There will also be a "transition period", expected to start late this year or early 2024, in which businesses operating in the EU will have approximately 24 to 36 months to comply with the new regulations – similar to the transition period under the UK’s Medical Device Regulation (MDR).

It’s important to keep in mind that, having now left the EU, the UK will need to establish its own AI regulation, either from scratch or by using the EU framework as a starting point. It’s not yet clear which route the country will take.

How will AI be regulated?

Now let’s get into the depths of it. The EU AI Act classifies devices into four risk categories: minimal risk (currently proposed to be unregulated), limited risk, high risk, and unacceptable risk.

Minimal risk 

Under the minimal risk category, AI systems could be developed and used in the EU without conforming to additional legal obligations. However, it is proposed that codes of conduct will be developed to encourage providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems.

Limited risk

Limited-risk AI systems – such as those that interact with humans via chatbots, emotion recognition systems, or systems that generate or manipulate image, audio or video content, like deepfakes – would be subject to a set of transparency obligations.

High risk

Medical AI devices, such as those involved in clinical decision support or diagnosis, will fall under the high-risk category (Clauses 30 and 75), as most AI as a medical device (AIaMD) performs exactly these functions.

Importantly, the EU AI Act acknowledges that AI is not limited to stand-alone applications or integration within medical devices to process medical data for a clinical purpose. It recognises that AI can also serve as a safety component within these systems. As such, the Act emphasises that 'embedded AI' must also demonstrate compliance, especially if it is part of a Class III system. If you are developing safety-critical AI components for medical software, you need to be aware of this!

For my frequent readers, you’ll remember the blog I wrote recently on the role of Apple and Google in taming the health app wild west. As distributors of medical apps, they have a responsibility under the EU MDR to check for a CE mark on the apps in their app stores. This new Act doubles down on this: once in law, Apple and Google will be classified as distributors of AI systems, further obliging them to check the compliance status of the systems they distribute before listing them on their stores.

Unacceptable risk 

The EU AI Act introduces an unacceptable risk category – something I’ve never seen before in medical regulation. This aims to safeguard against use cases that could have severe societal consequences by making their development illegal. For example, AI that ranks people using scoring systems in order to decide whether or not they’re allowed to get a job or buy a house.

How will AI regulation impact you?

Class III medical devices will require a conformity assessment, including obtaining a CE mark through an audit conducted by a Notified Body, similar to the process outlined in the EU MDR. But, as per Annex II, you won’t need a separate CE mark; instead, your EU AI Act compliance will be audited alongside the existing EU MDR requirements.

In short, medical device regulations will still stand; it’s just that if you’re developing medical AI, there will be a couple of extra documents to complete before you’re ready for an audit by a notified body.

For example, whilst AI medical devices are already audited to ISO 13485, the EU AI Act will require companies to incorporate elements of ISO 42001 (once published and harmonised) into their QMS. Expect your future QMS audits to assess compliance with both ISO 13485 and AI-specific requirements!

It’s never too early to start thinking about your regulatory strategy, and Hardian can help

Whilst immediate action may not be required, it is crucial to remain attentive to the evolving regulatory landscape. If you’re thinking of developing AI, it’s incredibly important to define your intended use statement now and to start discussing how the EU AI Act will impact you. Remember, preparation is key in navigating the changing landscape of AI regulation, and it is never too early to start!

At Hardian, we’re closely monitoring these developments and taking proactive steps to align our internal processes and documentation with the upcoming EU AI Act standards. By doing so, we aim to ensure that we are fully prepared to help clients with the requirements when the time comes.

Hardian Health is a clinical digital consultancy focused on leveraging technology into healthcare markets through clinical strategy, scientific validation, regulation, health economics and intellectual property.

By Dr Hugh Harvey, Managing Director
