
Using AI for Medical Devices – legal comments

Publication date: December 09, 2024

Introduction

Artificial intelligence is an invention that is profoundly changing the world, and one of the most interesting areas of its use is the medical industry. Pharmaceuticals discovered and developed with the help of AI will soon be available in pharmacies. This industry is also closely linked to the use of artificial intelligence in medical devices. On the one hand, AI can be used in clinical trials of a specific device, i.e. an AI program can help bring the device to market faster. On the other hand, the question arises of authorising a medical device whose very operation is based on AI.

Studies show that artificial intelligence is a very effective tool in medicine. For example, AI systems used in cardiology diagnostics detected arrhythmia in as many as 91% of cases. They can also support examinations such as X-ray, computed tomography or magnetic resonance imaging. AI algorithms additionally help in triaging patients: based on information about a patient's health, the program can set the priority of medical assistance and select the appropriate medical facility.

Moving on to the considerations proper requires citing the legal provisions relevant to this topic. The issue of medical devices and their placing on the market is regulated in Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1, as amended), hereinafter referred to as the MDR (the abbreviation comes from Medical Device Regulation). The European legislator decided to introduce a regulation instead of a directive; this made it possible to harmonise the law on this issue uniformly throughout the European Union. The act relating to the functioning of artificial intelligence is Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689), hereinafter referred to as the AI Act. The use of artificial intelligence in medical devices must always comply with the regulations cited above.

The use of artificial intelligence in a medical device

The definition of a medical device can be found in Article 2, point 1 of the MDR. According to it, a medical device is any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used – alone or in combination – in humans for at least one of the following specific medical purposes:

– diagnosis, prevention, monitoring, prediction, prognosis, treatment or mitigation of disease,

– diagnosing, monitoring, treating, alleviating or compensating for an injury or disability,

– examining, replacing or modifying an anatomical structure or a physiological or disease state or process,

– providing information through in vitro testing of samples taken from the human body, including those from organ, blood and tissue donors,

and which does not achieve its principal intended action by pharmacological, immunological or metabolic means, in or on the human body, but which may be assisted in its function by such means.

The above definition applies to an AI system that is itself a medical device. If the AI system is instead used as part of the proper use of such a device, the definition of an "accessory for a medical device" in Article 2, point 2 of the MDR applies: an article which, while not itself a medical device, is intended by its manufacturer to be used together with one or more specific medical devices specifically to enable the medical device to be used in accordance with its intended purpose, or to specifically and directly support the medical functionality of the medical device in terms of its intended purpose.

Medical devices using AI must be appropriately qualified under the AI Act. First, the concept of an AI system should be explained. According to Article 3, point 1 of the AI Act, it means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Under this regulation, the distinction between general-purpose systems and high-risk systems is very important. Article 6 of the AI Act introduces the concept of a high-risk AI system; if a given system is qualified in this way, the handling of the medical device concerned will be subject to additional requirements.

On the basis of the cited provision, two conditions should be analyzed, both of which must be met simultaneously:

  • Criterion (a): safety component and reference to Union harmonisation legislation.
    The medical device is covered by Union harmonisation legislation, in particular the MDR. If the AI system serves as a safety component of such a device, or is itself such a device, the condition is met.
  • Criterion (b): conformity assessment by a third party.
    In the case of medical devices (with some exceptions), EU regulations require a conformity assessment by a notified body (i.e. a third party) – this topic is developed later in this article. If the AI system that is related to the safety of the medical device, or that is itself a medical device, is subject to such a conformity assessment, the condition is also met.

The conclusion is clear – artificial intelligence systems that are medical devices or safety elements of such devices will be classified as high-risk AI systems[1].
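
To make the cumulative nature of these two conditions more tangible, the following minimal sketch (in Python, purely for illustration) encodes the Article 6(1) test as a simple decision function. The data structure and field names are assumptions made for this sketch; they are not terms or procedures laid down in the AI Act or the MDR.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field and function names are assumptions made
# for this example, not terms defined in the AI Act or the MDR.
@dataclass
class AiMedicalSystem:
    is_medical_device: bool                 # the AI system is itself a device under the MDR
    is_safety_component: bool               # the AI system is a safety component of a device
    requires_third_party_assessment: bool   # conformity assessment by a notified body is required

def is_high_risk(system: AiMedicalSystem) -> bool:
    """Article 6(1) AI Act test: both conditions must be met cumulatively."""
    condition_a = system.is_medical_device or system.is_safety_component
    condition_b = system.requires_third_party_assessment
    return condition_a and condition_b

# Example: diagnostic software assessed by a notified body -> high-risk
print(is_high_risk(AiMedicalSystem(True, False, True)))   # True
# Example: neither a device nor a safety component -> not high-risk under Article 6(1)
print(is_high_risk(AiMedicalSystem(False, False, True)))  # False
```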

This involves additional requirements that providers of such systems will have to meet. They are specified in Chapter III, Section 2 of the AI Act. According to Article 9 of the AI Act, a risk management system is required for high-risk AI systems. A risk management system is understood as a continuous, iterative process, planned and run throughout the entire life cycle of a high-risk AI system, requiring regular systematic review and updating (Article 9, paragraph 2 of the AI Act). The relationship between the two regulations' approaches to risk may prove problematic. According to Article 2, point 23 of the MDR, "risk" should be understood as the combination of the probability of occurrence of harm and the severity of that harm. Article 10 of the MDR sets out the general obligations of manufacturers that must be met when producing and placing medical devices on the market, and its paragraph 9, letter (e) refers to "risk management". The corresponding definition can be found in Annex I and is identical to the one quoted earlier, except that it refers to the life cycle of the device rather than of the AI system. Since each regulation frames risk management from its own perspective, the question arises whether two separate risk management systems should be created.
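
Neither the AI Act nor the MDR prescribes a specific calculation method; in practice, the "combination of probability and severity" is often operationalised as a simple risk matrix. The sketch below is only an illustration of that idea: the ordinal scales, the multiplication rule and the acceptability thresholds are assumptions chosen for this example, not requirements of either regulation.

```python
# Illustrative risk-matrix sketch: the 1-5 ordinal scales and the acceptability
# thresholds are assumptions for this example; the MDR and AI Act do not prescribe them.
def risk_score(probability: int, severity: int) -> int:
    """Combine probability of harm and its severity (each rated 1-5)."""
    return probability * severity

def risk_level(score: int) -> str:
    if score >= 15:
        return "unacceptable - redesign or additional risk controls required"
    if score >= 8:
        return "tolerable only if further reduction is impracticable"
    return "acceptable - document and monitor"

# Example: harm is unlikely (2) but its consequences would be serious (4)
score = risk_score(probability=2, severity=4)
print(score, "->", risk_level(score))  # 8 -> tolerable only if further reduction is impracticable
```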

Before a medical device can be placed on the market, clinical trials will generally need to be conducted. A clinical trial (in the MDR: "clinical investigation") is defined in Article 2, point 45 of the MDR as a systematic investigation involving one or more human subjects, undertaken to assess the safety or performance of a device. The general principles for conducting clinical trials are set out in Article 62 of the MDR. Clinical trials of medical devices must be designed, conducted and supervised in a way that protects the rights, safety and dignity of subjects and generates reliable scientific data. They are conducted to verify the device's compliance with the safety and performance requirements and to assess its benefits and risks. The trials require authorisation by the Member State concerned and the absence of a negative opinion from an ethics committee. Participants must also give informed consent, which they can withdraw at any time during the trial. Appropriate qualifications of investigators, compliance with data protection principles and minimisation of risk to subjects are also required. The trial sites must be suitably adapted, and non-EU sponsors must appoint a legal representative in the Union.

Article 61 of the MDR concerns the clinical evaluation of medical devices. In short, it specifies that confirmation of the safety and performance of a medical device requires clinical data, which must be obtained in accordance with strictly defined rules. Clinical evaluation is a key element of the process of ensuring that the product meets the general safety and performance requirements. The provision refers to Annex I, under which the manufacturer of a given medical device must justify why the product is effective and safe. Devices must be designed and manufactured in such a way that, under normal conditions of use, they are suitable for their intended purpose. The manufacturer will have to satisfy the authorities that the AI system is free from defects. This is very important for the proper use of algorithms, because the result of the analysis depends heavily on the quality of the data taken into account. If incorrect data are entered, erroneous results will be generated – a phenomenon referred to as "AI hallucinations". This can lead to a completely incorrect clinical trial result and, consequently, to the device not being allowed on the market. It is the manufacturers' responsibility to introduce mechanisms that prevent the AI system from analysing erroneous data, or that allow such data to be handled as safely as possible.
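
A purely illustrative sketch of such a mechanism is shown below: input records are checked against plausibility bounds before the model is allowed to analyse them. The field names, ranges and rejection policy are assumptions made for this example and are not taken from the MDR, the AI Act or any harmonised standard.

```python
# Illustrative input-validation sketch: plausibility bounds and field names are
# assumptions chosen for this example, not values taken from any regulation.
PLAUSIBLE_RANGES = {
    "heart_rate_bpm": (20, 250),
    "systolic_bp_mmhg": (50, 260),
    "body_temp_c": (30.0, 43.0),
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may be analysed."""
    problems = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:
            problems.append(f"missing value: {field}")
        elif not (low <= value <= high):
            problems.append(f"implausible value: {field}={value}")
    return problems

record = {"heart_rate_bpm": 480, "systolic_bp_mmhg": 120, "body_temp_c": 36.7}
issues = validate_record(record)
if issues:
    print("record excluded from analysis:", issues)  # implausible heart rate
else:
    print("record passed to the AI model")
```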

Article 61(4) of the MDR states that Class III and implantable medical devices always require clinical trials (with some exceptions where this is not necessary). According to Article 51 of the MDR, devices are divided into Classes I, IIa, IIb and III, taking into account the intended purpose of the devices and the risks associated with them (…). The concept of a medical device is very broad, ranging from a wheelchair to a heart valve or a prosthesis. It can safely be assumed that devices using AI will be more advanced and will therefore pose a greater risk in use. This means that under the MDR, in the vast majority of cases it will not be possible to dispense with a clinical trial.

In the context of AI systems, it is worth paying attention to Article 15 of the MDR, which specifies the role of the person responsible for regulatory compliance. This provision obliges manufacturers and their representatives to ensure that their medical devices comply with legal regulations and technical standards. The person responsible for regulatory compliance plays a key role in supervising documentation, quality control and the compliance of devices with legal requirements, in order to ensure patient safety and compliance with EU regulations. An exception applies to small companies, which can use the services of external specialists instead of employing such a person on a permanent basis. Companies placing a medical device based on an AI system on the market will therefore have to employ at least one specialist with appropriate expertise in AI and in regulatory issues in the field of medical devices. That expertise must be demonstrated by at least one of the following qualifications:

  1. a diploma, certificate or other evidence of formal qualification awarded on completion of a university course of study or a course of training recognised as equivalent by the Member State concerned in law, medicine, pharmacy, engineering or another relevant scientific discipline and at least one year of professional experience in regulatory affairs or quality management systems in the field of medical devices;
  2. four years of professional experience in regulatory affairs or quality management systems in the field of medical devices.

It is undoubtedly worth noting that artificial intelligence increasingly helps doctors diagnose diseases. Empatica took advantage of it in research on epileptic seizures: patients wore smartwatches that constantly monitored their health, and through exposure to specific physiological patterns, the program learned to recognize when a seizure occurs. However, the biggest problem with the EU regulations is the classification of such medical devices as high-risk AI systems, as this may make it difficult to bring a given system to the market and keep it there. In the context of these considerations, it is also worth asking whether clinical trials themselves can be accelerated with artificial intelligence. Since the products themselves have proven more effective, there is no obstacle to asking such a question.

Accelerating Clinical Trials with AI

Clinical trials are a very laborious process, regulated by the EU legislator to a large extent. Entities involved in these activities must meet a number of conditions that follow directly from the MDR. Annex XIV to the regulation sets out the requirements for conducting clinical trials. They must be conducted in accordance with recognized ethical principles at every stage – from planning, through implementation, to publication of the results. This means that the studies must protect the rights, dignity and safety of participants while ensuring scientific reliability. This aspect is related to the issue of personal data protection under Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1, as amended), hereinafter referred to as the GDPR. Databases used to train AI models must be populated with the appropriate data of the individuals needed to conduct the study. In this context, physiological data are significant: according to Article 4, point 1 of the GDPR, they may constitute personal data if the person to whom they relate can be identified. This raises ethical concerns about the processing of such data[2].

The second aspect of Annex XIV is the use of appropriate research methodology, which is reflected in the following requirements:

  1. Research plan:
    • Research must be based on a solid plan reflecting current scientific and technical knowledge.
    • The purpose of the plan is to confirm or refute the manufacturer’s claims regarding the safety, effectiveness, and benefit-risk ratio of the investigational device.
    • An adequate number of participants must be ensured for statistically significant conclusions (a simple illustration of such a calculation follows this list).
    • The statistical methodology and study design must be adequately justified.
  2. Adaptation to the tested product:
    • The procedures and methods used in the test must be consistent with the characteristics and intended use of the product being tested.
  3. Representativeness and study conditions:
    • Studies should be conducted by appropriately trained personnel in a clinical environment that reflects the actual conditions of use of the device in the target population.
  4. Study endpoints:
    • They must be clearly defined, relevant to the intended purpose of the device, and include clinical benefits, safety and effectiveness.
    • The assessment of endpoints must be based on validated scientific methods.
  5. Access to information:
    • Researchers must have access to all technical and clinical data relating to the device.
    • Personnel participating in the study must receive appropriate training in the use of the device, study principles, and good clinical practice.
  6. Research report:
    • Once the study is completed, a detailed report is prepared, signed by the researcher, including all data – both positive and negative results.

Conducting clinical trials in accordance with the requirements cited above is very time-consuming – it can take up to several years[3]. Artificial intelligence offers many advantages in the process of obtaining information: AI can relatively quickly find suitable individuals whose data are relevant to a specific study. The US National Institutes of Health (NIH) indicates that AI tools, such as the TrialGPT algorithm, can speed up the process of matching participants to clinical trials. Automated analyses help doctors select suitable participants faster, reducing the time needed for recruitment while maintaining high accuracy in assessing participants' eligibility. This significantly shortens the duration of the study and streamlines the recruitment process[4].
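
The sketch below does not reproduce TrialGPT itself; it only illustrates, on assumed data structures, the kind of automated eligibility pre-screening that such tools perform and refine at much larger scale with the help of language models. All field names, criteria and trial identifiers are hypothetical.

```python
# Illustrative pre-screening sketch: the criteria and patient fields are assumptions;
# this is not the TrialGPT algorithm, only an example of rule-based matching that
# AI tools automate at much larger scale.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    diagnosis: str
    on_anticoagulants: bool

@dataclass
class Trial:
    name: str
    min_age: int
    max_age: int
    required_diagnosis: str
    excludes_anticoagulants: bool

def is_eligible(patient: Patient, trial: Trial) -> bool:
    if not (trial.min_age <= patient.age <= trial.max_age):
        return False
    if patient.diagnosis != trial.required_diagnosis:
        return False
    if trial.excludes_anticoagulants and patient.on_anticoagulants:
        return False
    return True

patient = Patient(age=58, diagnosis="atrial fibrillation", on_anticoagulants=True)
trials = [
    Trial("AF-Device-01", 40, 75, "atrial fibrillation", excludes_anticoagulants=False),
    Trial("AF-Device-02", 18, 65, "atrial fibrillation", excludes_anticoagulants=True),
]
print([t.name for t in trials if is_eligible(patient, t)])  # ['AF-Device-01']
```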

In addition, artificial intelligence programs are more effective at analyzing already collected data, and they do it much faster[5]. For example, electronic devices worn by study participants during the trial can monitor the patient's health on an ongoing basis, without the need for doctors to intervene. Thanks to AI, it is also possible to design the trials themselves: algorithms simulate trials based on specific data entered into the system – initially, these may be fictitious data. This method makes it possible to select the right strategy to obtain the optimal trial outcome. Additionally, artificial intelligence has far greater processing capacity than the entities conducting clinical trials; humans are unable to analyze such huge amounts of data. As a result, AI can provide very accurate and extensive trial reports, which makes it possible to detect more imperfections in a given product[6].

Summary

Artificial intelligence is becoming an ordinary tool, both in the operation of medical devices and in the acceleration of clinical trials. However, the accuracy and effectiveness of these studies depend not only on the data used to train the system, but also on a properly written AI program. Every manufacturer using artificial intelligence should bear in mind the legal provisions cited in this article, which are in force or will soon be in force throughout the European Union. The biggest problems appear to be issues related to the processing of personal data under the GDPR and AI hallucinations. The ethics of conducting clinical trials must also be a very important point of reference: under the MDR, the inherent rights of an individual participating in a study cannot be violated. The topic remains open and will certainly be the subject of further doctrinal considerations. In particular, the issue of qualifying medical devices using artificial intelligence as high-risk AI systems requires further regulation; such a decision by the EU legislator may result in manufacturers circumventing the law or refraining from introducing a given product to the EU market.


[1] K. Stelmasiak, M. Świerczyński, Z. Więckowski, Clinical trials of medical devices using intelligent algorithms – an introduction to the discussion, [in:] Law in Action. Civil Cases 50/2022, E. Holewińska-Łapińska (ed.), Institute of Justice, Warsaw, p. 125.

[2] M. Melke, J. Greser, AI in clinical trials: a chance for development?

[3] M. Beckwith, How Long Do Clinical Trials Take for Medical Devices?

[4] Q. Jin, Z. Wang, C.S. Floudas, F. Chen, C. Gong, D. Bracken-Clarke, E. Xue, Y. Yang, Z. Lu, Matching patients to clinical trials with large language models, https://www.nature.com/articles/s41467-024-53081-z

[5] S. Kubala, Improving data management and patient monitoring with AI, https://webmakers.expert/blog/usprawnienie-zarzadzania-danymi-i-monitorowania-pacjentow-dzieki-ai

[6] ITRex, Why Using Artificial Intelligence in Clinical Trials is Becoming the New Normal, https://hackernoon.com/why-using-artificial-intelligence-in-clinical-trials-is-becoming-the-new-normal
