Publication date: November 29, 2024
Artificial intelligence is increasingly used in the search for new drugs. Pharmaceutical companies are investing in AI programs that promise to develop new drugs and bring them to market more efficiently. Studies indicate that the market for artificial intelligence in drug discovery will grow fivefold within five years [1]. The effectiveness of the technology itself also deserves attention. Boston Consulting Group has examined the question of how many active molecules have been discovered by AI [2]. The authors of that review conclude that artificial intelligence can be expected to roughly double the overall productivity of pharmaceutical research and development. It should be remembered, however, that the molecules discovered by AI are still in the testing phase, although the results so far are satisfactory. Given the growing popularity of such uses of AI, legislators must address these issues in generally applicable law. Lawmakers in the United States and the European Union are introducing new rules governing AI systems, which may have a significant impact on the production of medicinal products and their marketing authorization.
In the United States, the introduction of new drugs to the market is the responsibility of the Food and Drug Administration (FDA). The agency is divided into several centers with specific remits; for the purposes of this discussion, the most important is the Center for Drug Evaluation and Research (CDER). Its tasks include reviewing drug approval applications, administering good manufacturing practice regulations, determining which drugs require a prescription and which do not, monitoring advertising, and collecting and analyzing safety data on drugs already on the market.
In 2024, an AI Council was established within the center and entrusted with a number of tasks. First and foremost, the Council's goal is to oversee, coordinate, and consolidate CDER's activities related to the use of AI and to support innovation. The Council will also help CDER implement the requirements of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed by the President on October 30, 2023. The executive order is intended to protect citizens from abuses that may result from the use of artificial intelligence: manufacturers of advanced and potentially dangerous AI programs will be required to report the results of safety tests to the federal government. The order, however, lacks provisions on the protection of personal data. Instead of regulating the matter itself, it calls on Congress to create appropriate rules protecting US citizens from the use of their sensitive data to train AI models.
The executive order also assigned new competences to the National Institute of Standards and Technology (NIST), whose role is to develop new standards and oversee their further development. These standards are later to be deployed for testing in sectors of American critical infrastructure [3].
The Polish equivalent of this institution is the Central Office of Measures. From the perspective of this discussion, the most important area addressed in the executive order is medicine. One of its stated priorities is the development of inexpensive, life-saving drugs, and the Department of Health and Human Services is tasked with preventing dangerous practices related to the use of artificial intelligence in health care.
Additionally, the AI Council at CDER is to perform a number of further functions.
In March 2024, the FDA also published a document titled “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” which, as the title indicates, concerns the collaboration of FDA centers and offices on artificial intelligence and medical products. The document indicates that AI has the potential to revolutionize healthcare in a number of ways.
The FDA is committed to working with a broad range of stakeholders to develop a patient-centric regulatory approach, using stakeholder consultations to ensure appropriate quality and transparency of research. Emphasis is also placed on securing the data needed to train AI models. Moreover, the agency draws attention to international cooperation in the development of artificial intelligence: to ensure the consistency of AI systems and models, global harmonization of standards, guidelines, and best practices should be pursued so that AI is evaluated and used consistently. The FDA plans to introduce guidelines for evaluating algorithms, above all whether they accurately perform their assigned tasks. Issues of transparency, security, and resilience to cyber threats are also raised: the data used in the drug development process are essential to the proper course of research and must be strictly protected. The agency's role will also be to monitor the performance of AI tools to ensure their compliance with standards and their reliability, and to support projects that take into account fairness in access to these technologies. The entire document is concerned with promoting the responsible and ethical development of AI in healthcare.
The European Union has its own European Artificial Intelligence Office, established by the Commission Decision of 24 January 2024 establishing the European Artificial Intelligence Office. According to Article 1 of the Decision, the Office carries out the tasks specified in Article 3 for the implementation and enforcement of the future regulation laying down harmonised rules on artificial intelligence. That future regulation is, of course, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (OJ L, 2024/1689), commonly known as the AI Act. The European Union has thus created an AI supervisory authority similar to that of the United States, though a few months later; moreover, this authority deals not only with health care but with every area in which artificial intelligence can be used.

It should be emphasized that the AI Act does not yet apply in its entirety: under its Article 113, it applies, as a rule, only from August 2, 2026. As already indicated, Article 3 of the Decision defines the competences of the Office, which will perform its tasks once the AI Act applies. Most of these tasks concern the supervision of “general-purpose” AI models and systems. A general-purpose AI model is defined in Article 3, point 63 of the AI Act as an AI model, including one trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (…). A general-purpose AI system is defined in Article 3, point 66 of the AI Act as an AI system based on a general-purpose AI model, which can serve a variety of purposes and is suitable both for direct use and for integration into other AI systems. It is not clear, however, whether an AI model or system written for the purpose of creating new drugs will qualify as “general-purpose”.
The appropriate classification of an AI model or system matters for these considerations, because the AI Act also defines “high-risk” AI systems. If a given program is classified as high-risk, the pharmaceutical company offering it to its contractors will have to meet additional requirements. The principles for classifying high-risk AI systems are specified in Article 6 of the AI Act. Paragraph 2 of that article refers to Annex III, which explicitly lists high-risk AI systems, while paragraph 1 defines such systems differently, specifying two conditions that must be met jointly. The classification of a given system as high-risk is therefore not unambiguous; above all, the AI Act does not explicitly list AI systems used to create drugs as “high-risk”. Chapter III, Section 2 of the AI Act sets out the requirements for high-risk AI systems. Article 9, paragraph 1 of the AI Act requires a risk management system to be established, implemented, documented, and maintained, and paragraph 2 defines it: a risk management system is understood as a continuous, iterative process, planned and run throughout the entire life cycle of a high-risk AI system, requiring regular systematic review and updating.
It is worth noting that the American act contains no division into different types of AI models or systems. The European Union, by contrast, has introduced a legal act that regulates the use of artificial intelligence quite strictly; one can point, for example, to Article 5 of the AI Act, which directly defines prohibited practices in the field of artificial intelligence. The US President's executive order only indicates, in positive terms, what conditions a given artificial intelligence system must meet. Pharmaceutical companies will therefore likely be able to develop more dynamically in the United States than in the European Union, owing to the more liberal approach to AI. The order itself declares that the United States is to be the leader in the artificial intelligence market; to maintain this position, the federal government will support research on artificial intelligence in healthcare and give scientists easier access to databases and algorithms. Similar provisions are missing from European Union legislation. It should be remembered, however, that drugs cannot be created with artificial intelligence, as with other technologies, by trial and error: pharmaceutical companies bear a very great responsibility, because human health, and sometimes life, is at stake.
As emphasized above, the US executive order lacks adequate provisions on the protection of the personal data needed to train AI models; the President only called on Congress to create appropriate rules. In this respect the European Union seems one step ahead of the US, since Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ EU L 119, 4.5.2016, p. 1, as amended), hereinafter the GDPR, must apply to such data. A problem arises, however, as to whether providing an artificial intelligence system with the personal data of a specific natural person may be inconsistent with the provisions of the GDPR. It is indicated that the AI Act and the GDPR must complement each other, and that a person using artificial intelligence together with the personal data of a specific person should be familiar with both legal acts [4].

An AI system can operate on an open basis, meaning that the program automatically draws on resources found on the Internet, without user intervention. A closed system means that the user selects the data themselves and then enters it into the program; in that case, they have greater control over the data they use. In either case, any information about a patient will constitute personal data requiring protection. Article 4, point 1 of the GDPR defines personal data as any information relating to an identified or identifiable natural person (“data subject”); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name and surname, an identification number, location data, an online identifier, or one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. In the medical context this will typically be the patient's first and last name together with one of the listed factors. The processing of such data is defined in Article 4, point 2 of the GDPR: it means any operation or set of operations performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.
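To make the open/closed distinction more concrete, below is a minimal, illustrative Python sketch (all names and fields are hypothetical) of how the operator of a closed system might pseudonymize a patient record before entering it into an AI program. Note that pseudonymized data remain personal data under the GDPR as long as re-identification is possible, so this technique reduces risk but does not remove the regulation's application.

```python
import hashlib

# Fields treated here as direct identifiers (an illustrative, not exhaustive,
# selection of the identifiers listed in Article 4, point 1 of the GDPR).
DIRECT_IDENTIFIERS = {"first_name", "last_name", "national_id"}

def pseudonymize(record: dict, secret_salt: str) -> dict:
    """Replace direct identifiers with salted hash tokens before the record
    enters an AI pipeline. The result is pseudonymized, not anonymized:
    whoever holds the salt can re-identify the patient, so the GDPR
    still applies in full."""
    result = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            token = hashlib.sha256((secret_salt + str(value)).encode()).hexdigest()
            result[key] = token[:12]  # short token standing in for the identifier
        else:
            result[key] = value  # health data such as a diagnosis remain sensitive
    return result

patient = {"first_name": "Jan", "last_name": "Kowalski",
           "national_id": "85010112345", "diagnosis": "hypertension"}
print(pseudonymize(patient, secret_salt="example-salt"))
```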
Processing data in accordance with the law, i.e. in this case feeding it into an AI program, is possible on one of the bases listed in Article 6(1) of the GDPR. Point (a), the consent of the data subject, is the simplest and safest option for the user of an AI system. The user of artificial intelligence must also duly inform the data subject: this requirement follows from Article 13 of the GDPR, which specifies what information the controller must provide to the person from whom the data are collected. A very important provision of the regulation is the right to withdraw consent to the processing of personal data, set out in Article 7(3) of the GDPR. The provision settles the matter clearly: the data subject has the right to withdraw consent at any time; withdrawal of consent does not affect the lawfulness of processing carried out on the basis of consent before its withdrawal; the data subject is informed of this before giving consent; and withdrawing consent must be as easy as giving it [5]. The United States still has no comparable rules, despite a year having passed since the signing of the executive order. The work of Congress should be watched closely, in the hope that the American legislator will decide to introduce similar regulations.
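As a conceptual aid (not a compliance implementation), the hypothetical sketch below shows how a controller's system might record the consent required by Article 6(1)(a) of the GDPR and honor Article 7(3): withdrawal is a single operation, as easy as giving consent; new processing stops afterwards, while processing performed before withdrawal remains lawful.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Tracks one data subject's consent for one processing purpose,
    e.g. using medical records to train a drug-discovery model."""
    subject_id: str
    purpose: str
    given_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def give(self) -> None:
        self.given_at = datetime.now(timezone.utc)
        self.withdrawn_at = None

    def withdraw(self) -> None:
        # Article 7(3): withdrawal must be as easy as giving consent,
        # so this is a single call with no extra conditions. Processing
        # performed before this moment remains lawful.
        self.withdrawn_at = datetime.now(timezone.utc)

    def processing_allowed(self) -> bool:
        return self.given_at is not None and self.withdrawn_at is None

consent = ConsentRecord(subject_id="patient-001", purpose="AI model training")
consent.give()
assert consent.processing_allowed()       # data may be fed into the model
consent.withdraw()
assert not consent.processing_allowed()   # no new processing after withdrawal
```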
In this area, too, the American legislator has not specified what liability for errors caused by artificial intelligence looks like. The European legislator, by contrast, decided to introduce special provisions on non-contractual liability for the operation of artificial intelligence: a draft Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) has been proposed. The acronym ALID, derived from “Artificial Intelligence Litigation Directive”, is commonly used. It should be borne in mind that, as a directive, its provisions will have to be implemented into the legal systems of the Member States.
The draft repeatedly refers to the AI Act discussed earlier; for example, it uses the same definitions of an AI system and of a high-risk AI system. Article 1(2) of the ALID states that the directive applies only to civil, non-contractual, fault-based claims for damages; its provisions do not apply to criminal liability. This means that if a user of an AI system causes harm to a patient but the act cannot be attributed to their fault, liability cannot be established under the ALID. However, according to Article 5 of the ALID, within five years after the end of the transposition period the Commission is to submit to the European Parliament, the Council and the European Economic and Social Committee a report assessing the appropriateness of strict-liability rules for claims against operators of certain AI systems.
Artificial intelligence can make mistakes, especially when its operator uses bad data to train the model: once such data are entered, incorrect output will be generated, a phenomenon referred to as “AI hallucinations” [6]. One cause of hallucinations may be so-called overtraining (overfitting) of the model, i.e. a situation in which the program fits its training data too precisely and is then unable to generalize and adapt its results appropriately; a minimal numerical illustration of this effect is sketched after this discussion. Hallucinations may also be caused by an incorrect model architecture: if the program is badly designed from the start, its results cannot be correct. The person entering the data into the AI system will be responsible for such errors. According to Article 4(1) of the ALID, for the purposes of a claim for damages, national courts presume a causal link between the defendant's fault and the output produced by the artificial intelligence system, or the failure of such a system to produce an output, provided that certain conditions are cumulatively met.
The presumption of fault applies if the claimant proves that the defendant failed to comply with a duty of care laid down in national or EU law, and that this duty directly served to protect against the damage that occurred. If the claimant cannot prove this, the court may rely on the presumption of fault resulting from earlier findings.
The presumption of fault is rebuttable, which means the defendant can overturn it by presenting appropriate evidence, as Article 4(7) of the ALID expressly states. This solution is certainly more favorable to suppliers and users of artificial intelligence systems than strict liability [7]. In the context of drug development, the damage in question can be very dangerous to public health, so a precise formulation of these provisions is a good idea: it makes it easy to understand the risk of committing culpable errors.
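To make the overtraining (overfitting) mentioned above tangible, here is a minimal, self-contained numerical sketch in Python, unrelated to any real drug-discovery system: a polynomial of too high a degree reproduces ten noisy training points almost perfectly, yet predicts held-out points from the same underlying relationship far worse than a simpler model, i.e. it fails to generalize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points drawn from a simple quadratic relationship.
x_train = np.linspace(0.0, 1.0, 10)
y_train = x_train**2 + rng.normal(scale=0.05, size=x_train.shape)

# Held-out points from the same relationship, used to test generalization.
x_test = np.linspace(0.05, 0.95, 50)
y_test = x_test**2

for degree in (2, 9):
    coeffs = np.polyfit(x_train, y_train, deg=degree)   # fit the model
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: mean squared error on held-out points = {test_error:.5f}")

# Typical outcome: the degree-9 polynomial interpolates the noise in the
# training data and shows a much larger held-out error than the degree-2 fit.
```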
The executive order uses very general language and, in effect, only postulates how artificial intelligence should be treated in trade. Another important issue raised in this legal act is counteracting discrimination and social inequality. In the context of medicines, this may mean prohibiting practices that exclude a given social group from participation in research; in this way the American legislator can strive for equal access in the development of medicines using artificial intelligence. As indicated earlier, in the United States the FDA is responsible for authorizing new medicines for the market, and in its comprehensive assessment of a given substance it takes all elements of the medicine into account.
The AI Act, in contrast to the American act, specifies in Article 5 which AI practices are prohibited; the European legislator was guided here by moral considerations, as can be inferred directly from the text of the regulation. As for the introduction of new medicines to the market, the relevant rules can be found in Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use and repealing Directive 2001/20/EC, hereinafter Regulation 536/2014. The launch of a new medicine must be preceded by appropriate clinical trials. According to Article 4 of Regulation 536/2014, a clinical trial is subject to scientific and ethical review and requires authorization in accordance with that Regulation. The body responsible for the ethical review is, in accordance with the definition in Article 2, point 14 of Regulation 536/2014, an ethics committee, i.e. an independent body established in a Member State in accordance with the law of that Member State and empowered to give opinions for the purposes of the Regulation, taking into account the views of laypersons, in particular patients or patients' organisations. It is the ethics committee that will have to take the appropriate decision on whether a given medicine can be placed on the market.
The two acts are entirely different in character. The US President's executive order generally outlines the direction in which artificial intelligence should develop, whereas the EU legislator has laid down clearly defined prohibitions and a full set of rules governing artificial intelligence systems. The US act places far less emphasis on restricting the introduction of AI software to the market. The European Union also intends to adopt an act that will clearly define civil liability for damage caused by artificial intelligence; such proposals are lacking in the United States, where it can be assumed that the AI user will be liable on general principles. The Americans want to lead the artificial intelligence market, which is why their rules are highly liberal: the stated goal is the dynamic development of this technology, and the AI Council operating within CDER is to help create new pharmaceutical technologies using artificial intelligence. Thanks to such regulations, the United States will certainly overtake the European Union in the process of creating drugs with the help of AI.
Sources:
https://digital-strategy.ec.europa.eu/pl/policies/ai-office
https://panoptykon.org/stany-zjednoczone-reguluja-sztuczna-inteligencje-nie-sa-jedyne
https://digital-strategy.ec.europa.eu/pl/policies/regulatory-framework-ai
https://eur-lex.europa.eu/legal-content/PL/TXT/?uri=CELEX%3A52022PC0496&qid=1665410785599
[1] Newseria, Pharmaceutical giants invest in artificial intelligence. They want to develop drug candidates faster and cheaper, https://biznes.newseria.pl/news/giganci-farmaceutyczni,p379494844
[2] M. K. P. Jayatunga, M. Ayers, L. Bruens, D. Jayanth, C. Meier, How successful are AI-discovered drugs in clinical trials? A first analysis and emerging lessons, https://www.sciencedirect.com/science/article/pii/S135964462400134X?via%3Dihub#s0040
[3] M. Fraser, Biden signs an executive order regulating artificial intelligence. What does it contain?, https://cyberdefence24.pl/polityka-i-prawo/biden-podpisal-rozporzadzenie-regulujace-sztuczna-inteligencje-co-zawiera
[4] I. Stawicka, AI Act and GDPR relations will soon cause problems for companies, https://www.prawo.pl/biznes/relacja-ai-act-i-rodo-problemy-dla-firm,527105.html
[5] I. Stawicka, The use of AI in the creation of new drugs generates legal risks, https://www.prawo.pl/zdrowie/wykorzystanie-ai-w-branzy-lekowej-ryzyka,529783.html
[6] M. M. Kania, Factchecking and AI hallucinations, or verifying the results of working with AI, https://www.ifirma.pl/blog/factchecking-i-halucynacje-ai/
[7] R. Skibińska, Liability for artificial intelligence will be regulated, https://www.prawo.pl/biznes/odpowiedzialnosc-za-sztuczna-inteligencje-beda-przepisy,517822.html