
Deciphering the Code of Drug Discovery: Application of Machine Learning in Designing Compound Libraries – KIELTYKA GLADKOWSKI TAKES PART IN MEETING WITH SELVITA AT LifeScience Kraków Cluster

Publication date: October 9, 2024


On Thursday, October 10, KIELTYKA GLADKOWSKI KG LEGAL will take part in a meeting at the LifeScience Kraków Cluster with SELVITA on “Deciphering the Code of Drug Discovery: Application of Machine Learning in Designing Compound Libraries”. The meeting will be hosted by the leading Polish biotechnology company Selvita, represented among others by a Senior Machine Learning Specialist responsible for the development of the proprietary TADAM model (Target-Aware Drug Activity Model). TADAM is a deep machine learning model that enables efficient high-throughput virtual screening. Compared with other solutions on the market, it is faster and more accurate in the analyses performed and achieves state-of-the-art results, enabling the creation of combinatorial library subsets targeted at a specific biological target. Screening such subsets significantly increases the probability of identifying the right active compounds, which is crucial for drug development. In addition, a separate model has been developed to predict the optimal conditions for amidation and Suzuki reactions. This facilitates the synthesis of the targeted libraries described above, selecting the catalysts, reagents and solvents that give optimal performance for the given substrates.

The developed solutions complement one another. Used together, they allow for a much more efficient and targeted approach to the design and synthesis of compound libraries, which translates into greater effectiveness in hitting the biological target. The developed model therefore represents significant progress in the early stage of drug discovery.
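To make the idea of target-aware virtual screening concrete, the toy sketch below ranks a small compound library by fingerprint similarity to a target profile and keeps the top-scoring subset. This is purely illustrative: the function names, the set-based “fingerprints” and the Tanimoto scoring are our own assumptions, not Selvita’s actual TADAM implementation, which is a proprietary deep learning model.

```python
# Hypothetical sketch of target-aware virtual screening: score candidate
# compounds against a target profile and keep the top-scoring subset.
# Set-based "fingerprints" and Tanimoto scoring are illustrative
# assumptions, not the TADAM model itself.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def screen_library(library: dict, target_fp: set, top_n: int = 2) -> list:
    """Rank compounds by similarity to the target profile; return top_n IDs."""
    scored = sorted(library.items(),
                    key=lambda kv: tanimoto(kv[1], target_fp),
                    reverse=True)
    return [cid for cid, _ in scored[:top_n]]

library = {
    "cpd_001": {1, 4, 7, 9, 11},
    "cpd_002": {2, 3, 8},
    "cpd_003": {1, 4, 9, 12},
}
target_fp = {1, 4, 9}  # hypothetical profile of the biological target

print(screen_library(library, target_fp))  # → ['cpd_003', 'cpd_001']
```

A real screening pipeline would replace the similarity score with a learned activity model, but the selection step — keeping only a targeted subset of a combinatorial library — works the same way.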

The model described above was created with co-financing from EU Funds under the Smart Growth Programme 2014-2020 for the project entitled “Creation of the ProBiAl platform for the production of targeted libraries of biologically active compounds using machine learning, integrating design, parallel synthesis and automatic purification using artificial intelligence, in order to accelerate the drug discovery process”. The entire undertaking raises many interesting issues, not only biotechnological but also legal. The use of machine-learning-based artificial intelligence in compound design comes to the fore here, but related questions arise as well, such as database protection.

There is currently no definition of an AI system in Polish law. Such a definition appeared at the level of European Union law in Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689; hereinafter: the AI Act). Although the regulation has already entered into force, under Article 113 of the AI Act its provisions will, as a general rule, apply only from 2 August 2026; selected provisions will apply at an earlier or later date, as also provided for in that article.

Art. 3(1) of the AI Act defines an AI system as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The AI Act does not define machine learning, but its terminology refers to it indirectly. For example, the concept of training data, defined in Art. 3(29) of the AI Act as “data used for training an AI system through fitting its learnable parameters”, refers directly to machine learning, which consists of training an AI model on such data.
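The statutory phrase “fitting its learnable parameters” can be illustrated with a deliberately minimal example, unrelated to any particular provision of the AI Act: a single weight fitted to training data by repeated gradient-descent updates. The data, model and learning rate below are toy assumptions chosen only to show what “training data adjusting parameters” means in practice.

```python
# Toy illustration of training data "fitting learnable parameters":
# one weight w in the model y = w * x, adjusted by gradient descent
# on (input, target) pairs. All values are illustrative assumptions.

training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: w = 2

w = 0.0    # the learnable parameter, before training
lr = 0.05  # learning rate

for _ in range(200):  # repeated adjustment is the "training"
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in training_data) / len(training_data)
    w -= lr * grad    # the data drives the parameter update

print(round(w, 3))  # prints 2.0
```

The legal point is that the model’s behaviour is not programmed directly; it is shaped by the data, which is why the AI Act attaches obligations to the quality of training data.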

The AI Act also introduces, in Article 3(8), the concept of an AI system operator, which covers many entities dealing with artificial intelligence. One of them is the provider, defined in Article 3(3) of the regulation as: “a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”. The operator also covers not only entities directly involved in developing the model or placing it on the market, but also any deployer. Under Article 3(4) of the AI Act, a deployer is: “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”. In the context of the discussed project, Selvita should therefore be considered the provider of the AI system, the TADAM model, and any entity that decides to use it professionally will be a deployer of that system. Every provider placing its artificial intelligence model on the market in the European Union is bound by the provisions of the AI Act, as follows from its Article 2(1)(a); point (b) of that paragraph in turn indicates that deployers established in the European Union are also subject to the AI Act. The concept of the operator is also of key importance for civil liability for torts resulting from the operation of artificial intelligence, discussed in more detail in the following paragraphs.

Ensuring regulatory compliance with the AI Act is a major challenge for companies using artificial intelligence in their projects. This is due to the comprehensive regulation of prohibited practices in Chapter II of the AI Act and of high-risk AI systems in Chapter III, which imposes additional obligations on those responsible for such systems, reflecting the greater risk involved in their use. In the case of the project at hand, there are practically no legal risks associated with practices prohibited under the AI Act. Nevertheless, when starting any project involving artificial intelligence, it is worth analyzing the legal risks in terms of compliance with this regulation. The transitional period of the AI Act is slowly coming to an end, and within the next few years the regulation will apply in its entirety, so its provisions should be taken into account when planning a long-term business strategy.

However, it should be borne in mind that the AI Act is not the only EU law that will regulate the use of artificial intelligence. In this context, the draft Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), proposed by the European Commission on 28 September 2022, reference number COM(2022) 496 final, 2022/0303 (COD), and commonly known by the acronym AILD, standing for “Artificial Intelligence Liability Directive” (hereinafter: the draft AILD directive), is of key importance. As the name suggests, this directive is intended to create a legal framework for litigation concerning artificial intelligence, including civil liability for torts involving its use.

It is not without reason that the European Commission decided to regulate this matter by means of a directive. A directive, as a source of EU law aimed at harmonizing, but not unifying, legal rules, is subject to implementation into the national law of each of the 27 Member States of the European Union. Therefore, if this directive is adopted, it should be expected that within a few years the Polish parliament will adopt an implementing act and possibly amend the Act of 23 April 1964, the Civil Code (Journal of Laws of 2024, item 1061, as amended; hereinafter: the Civil Code), setting out the principles of civil liability for damage caused by the use of AI systems.

In Art. 2, the draft AILD directive assumes consistency of its definitions with those of the AI Act. In addition, the wording of Art. 2(5) of the draft indicates that liability for damage caused by the use of artificial intelligence systems is to be based on fault. This is of particular importance for attributing liability to the defendant, as it will not be strict, risk-based liability arising merely from undertaking specific activities, such as that provided for persons running, on their own account, enterprises or establishments set in motion by natural forces under Art. 435 § 1 of the Civil Code. Under the assumptions of the draft AILD directive, it will each time be necessary to prove fault on the part of the defendant in the event causing the damage.

Importantly, Article 4(1) of the draft AILD directive provides for a presumption of a causal link (in Polish civil law doctrine referred to as an adequate causal link, defined in Article 361 § 1 of the Civil Code). The presumption is, however, subject to a number of conditions and is rebuttable pursuant to Article 4(7) of the draft.

What might an event causing damage and giving rise to liability look like when AI is used? Science and practice provide many examples. As regards machine learning, it should be noted at the outset that the use of incorrect training data will, with high probability, lead the AI system to incorrect conclusions. Moreover, even when trained on fully correct data sets, AI systems produce various types of errors, such as so-called hallucinations, for which the entity using the AI is not at fault. In the case at hand, where AI is used at an early stage of drug development, such system errors may have real-world consequences amounting to personal injury within the meaning of Art. 444 § 1 of the Civil Code, or even damage connected with the death of the injured party under Art. 446 of the Civil Code: the analyses carried out by the artificial intelligence system will influence the composition of pharmaceutical products, so errors in their development may affect the health of the people taking them.
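How an error in training data propagates into an erroneous output can be shown with a deliberately simple sketch: a one-nearest-neighbour classifier labelling compounds “active” or “inactive” by a single numeric feature. The data and labels are invented for illustration and have nothing to do with real drug data; the point is only that one corrupted training label is enough to flip the prediction for nearby inputs.

```python
# Illustrative sketch: corrupted training data becomes a model error.
# A 1-nearest-neighbour classifier labels compounds by one numeric
# feature; flipping a single training label flips nearby predictions.
# All values and labels are hypothetical.

def nearest_label(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

clean = [(0.1, "inactive"), (0.2, "inactive"), (0.8, "active"), (0.9, "active")]
corrupt = [(0.1, "inactive"), (0.2, "active"),  # one mislabelled point
           (0.8, "active"), (0.9, "active")]

print(nearest_label(clean, 0.25))    # prints "inactive" — correct
print(nearest_label(corrupt, 0.25))  # prints "active" — the data error surfaces
```

In a liability dispute, tracing the damage back to such a data defect (rather than to the deployer’s conduct) is exactly the kind of evidentiary problem the draft AILD directive’s presumptions are meant to address.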

At present, Polish law contains no specific regulation of claims for damage caused by the use of artificial intelligence, which does not mean that the general rule of Article 415 of the Civil Code would not apply in such a case. Nevertheless, given the specificity of this sector, there is a real need for more detailed rules on liability for damage caused by the use of AI. With the legislative process on the AILD directive ongoing, this regulatory gap is likely to last only a few more years, which will affect the legal situation of entities using AI.

In addition, the principles of legal protection of the developed solutions related to the use of AI must also be taken into account. This is a complex issue from the perspective of industrial property law. Computer programs as such cannot currently be patented under Polish law, which follows directly from Art. 28(1)(5) of the Act of 30 June 2000 – Industrial Property Law (consolidated text: Journal of Laws of 2023, item 1170; hereinafter: the Industrial Property Law). This effectively blocks the patenting of an artificial intelligence algorithm understood as a specific type of computer program. Moreover, if one attempted to describe its method of operation by a mathematical formula, that too would be excluded from patenting under Art. 28(1)(3) of the Industrial Property Law. This remains a complex and problematic area.

Regardless of this, a much less problematic model of protecting the developed solutions is protection as a trade secret within the meaning of Art. 11(2) of the Act of 16 April 1993 on Combating Unfair Competition (consolidated text: Journal of Laws of 2022, item 1233; hereinafter: the Act on Combating Unfair Competition). Under the statutory definition, a trade secret is: “technical, technological, organizational information of an enterprise or other information of economic value, which as a whole or in a specific combination and set of their elements is not generally known to persons usually dealing with this type of information or is not easily accessible to such persons, provided that the person authorized to use or dispose of the information has taken, with due diligence, steps to keep it confidential”. If an AI system, appropriately trained through machine learning, is considered technical or technological information, it fits this definition well; a trade secret so understood enjoys legal protection under the Act on Combating Unfair Competition. There is as yet no case law or doctrine confirming or rejecting such an interpretation, but it can safely be assumed that the position will crystallize within the next few years, given the growing importance of AI for business activity.

Moving away from the issues of artificial intelligence, it is also worth noting that the discussed project involves developing extremely detailed databases on which the AI system operates. Databases themselves enjoy legal protection both under intellectual property law and under criminal law. As to the former, the Act of 27 July 2001 on the protection of databases (Journal of Laws of 2021, item 386, as amended) will apply to the libraries created in the project; in the event of an infringement of the rights to these databases, e.g. making unauthorized changes to them, it is possible to demand cessation of the infringements in a civil action based on Art. 11(1)(1) of that Act. As to criminal law protection of databases, the Act of 6 June 1997, the Penal Code (Journal of Laws of 2024, item 17, as amended; hereinafter: the Penal Code), establishes the offence of interfering with computer data. The basic type of this offence is set out in Art. 268a § 1 of the Penal Code, and the aggravated type, qualified by causing significant property damage, in § 2 of the same article. Importantly, under Art. 268a § 3 of the Penal Code, these offences are prosecuted upon a motion of the injured party.

Maintaining the integrity of the databases on which the AI system operates is crucial for its proper functioning. Breaches of that integrity can have dramatic consequences, especially since in the case discussed the analyses are prepared for the needs of pharmacy and medicine and thus have a real impact on the health and lives of hundreds of thousands of users of pharmaceutical products. While the importance of databases in a modern, information-driven economy is invaluable in general, here it is greater still.

As indicated, the project developed by Selvita is not only fascinating from the perspective of biotechnological solutions and their translation into the pharmaceutical industry, but also due to the legal background of the modern technologies used. It must be borne in mind that technologies and processes related to their use do not operate in a vacuum – they are covered by the scope of many legal regulations.

Finally, it is worth adding that the meeting is one of many initiatives organized within the LifeScience Kraków Cluster. The cluster is a foundation established on January 30, 2013, whose mission is to create a network of cooperation between partners in the life science sector: companies operating in the sector; government and local government bodies supporting the development of modern technologies; scientific institutions representing the academic life science community; business-environment entities providing specialist services, such as law firms and consulting companies; and institutions using the developed technologies, such as hospitals and clinics. In addition, the cluster supports entrepreneurship, innovation and the commercialization of research results, combines and develops the resources and competences of its partners, and supports them in achieving development goals in products, services and technologies for health and quality of life. The cluster operates at three levels: regional, national and global.

Since 21 October 2016, the LifeScience Kraków Cluster has been included in the elite list of National Key Clusters developed by the Ministry of Entrepreneurship and Technology in cooperation with the Polish Agency for Enterprise Development. Entities that have obtained the status of a National Key Cluster are of great strategic importance for the economic and social development of the country. Hence, the entry on this list of National Key Clusters is not only a specific indicator of the quality of the implemented projects, but also their real market significance.

One of the partners of the LifeScience Kraków Cluster is our law firm, which has been providing specialist legal advice services to business entities for years, including those operating in the life science sector. Our many years of experience and practical knowledge of the issues faced by entrepreneurs allow us to provide precise legal advice that meets the expectations of our clients.

Bibliography:

  1. Legal acts

Draft Directive of the European Parliament and of the Council on the adaptation of the rules on non-contractual civil liability to artificial intelligence (AI Liability Directive) proposed by the European Commission on 28 September 2022, reference number: COM(2022) 496 final, 2022/0303 (COD);

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689);

The Act of 23 April 1964 – the Civil Code (consolidated text: Journal of Laws of 2024, item 1061, as amended);

Act of 16 April 1993 on Combating Unfair Competition (consolidated text: Journal of Laws of 2022, item 1233);

Act of 30 June 2000 – Industrial Property Law (consolidated text: Journal of Laws of 2023, item 1170);

Act of 27 July 2001 on the protection of databases (consolidated text: Journal of Laws of 2021, item 386, as amended).

  2. Event organizer materials

https://lifescience.pl/wydarzenie-klastra/sniadania-vip/rozkrypowanie-kodu-odkrytania-lekow/ (access: 09/10/2024)

https://lifescience.pl/o-klastrze-lifescience/krajowy-klaster-zdrowie/
(access: 09/10/2024)

https://lifescience.pl/fundacja/misja-fundacji-klaster-lifescience-krakow/
(access: 09/10/2024)

https://lifescience.pl/czlonkowie-klastra/ (access: 09/10/2024)

  3. Other Internet sources

https://www.poir.gov.pl/strony/o-programie/projekty/lista-beneficiary/
(access: 09/10/2024)
