Publication date: February 24, 2025
Regulation (EU) 2024/1689 (the AI Act) is the first regulation of artificial intelligence in EU law. It was adopted in response to concerns about the use of AI, including the risks of manipulation, surveillance and unequal treatment of citizens. The first provisions of the AI Act entered into force on 2 February 2025. They cover primarily AI literacy, prohibited practices and high-risk AI.
AI literacy can be defined as a set of competences that allow users to understand, critically evaluate, use and monitor AI systems in various contexts, from everyday life to work and education. Article 4 of the Regulation obliges entities providing and using AI to ensure an appropriate level of AI competence among their staff. The provision is deliberately general: it requires that employees' competences be appropriate to their technical knowledge, experience, education and training, and that they correspond to the context in which the AI will be used and to the groups of people with respect to whom it will be used.
It follows from this provision that the employer must train employees who use AI in a manner appropriate to the given context. It is therefore not possible to determine in advance which employee skills will suffice to meet the AI literacy obligations. It is, however, possible to identify certain key skills that will be helpful when working with AI. These include, among others:
a/ Understanding how AI works
– Basic knowledge of AI
– Types of AI
– How AI learns and makes decisions
b/ Ability to use AI
– Interacting with AI
– Understanding AI’s limitations
– AI management
c/ Critical analysis of AI activities
– Identifying AI errors and biases
– Analysing the credibility of AI outputs
– Assessing the quality of data used to train AI models
d/ Ensuring data privacy and security
– Knowing how AI collects and uses data, including personal data
– Preventing unlawful collection and processing of data
e/ Knowledge of AI regulations
– Knowledge of the AI Act
– Knowledge of internal procedures for working with AI
– Knowledge of mechanisms for controlling the use of AI
To help companies meet their AI literacy obligations, the European AI Office has published a Living Repository to foster learning and exchange on AI literacy. This is a continuously updated collection of practices used by companies that deploy AI in their operations. It is not an exhaustive list of approaches, but it can serve as a model for employers who want to ensure an appropriate level of training for their staff. It should be emphasised, however, that implementing even practices identical to those described in the repository does not guarantee that the employer will meet the requirements of Article 4, because, as noted above, the obligations under this provision depend on the context in which AI is used, the people it affects and the competences of the employees who work with it.
The Regulation also imposes information obligations on providers of general-purpose AI models towards their users. Article 53 requires providers to prepare and regularly update technical documentation of the model. This documentation should include, among other things:
– Description of the model, its architecture and permitted use
– Details about the training process, data sources and optimization methods used
– Information about energy consumption and potential risks associated with using the model.
Providers established outside the European Union must appoint an authorised representative established in a Member State. This must be done before the general-purpose AI model is placed on the Union market. The authorised representative acts on the provider’s behalf and is responsible for, among other things, the obligations related to the technical documentation.
Providers of general-purpose AI models with systemic risk, i.e. models with high-impact capabilities assessed on the basis of appropriate technical tools and methodologies, including indicators and benchmarks, as well as models designated as such by a decision of the European Commission, have additional obligations under Article 55 of the AI Act.
These include:
– Conducting security testing to identify and mitigate risks
– Assessing and reporting serious AI incidents to supervisory authorities
– Ensuring a high level of cybersecurity for their models.
Prohibited practices
Article 5 of the AI Act specifies prohibited practices related to the use and placing on the market of AI. The purpose of this provision is to protect fundamental rights against harmful applications of artificial intelligence. The provision contains a closed list of prohibited practices.
The first is the use of subliminal, manipulative or deceptive techniques. This prohibition applies where:
a/ the AI has the objective, or the effect, of influencing user behaviour;
b/ the AI causes the user to take a decision they would not otherwise have taken;
c/ the AI causes, or is reasonably likely to cause, harm to the user or another person.
An example of such a practice would be AI that manipulates consumer behaviour by making personalised offers that encourage overspending.
The second prohibited practice listed in the provision is the exploitation of a user’s vulnerability resulting from their age, disability, or social or economic situation. Such actions are prohibited if their purpose is to influence the user’s behaviour and they cause, or may cause, serious harm. An example is the use of algorithms that target advertisements for high-interest loans at people in financial difficulty.
It is also prohibited to use algorithms for social scoring, i.e. evaluating and classifying people on the basis of their behaviour, personal characteristics and personality traits, and for assessing the risk that a specific natural person will commit a crime. Social scoring is prohibited if it leads to discriminatory treatment. The latter prohibition does not apply to assessments based on objective facts directly linked to the criminal activity of the person concerned.
Further prohibitions concern the use of AI to create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage, and the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes. The latter prohibition is subject to exceptions, such as searching for missing persons, preventing terrorist attacks and prosecuting serious crimes. Biometric categorisation, i.e. the use of people’s biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, is also prohibited.
The final prohibited practice is the use of AI to infer the emotions of individuals in workplaces and educational institutions. This prohibition does not apply where the AI system is implemented for medical or safety reasons.
The European Commission has issued Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act). The guidelines explain the practical application of Article 5 of the AI Act and may be a useful resource for entities seeking to comply with the regulation.
High Risk AI Systems
The AI Act introduced the concept of a high-risk AI system into EU law. An AI system is classified as high-risk if it is part of a product covered by certain EU legislation and subject to a conformity assessment before being placed on the market or put into service, or if it is listed in Annex III to the Regulation, unless it does not pose a significant risk to health, safety or fundamental rights. High-risk AI includes, among others: certain biometric systems; systems managing critical infrastructure; systems making decisions on employment and educational assessment; and systems deciding on access to essential public and private services, managing migration and asylum, or serving the administration of justice. High-risk AI is generally subject to registration in the appropriate database.
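The two-pronged classification described above can be sketched as a simple decision function. This is a deliberately simplified illustration of the classification logic, not legal advice, and the parameter names are my own:

```python
def is_high_risk(part_of_regulated_product: bool,
                 listed_in_annex_iii: bool,
                 poses_significant_risk: bool = True) -> bool:
    """Simplified sketch of the high-risk classification rule.

    An AI system is high-risk if it is part of a product covered by
    the EU legislation referenced in the Act and subject to conformity
    assessment, or if it falls under an Annex III use case, unless the
    Annex III system does not pose a significant risk to health,
    safety or fundamental rights.
    """
    if part_of_regulated_product:
        return True
    return listed_in_annex_iii and poses_significant_risk

# An Annex III system that benefits from the "no significant risk"
# exception is not classified as high-risk:
print(is_high_risk(False, True, poses_significant_risk=False))  # False
```

The real assessment is of course far more nuanced; the sketch only shows that the two grounds for classification are independent of each other, while the risk exception applies to the Annex III ground alone.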
The primary obligation of a high-risk AI system provider is to establish a risk management system. This is a continuous process run throughout the lifecycle of the high-risk AI system and consists of:
– Identifying and analysing the risks that AI may pose to health, safety and fundamental rights;
– Estimating and evaluating the risks that may arise when the AI is used in accordance with its intended purpose or under reasonably foreseeable misuse;
– Adopting appropriate measures to counter the risks posed by AI.
The provider is obliged to test the AI appropriately in order to identify risks and to apply suitable risk management measures.
Providers of high-risk AI are also required to practise data governance, ensuring that the datasets used for training, validation and testing are relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of their intended purpose.
It is also necessary to prepare technical documentation for a high-risk AI system before it is placed on the market and to update it regularly. The technical documentation serves to demonstrate that the AI system complies with the requirements of the regulation. Its minimum content is included in Annex IV to the AI Act, although small and medium-sized enterprises can use a simplified documentation template.
High-risk AI must include a system for automatically recording events that pose a risk to health, safety or fundamental rights.
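As a minimal sketch, such automatic recording of risk-relevant events could take the form of an append-only, timestamped log. The file name and record schema below are illustrative assumptions, not requirements of the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative append-only event log in JSON Lines format.
logger = logging.getLogger("ai_event_log")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("ai_events.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def record_event(event_type: str, details: dict) -> None:
    """Append a timestamped, machine-readable record of an event
    relevant to health, safety or fundamental rights."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }))

record_event("anomalous_output", {"component": "classifier", "score": 0.97})
```

Timestamped, machine-readable records of this kind are what later allow an incident to be reconstructed and reported to the supervisory authorities.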
It is the provider’s responsibility to ensure that high-risk AI meets transparency standards. Transparency is meant to enable users to interpret the system’s output and use it appropriately. Such a system must be accompanied by instructions for use containing, among other things, the provider’s details, the system’s characteristics, the human oversight measures, the hardware resources needed to run the system, the system’s expected lifetime, and the necessary maintenance and care measures.
High-risk AI systems must be designed to be subject to human oversight to minimize risks to health, safety, and fundamental rights. The onus is on the provider to implement oversight measures. AI systems must be designed to meet standards of accuracy, robustness, and cybersecurity throughout their lifecycle.
Penalties
The AI Act provides for sanctions, in the form of administrative fines, for infringements of the Regulation. Their detailed regulation has been left to the Member States, so the exact enforcement procedures may vary depending on the location of the AI provider or user.
For breaches of the obligations arising from most provisions of the Regulation, the offender may face an administrative fine of up to EUR 15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. A stricter sanction applies to breaches of the provisions on prohibited practices: in that case the fine may reach EUR 35 million or 7% of turnover, again whichever is higher.
As a rule, fines are imposed by the authorities of the Member States under this delegation. Providers of general-purpose AI models, however, may also be fined by the European Commission, up to EUR 15 million or 3% of turnover.
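The "whichever is higher" rule can be illustrated with a short calculation; the turnover figures below are purely illustrative:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the AI Act fine ceiling: the higher of a fixed amount
    and a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Most infringements: up to EUR 15 million or 3% of turnover.
# For a company with EUR 1 billion turnover, 3% (EUR 30 million) is higher.
print(fine_ceiling(1_000_000_000, 15_000_000, 0.03))  # 30000000.0

# Prohibited practices: up to EUR 35 million or 7% of turnover.
# For a company with EUR 200 million turnover, the fixed EUR 35 million
# cap exceeds 7% of turnover (EUR 14 million).
print(fine_ceiling(200_000_000, 35_000_000, 0.07))  # 35000000.0
```

The percentage prong therefore bites only for large undertakings, while the fixed amount sets the floor of the ceiling for smaller ones.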
Summary
The AI Act is the first comprehensive regulation of AI at EU level. Key elements of the regulation include AI literacy, i.e. raising the competences of AI users; prohibited practices that may lead to violations of fundamental rights; and high-risk AI, which is subject to strict rules on registration, testing and oversight.
In addition to establishing strict rules for high-risk AI systems, the AI Act also introduces enforcement mechanisms and heavy financial penalties for violations of the regulations – especially for prohibited practices that may lead to discrimination, manipulation or unauthorised surveillance.
Sources
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), Text with EEA relevance (OJ L, 2024/1689, 12.7.2024).
Living Repository of AI Literacy Practices – v. 31/01/2025
AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology
Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act)