Following its approval by the Council on May 21, 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, laying down harmonised rules on artificial intelligence (the AI Regulation), was published in the Official Journal of the European Union on July 12, 2024.
As the most ambitious regulation approved to date on this subject worldwide, the Artificial Intelligence Regulation aims to become a global regulatory standard, which will condition the economic and social development of the coming years.
Following its publication in the Official Journal of the EU on July 12, 2024, the AI Regulation will enter into force on August 1, 2024 and will apply progressively.
The general rule is that the AI Regulation will apply 24 months after its entry into force, i.e. from August 2, 2026.
However, some of its provisions are applicable at different deadlines:
- Prohibitions on certain AI-related practices will be applicable as of February 2, 2025.
- The provisions on notified bodies, general-purpose AI models with systemic risk, the AI governance system in Europe and a large part of the penalties will be applicable as of August 2, 2025.
- The rules on certain high-risk AI systems will be applicable as of August 2, 2027 (those that are safety components of certain products, or that are themselves such products, where the product requires a conformity assessment before being placed on the market or put into service, such as machinery, toys, lifts or medical devices).
The AI Regulation approaches artificial intelligence from the point of view of the risks it may pose to individuals, and adapts the type and content of its rules to the intensity and scope of the risks posed by the AI systems to which it applies:
a. Prohibited AI practices: The AI Regulation considers that certain AI practices pose an unacceptable risk to the EU and, in particular, prohibits the following practices:
- Subliminal, manipulative or deceptive techniques
- Exploitation of people’s vulnerabilities
- Social scoring of persons
- Prediction of potential criminality
- Untargeted scraping of images to create facial recognition databases
- Inference of emotions in educational or work settings
- Biometric categorization for inferring special category data
- Real-time remote biometric identification in public places for law enforcement purposes
b. High-risk AI systems: AI systems classified as high risk are subject to enhanced obligations. The AI Regulation classifies as high risk:
- Regulated products: AI systems intended to be used as a safety component of certain regulated products, or which are themselves such regulated products, as identified in Annex I of the AI Regulation.
- Areas of special relevance or sensitivity: uses identified as of special relevance or sensitivity in Annex III of the AI Regulation, which includes certain uses of AI in areas such as biometrics, education and the workplace, financial services, critical infrastructure, access to essential services, etc.
The AI Regulation establishes the following requirements for AI systems considered high risk:
- Establishment of a risk management system for the identification and analysis of known and reasonably foreseeable risks, as well as for the adoption of appropriate measures to address such risks.
- Subjection to data governance and management practices that include fit-for-purpose practices for training, validation and test data sets.
- Preparation of technical documentation before the system is placed on the market or put into service, which must be kept up to date and drawn up in such a way as to demonstrate that the AI system complies with the requirements of the AI Regulation.
- Record keeping with measures that technically allow automatic recording of events (log files) throughout the life of the AI system.
- Provision of information to deployers in a transparent manner so that they can correctly use and interpret the output of the AI system.
- Inclusion of human oversight, in order to prevent or reduce risks that may arise in the use of the AI system, by providing the AI system with appropriate human-machine interface tools.
- Development with appropriate levels of accuracy, robustness and cybersecurity and operating consistently in these respects throughout their lifecycle (e.g., by establishing technical and organizational safeguards against errors, failures, biases or security breaches).
c. Certain AI systems: The AI Regulation also establishes transparency obligations for providers and deployers of certain AI systems, which must be fulfilled at the first interaction or exposure:
- AI systems intended to interact with people (e.g., chatbots or virtual assistants): Providers and those responsible for deployment should ensure that persons exposed to the use of the AI system are informed that they are interacting with an AI system, unless this is obvious taking into account the circumstances and context of use.
- AI systems that generate synthetic audio, video or text content: Providers must ensure that outputs are marked in a machine-readable format and are detectable as having been artificially generated or manipulated.
- AI systems that generate synthetic audio, video or text content: Deployers must disclose that content constituting a deep fake has been artificially generated or manipulated.
d. General-purpose AI models: A general-purpose AI model is classified as a general-purpose AI model with systemic risk if it has high-impact capabilities. A model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training, measured in floating-point operations (FLOPs), is greater than 10^25.
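To give a sense of scale, the 10^25 FLOP presumption can be sketched numerically. The function names below are purely illustrative, and the ~6 × parameters × training tokens approximation is a common rule of thumb for estimating dense-model training compute; it is an assumption of this sketch, not something contained in the AI Regulation itself.

```python
# Illustrative sketch only: comparing an estimated training-compute budget
# against the 10^25 FLOP presumption threshold for systemic risk.
# The "6 FLOPs per parameter per training token" estimate is a common
# rule of thumb, assumed here for illustration; it is not part of the Regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set by the AI Regulation


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute: ~6 * params * tokens."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated cumulative training compute exceeds 10^25 FLOPs."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
```

Under this rough estimate, the hypothetical 70B-parameter model lands at about 6.3 × 10^24 FLOPs, just below the presumption threshold; substantially larger models or training runs would cross it.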
The AI Regulation establishes various obligations for providers of general-purpose AI models:
- prepare and keep updated the technical documentation of the general-purpose AI model required by the AI Regulation;
- make information available to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems;
- perform model evaluations using protocols and tools that reflect the state of the art, and assess and mitigate potential systemic risks that may arise from the development of the model;
- monitor, document and report to the AI Office or the competent national authorities information on serious incidents, and ensure that an adequate level of cybersecurity protection is in place.
Finally, in order to ensure the effectiveness of and compliance with the obligations contained in the AI Regulation, Member States are empowered to create one or more supervisory authorities.
In Spain, the Spanish Agency for the Supervision of Artificial Intelligence has been created, which will be the main supervisory authority at national level and has the power to impose sanctions. In this regard, it should be noted that the AI Regulation establishes a severe sanctioning regime, with fines, in the most serious cases, of up to 35 million euros or 7% of total annual worldwide turnover, whichever is higher.
In view of the above, and given the complexity of the obligations and other provisions contained in the AI Regulation, companies should start analysing its impact on AI systems already implemented or in the implementation phase.
This publication does not constitute legal advice.
_____
How can we help you from LAW4DIGITAL?
At LAW4DIGITAL we are lawyers specialized in digital businesses. We provide comprehensive legal advice to digital companies. We help you with online legal advice.
We will keep you updated on digital business. In any case, you can contact us by email at hola@law4digital.com, by calling (+34) 931 444 820, or by filling out our form at law4digital.com.
We are waiting for you in the next post!
Law4Digital team.