After three intense days of negotiations, the Commission welcomes the political agreement reached between the European Parliament and the Council on the Artificial Intelligence Act (AI Act), proposed by the Commission in April 2021. This provisional agreement marks a milestone in the regulation of AI globally. The landmark proposal aims to establish harmonized rules to ensure the safety of AI systems placed on the European market and respect for fundamental rights.
In the words of the President of the European Commission, Ursula von der Leyen, “today’s agreement focuses regulation on identifiable risks, provides legal certainty and paves the way for innovation in trusted AI. By ensuring the safety and fundamental rights of individuals and businesses, the Act will support the people-centric, transparent and accountable development, deployment and adoption of AI in the EU”.
The AI Act, which contains new rules applicable uniformly across all Member States, takes a risk-based approach to regulating AI: the rules become stricter as the potential for harm to society increases. Systems are classified into four categories, minimal risk, high risk, unacceptable risk and specific transparency risk, with specific requirements for each:
Minimal risk: Most artificial intelligence systems fall into this category. Minimal-risk applications, such as recommender systems and AI-based spam filters, will be exempt from obligations, as they pose little or no risk to citizens’ rights or safety. Companies may nevertheless voluntarily commit to additional codes of conduct for these systems.
High risk: Artificial intelligence systems identified as high risk must meet rigorous requirements, including risk-mitigation measures, high-quality data sets, activity logging, detailed documentation, clear information for users, human oversight, and high standards of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.
Unacceptable risk: AI systems that pose a clear threat to the fundamental rights of individuals shall be prohibited. This includes systems that manipulate human behavior to circumvent free will, such as toys that encourage dangerous behavior in children, government or corporate “social scoring” systems, and certain predictive policing applications. In addition, some uses of biometric systems, such as emotion recognition in the workplace and real-time remote biometric identification for law enforcement purposes in public spaces, are prohibited, with limited exceptions.
Specific transparency risk: When using AI systems such as chatbots, users must be made aware that they are interacting with a machine. Deepfakes and other AI-generated content must be labeled as such, and users must be informed when biometric categorization or emotion recognition systems are used. In addition, providers must design their systems so that synthetic content is marked in a machine-readable format and is detectable as artificially generated or manipulated.
Penalties
To ensure proper compliance, companies that fail to adhere to the rules will face fines for violations of the AI Act. Fines will range from €35 million or 7% of global annual turnover (whichever is higher) for violations involving prohibited AI applications, to €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for the supply of incorrect information. More proportionate caps on administrative fines are provided for SMEs and start-ups.
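Purely by way of illustration (this sketch is not part of the Act or the press release, and the function name and tier labels are our own), the snippet below shows how the “whichever is higher” rule determines the maximum possible fine for each tier:

```python
def fine_ceiling(violation_tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine (in euros) for a given violation tier.

    Tiers reflect the provisional agreement as summarized above;
    the final legal text governs in case of any discrepancy.
    """
    tiers = {
        "prohibited_ai_practices": (35_000_000, 0.07),   # €35M or 7% of turnover
        "other_obligations":       (15_000_000, 0.03),   # €15M or 3% of turnover
        "incorrect_information":   (7_500_000, 0.015),   # €7.5M or 1.5% of turnover
    }
    fixed_cap, turnover_share = tiers[violation_tier]
    # The ceiling is whichever is higher: the fixed amount or the turnover share.
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover that violates the
# prohibitions faces a ceiling of max(€35M, 7% of €2B) = €140M.
print(fine_ceiling("prohibited_ai_practices", 2_000_000_000))  # 140000000.0
```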
Entry into force and next steps
The provisional agreement establishes that the AI Act will apply two years after its entry into force, with some exceptions: the prohibitions will apply after 6 months, and the rules on general-purpose AI will apply after 12 months.
To bridge the period before the Act becomes generally applicable, the Commission will launch an AI Pact, inviting AI developers from Europe and around the world to voluntarily commit to meeting key obligations ahead of the legal deadlines. In addition, the European Union will continue to work in international fora such as the G7, OECD, Council of Europe, G20 and the United Nations to promote rules for trustworthy AI.
It is expected that, following the provisional agreement, work will continue to finalize the details of the new regulation, which will then be submitted to the Member States’ representatives for formal approval.
The European Union’s Artificial Intelligence Act represents a crucial step towards the global regulation of AI, establishing rules that seek to ensure safety, respect for fundamental rights and the promotion of innovation in this field. Its implementation could set a precedent for future regulations worldwide, consolidating European leadership in the formulation of technology policies.
Source: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473
This publication does not constitute legal advice.
_____
How can LAW4DIGITAL help you?
At LAW4DIGITAL we are lawyers specialized in digital business. We provide comprehensive online legal advice to digital companies.
We will keep you updated about digital business. In any case, you can contact us by sending an email to hola@law4digital.com, calling (+34) 931 444 820 or filling out our contact form at law4digital.com.
We look forward to seeing you in the next post!
Law4Digital Team.