The EU AI Act has been agreed upon by the European Parliament and the Council of the European Union, marking a pivotal moment in the governance of Artificial Intelligence (AI) within Europe.
The EU AI Act serves as a cornerstone in shaping the future of AI, ensuring that its development and deployment align with the core values of safety, ethics, and transparency across the European Union. Learn more about it below 👇
🚨 Update: Publication in the Official Journal
On July 12, 2024, the European AI Act was published in the Official Journal of the European Union. The Act entered into force on August 1, 2024, and its provisions apply in stages, with most becoming applicable on August 2, 2026. Key dates to note include:
February 2, 2025: Ban on AI systems deemed to pose an unacceptable risk (e.g., social scoring, biometric categorization based on sensitive characteristics, untargeted scraping of facial images to build facial recognition databases, and emotion recognition in the workplace and education).
August 2, 2025: Provisions regulating general-purpose AI (GPAI) models take effect.
August 2, 2026: The Act becomes generally applicable, including to high-risk AI systems designated in Annex III, such as those used in recruitment, worker management, biometrics, and access to essential services.
August 2, 2027: Applicability to high-risk systems categorized under Annex I, including medical devices, machinery, radio equipment, toys, and motor and agricultural vehicles.
Additionally, the European Commission has established the European AI Office to be the center of AI expertise across the EU. It will play a key role in implementing the AI Act, fostering the development and use of trustworthy AI, and promoting international cooperation.
The road to the EU AI Act has been marked by significant milestones in the world of AI. The journey began as AI technologies started permeating every aspect of our lives, from healthcare to transportation. The EU’s response to these advancements initially took the form of guidelines and recommendations, but the growing influence of AI called for more robust governance.
The AI Act is a response to this need, emerging from a background of thoughtful deliberation and previous directives that sought to balance innovation with ethical considerations.
The EU’s new artificial intelligence regulation, known as the AI Act, is a groundbreaking law that sets rules for the use and development of AI across Europe. Its main goal is to ensure AI systems are safe and respect fundamental rights like privacy and non-discrimination. This law is significant because it is one of the first comprehensive attempts to regulate AI on such a large scale.
As noted above, the European Commission has established the AI Office within the Commission. The AI Office aims to enable the future development, deployment, and use of AI in a way that fosters societal and economic benefits and innovation while mitigating risks. The Office will play a key role in implementing the AI Act, especially in relation to general-purpose AI models. It will also foster research and innovation in trustworthy AI and position the EU as a leader in international discussions.
The AI Act’s strictest rules apply to “high-risk” AI systems: AI applications used in critical areas such as healthcare, education, law enforcement, and other public services.
The Act sets strict requirements for these systems, such as risk-mitigation measures and human oversight. By contrast, AI uses considered low risk, such as spam filters or AI in non-critical domains, face little or no additional regulation.
The guidelines under the EU AI Act focus on transparency, ethical use, and fundamental rights.
Under the Act, AI systems must:
Be transparent: companies must inform people when they are interacting with AI (such as chatbots).
Label AI-generated content, such as deepfakes.
Assess how their AI affects people’s fundamental rights, especially in essential services like banking and insurance.
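To make the disclosure and labeling duties concrete, here is a minimal, hypothetical sketch of how a service might bundle AI output with the required notices. The field names and wording are illustrative assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    """AI-generated content bundled with user-facing transparency
    disclosures. Field names and wording are illustrative, not mandated."""
    content: str
    ai_generated: bool = True  # machine-readable label for downstream systems
    disclosure: str = "You are interacting with an AI system."  # user-facing notice

def respond(user_message: str) -> AIOutput:
    # Hypothetical model call; a real service would invoke an inference backend here.
    reply = f"Echo: {user_message}"
    return AIOutput(content=reply)

msg = respond("What does the AI Act require?")
print(msg.disclosure)
print(msg.content)
```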
While there is a cohesive effort at the EU level to regulate AI, individual member states have also been formulating their own strategies, reflecting their unique priorities and contexts.
See how different EU countries have been handling artificial intelligence here →
The EU AI Act represents a major legislative move, establishing comprehensive guidelines for AI usage across member states. Its primary goal is to make AI systems safe, safeguard fundamental rights, and promote trustworthy AI development.
Businesses operating in the EU must adhere to these regulations, involving rigorous assessment procedures for high-risk AI systems. This includes ensuring data quality, transparency, and oversight mechanisms. See here for more on the European AI strategy.
AI systems that are categorized as posing an unacceptable risk will be prohibited under the EU AI Act. These systems are deemed hazardous to individuals and include:
Social scoring systems that classify people based on behavior, socio-economic status, or personal characteristics.
Biometric categorization systems that use sensitive characteristics (e.g., political or religious beliefs, sexual orientation, race).
Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Emotion recognition in the workplace and educational institutions.
AI that manipulates human behavior to circumvent free will or exploits people’s vulnerabilities.
Real-time remote biometric identification in publicly accessible spaces.
However, the Act makes provisions for certain exceptions, primarily for law enforcement purposes. Real-time remote biometric identification may be used in a narrowly defined set of serious cases. Additionally, post-event remote biometric identification, which analyzes footage after the fact, is permitted for investigating serious criminal offenses, but only with prior judicial authorization.
These regulations are part of the EU’s effort to balance the advancement of AI technology with the protection of individual rights and safety.
AI systems that are determined to have a potentially negative impact on safety or fundamental rights are categorized as high risk under the EU AI Act. These high-risk AI systems are subdivided into two distinct groups:
1. AI systems used as safety components of products, or as products themselves, covered by the EU product-safety legislation listed in Annex I (e.g., toys, machinery, medical devices).
2. AI systems deployed in the specific areas listed in Annex III (e.g., biometrics, education, employment, access to essential services, law enforcement), which must be registered in an EU database.
Every AI system classified as high risk will undergo a thorough evaluation process before being allowed on the market. Furthermore, their performance and compliance with regulations will be continually monitored throughout their operational life.
This structured approach towards high-risk AI systems is part of the EU’s broader strategy to ensure that AI development and deployment are conducted in a manner that is safe and respects the rights and freedoms of individuals. For a more comprehensive understanding of these classifications and regulations, it’s advisable to refer to official EU documentation or legal analyses on the subject.
👀 See How to Comply for high-risk AI systems under the EU AI Act →
AI systems classified as having limited risk are required to adhere to basic transparency measures. These measures are designed to ensure users can recognize when they are interacting with AI and make informed choices about their continued use of these applications.
In particular, these obligations cover AI-generated or manipulated content like images, audio, or video, such as those created by deepfake technology. The goal is to foster an environment where users are aware of AI involvement, allowing them to make more conscious decisions regarding their engagement with these technologies.
Applications like spam filters and video games are considered to have minimal risk. Therefore, they are not subjected to additional regulatory oversight.
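Taken together, the tiers above form a simple taxonomy: prohibited, high risk, limited risk, minimal risk. The sketch below expresses that structure as a small Python mapping; the example systems and their tier assignments are drawn from the categories discussed in this article and are purely illustrative, not an official classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, as described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, human oversight, monitoring"
    LIMITED = "transparency obligations (disclose AI interaction, label content)"
    MINIMAL = "no additional regulatory oversight"

# Illustrative examples taken from the categories discussed in this article.
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```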
In the context of the EU AI Act, both general-purpose and generative AI systems, including platforms like ChatGPT, are subject to specific transparency obligations. These requirements include:
Disclosing that content was generated by AI.
Designing models to prevent them from generating illegal content.
Publishing summaries of the copyrighted data used for training.
Moreover, more advanced, high-impact models, such as GPT-4, are required to undergo thorough evaluations, and any serious incidents involving these systems must be reported to the European Commission. This is part of the broader effort to monitor and regulate AI systems that could pose systemic risks.
These measures are in place to ensure transparency and accountability in the use of AI, particularly in instances where these technologies have a wide-reaching impact or pose potential risks. For further details on these regulations and their implications, it’s recommended to review the official documentation or authoritative sources on the EU AI Act.
The EU AI Act establishes a comprehensive set of compliance measures for AI systems deemed high-risk, covering various stages from design and implementation to post-market introduction. These regulations encompass:
A risk-management system maintained across the system’s lifecycle.
Data governance measures ensuring high-quality training, validation, and testing data.
Technical documentation and automatic record-keeping (see the logging sketch below).
Transparency and the provision of information to deployers.
Human oversight measures.
Appropriate levels of accuracy, robustness, and cybersecurity.
A conformity assessment before the system is placed on the market, followed by post-market monitoring.
While AI systems identified as having limited risk are not subjected to these stringent compliance checks, such as conformity assessments or product safety reviews, they must still meet the transparency obligations described earlier.
These regulatory requirements are integral to ensuring that high-risk AI systems operate safely, ethically, and transparently, aligning with the broader objectives of the EU AI Act to safeguard user rights and public safety. For a deeper understanding of these compliance requirements, it’s advisable to consult the official text of the EU AI Act or related legal resources.
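One of the more concrete obligations listed above, record-keeping, is easy to picture in code. The snippet below is a minimal, hypothetical logging sketch for a high-risk system; the event schema and field names are assumptions made for illustration, not an official format or a compliance guarantee.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Hypothetical audit logger for a high-risk AI system. The AI Act requires
# automatic recording of events over the system's lifetime; this JSON-lines
# schema is an illustrative assumption, not a prescribed format.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def log_decision(system_id: str, input_summary: str,
                 output: str, reviewer: Optional[str]) -> None:
    """Record one automated decision, including whether a human reviewed it."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,
        "output": output,
        "human_reviewer": reviewer,  # human-oversight trail
    }))

# Hypothetical usage: a CV-screening tool records a reviewed decision.
log_decision("cv-screener-v2", "candidate 1042 CV", "shortlisted", reviewer="hr_officer_7")
```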
With publication in the Official Journal, the implementation timeline is now fixed: the Act entered into force on August 1, 2024, and becomes generally applicable on August 2, 2026, with certain provisions applying earlier or later, as outlined above. This staggered timeline gives businesses and organizations time to understand and adapt to the new regulations.
Businesses, especially those in high-risk sectors, should start assessing their AI systems and processes now to ensure a smooth transition to compliance. Ongoing updates and guidance from EU regulatory bodies, including the new AI Office, are expected to assist in this preparatory phase.
The EU AI Act’s regulations cast a wide net, encompassing all entities involved in providing AI services within the EU market. This includes not only AI developers and providers based within the EU, but also those situated outside the EU, provided their AI systems are used in the EU market.
This global reach is significant as it implies that any business, regardless of its location, must comply with the Act if its AI services impact EU citizens or operations in EU countries. For multinational corporations, this means adherence to the Act’s standards even if their headquarters are outside the EU. Startups and smaller companies, particularly those aiming to enter the EU market, must also be mindful of these regulations and integrate compliance into their development and deployment strategies.
Entities found non-compliant with the EU AI Act regulations will face substantial financial penalties, reflecting the seriousness with which the EU regards AI governance.
These fines are structured to be proportionate to the size and turnover of the entity, ensuring that penalties are significant but fair. For minor infringements, fines can reach €7.5 million or 1.5% of global annual turnover, which can still represent a significant financial burden for many companies.
In cases of more serious breaches, the fines can escalate to €35 million or up to 7% of global annual turnover, whichever is higher, underscoring the potential financial risks of non-compliance. Beyond financial penalties, non-compliance could also lead to reputational damage, loss of consumer trust, and potential legal challenges. It’s crucial for entities to understand the full scope of these consequences and establish robust compliance mechanisms.
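As a rough illustration of how these ceilings interact, here is a minimal arithmetic sketch assuming the higher-of-the-two reading described above; the company turnover figure is hypothetical.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Maximum possible fine: the fixed ceiling or the turnover-based
    ceiling, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

turnover = 2_000_000_000  # hypothetical global annual turnover in euros

# Serious breach (e.g., prohibited AI practices): EUR 35M or 7% of turnover.
print(f"Serious breach ceiling:     EUR {max_fine(turnover, 35_000_000, 0.07):,.0f}")  # EUR 140,000,000
# Minor infringement: EUR 7.5M or 1.5% of turnover.
print(f"Minor infringement ceiling: EUR {max_fine(turnover, 7_500_000, 0.015):,.0f}")  # EUR 30,000,000
```

For this hypothetical company, the turnover-based ceiling dominates in both cases: 7% of €2 billion is €140 million, well above the €35 million floor.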
For a more detailed understanding of the AI Act and its implications, refer to the EU Commission’s comprehensive Q&A: EU Commission Q&A.
The EU AI Act is a significant step towards regulating AI in a manner that balances innovation with ethical considerations. Its impact extends beyond Europe, setting a global precedent for how AI can be governed responsibly. As this field continues to evolve, it is crucial to keep the dialogue open and engage various stakeholders in shaping the future of AI governance.