By Bart Layton, CEO - AI Guardian
Fact Checked by Robin Hackney
The EU Artificial Intelligence Act is a significant piece of legislation by the European Union aimed at regulating AI technologies and ensuring businesses use AI responsibly.
We’re breaking down the EU Artificial Intelligence Act for you, highlighting the key dates and developments, and examining its impact on different sectors.
Key Takeaways
The Act classifies AI systems into four risk categories: unacceptable, high risk, limited risk, and minimal or no risk. Each category has its own set of rules to ensure these systems are safe and trustworthy.
Certain AI practices are completely banned under the Act. It also requires transparency, places tough requirements on providers of high-risk AI systems, and sets out strict penalties for those who don’t comply. The aim is to protect fundamental rights and ensure public safety.
Important Dates for the EU AI Act
July 12, 2024 – The AI Act was published in the Official Journal of the European Union. This serves as the formal notification of the new law.
Coming Up Next
Aug 1, 2024 – The AI Act enters into force on the 20th day following its publication in the Official Journal. The provisions and obligations of the act take effect over the following months and years, as noted below, according to Article 113 / Recital 179.
Feb 2, 2025 – Chapter I (general provisions, including AI Literacy provisions) and Chapter II (prohibitions on unacceptable risk AI) will apply.
May 2, 2025 – Codes of practice are to be ready, per Article 56 / Recital 179.
Aug 2, 2025 – Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI [GPAI] models), Chapter VII (governance), Chapter XII (penalties), and Article 78 (confidentiality) will apply, with the exception of Article 101 (fines for GPAI providers).
Aug 2, 2026 – The remainder of the AI Act will apply, including Article 101 (fines for GPAI providers), with the following exceptions:
Aug 2, 2027 – Article 6(1) and corresponding obligations will apply to AI systems subject to EU Harmonization Legislation. Exemption ends for GPAI providers with models placed on the EU market before Aug 2, 2025. Note: certain operators of High-Risk AI systems and large scale IT systems established by EU legal acts are subject to exemptions ending in 2030.
Within six months of entry into force, Chapters I and II, covering the prohibitions on AI systems that pose an unacceptable risk, begin to apply.
One year after entry into force, further key provisions take effect, including those on notifying authorities, governance structures, and rules specific to general-purpose AI models.
Penalties for providers of GPAI models are excluded at this stage. The remainder of the Act applies at the two-year mark, with Article 6(1) and its associated obligations following at the three-year mark.
This phased approach is designed to give stakeholders sufficient time to adjust and meet the compliance demands of the new EU framework.
Previous EU Developments & Dates
The formulation of the EU AI Act was a multifaceted process that engaged a wide array of contributors over several stages. Beginning with the European Commission’s proposal in April 2021, significant developments included the Council’s adoption of its common position (general approach) in December 2022 and the European Parliament’s adoption of its negotiating position in June 2023. These actions were critical steps leading to the final approval of the AI Act by the Council of the European Union in May 2024.
Crucially, February 2024 marked an important step with the inauguration of the European Artificial Intelligence Office, the body charged with supporting enforcement of the legislation. This entity plays a central role during the implementation phase, helping ensure smooth application across member states.
2023
Significant strides were made toward the completion of the AI Act. The European Parliament adopted its negotiating position on June 14th by a substantial majority: 499 votes in favor, 28 against, and 93 abstentions. Following this decisive endorsement, a provisional agreement between the Parliament and Council was reached on December 9th, an essential milestone in finalizing the Act.
These events highlight both broad consensus and an urgent imperative for sweeping regulation of artificial intelligence within Europe’s borders.
2022
Discussions around the AI Act intensified with significant advancements made.
On February 2nd, the European Commission launched a new Standardisation Strategy to reinforce the Single Market and boost international competitiveness.
Several committees within the European Parliament released proposed amendments and draft views in March.
The French Presidency of the Council was influential, issuing compromise texts in both February and June that tackled pressing issues such as the obligations attached to high-risk AI systems and transparency requirements.
By September, the final committee endorsement came from JURI (the Committee on Legal Affairs) with its opinion on the AI Act. This chain of activity culminated on December 6th, when the Council agreed on its common position (general approach), steering the file toward the close of legislative proceedings.
2021
The European Commission made a significant move by presenting the AI Act proposal on April 21st, which aimed to establish a cohesive framework for AI regulation across the EU. The proposal garnered considerable attention during its public consultation phase, receiving 304 submissions from various stakeholders.
Research commissioned by the European Parliament examined the application of biometric techniques and provided insights that fed into the legislative process.
In July of that year, under the Slovenian presidency, a virtual conference was convened on AI regulation, ethical considerations, and the protection of fundamental rights. Within the Parliament, negotiations were led jointly by the internal market and civil liberties committees, with Brando Benifei and Dragoş Tudorache steering the effort as co-rapporteurs, laying groundwork critical for the dialogue and compromises of subsequent years.
The Need for AI Regulation
As AI technologies advance rapidly, they present vast opportunities alongside significant dangers. The EU’s AI Act is designed to address these issues by promoting the safe and ethical deployment of AI systems that respect fundamental rights. Acknowledging potential threats to democracy, legal systems, and environmental integrity posed by high-risk AI applications, the EU aims to champion trustworthy AI both within its borders and globally.
In response to heightened public awareness of ethical concerns in technology, Europe has taken a step forward with targeted regulation of artificial intelligence through the AI Act. The legislation obligates compliance with essential rights protections and safety norms to foster confidence in these systems’ reliability. Focused explicitly on managing the risks inherent in powerful and influential AI models, this sweeping regulation carefully balances technological progress against security principles.
Risk-Based Classification of AI Systems
Utilizing a risk-based approach, the AI Act classifies AI systems into four distinct groups according to their potential impact and capacity for harm: unacceptable risk, high risk, limited risk, and minimal or no risk. Each group carries its own regulatory obligations to guarantee the responsible and safe use of AI technologies.
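To make the tiered structure concrete, here is a minimal illustrative Python sketch that maps the four risk categories to the broad obligations summarized in this article. The names and groupings are our own simplification for demonstration, not text drawn from the Act.

```python
# Illustrative only: a simplified mapping of the Act's four risk tiers
# to the broad obligations described in this article.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict obligations apply
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal_or_no_risk"  # largely unregulated

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "quality management system",
        "technical documentation",
        "human oversight",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["transparency and disclosure to users"],
    RiskTier.MINIMAL: ["no specific obligations under the Act"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return this article's summary of obligations for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```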
High Risk AI Systems
AI systems that can profoundly affect health, safety, or fundamental rights are categorized as high-risk AI systems. These types of applications arise in:
Critical infrastructure management
Educational environments
Vocational training programs
Vital services including healthcare and financial sectors
For example, high-risk AI systems deployed within the healthcare sector or used for financial transactions face stringent regulatory measures to prevent unnecessary risks. Similarly, AI technologies serving as safety components of products, or placed on the market as standalone products requiring third-party conformity assessment, are also deemed high-risk.
Under the provisions of the AI Act, law enforcement use of biometric identification receives particular scrutiny. Systems enabling ‘post-event’ biometric identification require judicial authorization, while ‘real-time’ remote biometric identification in public areas is limited to narrowly defined, severe circumstances only. The intent behind such strict regulations is to counterbalance systemic risks while keeping fundamental human rights intact.
Limited Risk AI Systems
AI systems carrying a limited risk are primarily obligated to adhere to transparency measures. These obligations mandate that users must be clearly informed when they are interacting with AI-driven technologies, such as chatbots. It is essential for these systems to disclose through labeling whether the content—be it text, audio or video—is created by an AI rather than by humans. Transparency like this plays a significant role in cultivating trust and guarantees user awareness of the involvement of artificial intelligence.
Whenever individuals interact with chatbots or similar AI technologies, clear notification of that engagement, together with labeling of AI-generated output, helps establish trustworthy AI, mitigates the risks of opaque operation, and promotes the responsible, ethical use of artificial intelligence. A minimal sketch of this disclosure pattern follows.
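As a rough illustration, the sketch below tags a chat session and its output with AI-involvement notices. The function names and label wording are hypothetical; the Act does not prescribe a particular format.

```python
# Illustrative sketch of the transparency obligation for limited-risk
# systems: disclose AI involvement and label AI-generated output.
# Names and label text are hypothetical, not prescribed by the Act.

AI_DISCLOSURE = "You are interacting with an AI system."
AI_CONTENT_LABEL = "[AI-generated content]"

def start_chat_session() -> str:
    # Inform the user up front that they are talking to an AI.
    return AI_DISCLOSURE

def label_output(text: str) -> str:
    # Mark machine-generated text so it is not mistaken for human work.
    return f"{AI_CONTENT_LABEL} {text}"

print(start_chat_session())
print(label_output("Here is a summary of your account activity."))
```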
Minimal or No Risk AI Systems
Under the AI Act, AI systems that present minimal or no risk, such as spam filters and AI-enabled video games, are subject to less stringent regulatory requirements, owing to their relatively low impact on safety, health, or fundamental rights.
Although considered less worrying in terms of potential hazards, these low-risk AI systems must still operate within a framework that ensures they do not pose any risks considered unacceptable.
Prohibited AI Practices
The AI Act strictly bans certain uses of artificial intelligence because they are simply too risky. For example, any AI that uses subliminal messaging or manipulative tactics to distort a person’s decision-making is prohibited. It is also illegal for AI to exploit people’s vulnerabilities, such as their age, disabilities, or specific socioeconomic conditions. The aim is to protect individuals from being unfairly influenced or targeted by these technologies.
Under this legislation, there is an explicit ban on social scoring systems where they result in unjustified discriminatory treatment based on an individual’s social behavior. Equally prohibited are AI-based assessment tools that predict potential criminal acts based purely on profiling a person’s personality characteristics. Systems that use biometric data to categorize people by sensitive traits such as ethnicity, or to infer political views or sexual orientation, are likewise barred. The Commission reviews the relevance and effectiveness of these bans in regular annual evaluations.
Obligations for Providers of High-Risk AI Systems
To ensure the safety and reliability of high-risk AI systems, providers are required to fulfill a range of specific responsibilities. They must undertake conformity assessment procedures, establish quality management systems, and draw up an EU declaration of conformity for their AI system(s). They must also provide their contact information in an easily accessible manner on the high-risk AI system itself, on its packaging, or in its accompanying documentation.
Technical Documentation
Maintaining a thorough and current technical dossier is essential for adhering to EU regulations. Such documentation must encompass extensive details regarding the AI system’s components, its functionality, and its creation process. Smaller entities such as SMEs and startups may furnish elements of this documentation in a simplified form to lessen their compliance load.
The records referred to in Article 18 of the EU AI Act must form part of this documentation set, guaranteeing exhaustive recording of every aspect of a high-risk AI system. Precision in these documents is critical for transparency and accountability during both the development and deployment stages. A sketch of a simple completeness check appears below.
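As a loose illustration of keeping a technical file complete, the sketch below checks a draft documentation set against a list of required sections. The section names paraphrase this article’s summary; the authoritative list lives in the Act’s annexes, and nothing here should be read as the official schema.

```python
# Illustrative completeness check for a high-risk system's technical file.
# The required sections below paraphrase this article's summary; consult
# the Act's annexes for the authoritative requirements.

REQUIRED_SECTIONS = [
    "general_description",   # what the system is and does
    "components",            # the system's elements
    "development_process",   # how it was built and trained
    "records",               # record-keeping per Article 18
]

def missing_sections(technical_file: dict) -> list[str]:
    """Return required sections absent or empty in the technical file."""
    return [s for s in REQUIRED_SECTIONS if not technical_file.get(s)]

draft = {"general_description": "Credit-scoring model ...", "components": "..."}
print(missing_sections(draft))  # -> ['development_process', 'records']
```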
Human Oversight
The AI Act underlines the importance of human oversight in supervising and controlling high-risk AI systems. It mandates operators to maintain the capability to intervene and, if needed, shut down any such AI system to mitigate threats to health, safety or fundamental rights. This supervision should be consistent with the level of risk presented by the use context of each high-risk AI system.
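One way to picture this requirement is a thin oversight wrapper that lets a human reviewer override a system’s output or shut it down entirely. This is a hypothetical sketch; the class and method names are invented and do not come from the Act.

```python
# Illustrative human-oversight wrapper: a reviewer can intervene in, or
# halt, a high-risk system's operation. All names are hypothetical.

class OversightError(RuntimeError):
    pass

class SupervisedSystem:
    def __init__(self, model):
        self.model = model
        self.halted = False

    def halt(self) -> None:
        # The "shut down" capability operators must retain.
        self.halted = True

    def decide(self, case, reviewer=None):
        if self.halted:
            raise OversightError("system halted by human overseer")
        decision = self.model(case)
        # Give the human reviewer the chance to override the output.
        if reviewer is not None:
            decision = reviewer(case, decision)
        return decision

model = lambda case: "approve" if case["score"] > 0.5 else "deny"
system = SupervisedSystem(model)
print(system.decide({"score": 0.7}))  # -> "approve"
system.halt()  # after this, decide() raises OversightError
```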
Post-Market Monitoring
Obligations for providers include:
Continuously monitoring high-risk AI systems after they reach the market
Maintaining ongoing conformity with established safety criteria
Recording and analyzing any incidents or malfunctions to prevent recurrence and improve the reliability of the AI system
Operating an effective risk management framework throughout every phase of the AI system’s life cycle, identifying foreseeable threats to health, safety, or fundamental rights (a minimal incident-logging sketch follows this list)
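Here is the minimal incident-logging sketch referenced above: it records events and malfunctions with timestamps and flags serious ones for escalation. The schema and escalation step are invented for demonstration; real reporting duties run through the Act’s market surveillance provisions.

```python
# Illustrative incident log for post-market monitoring: record events and
# malfunctions so they can be analyzed and recurrence prevented. The
# schema is invented for demonstration, not taken from the Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    description: str
    serious: bool  # e.g., a threat to health, safety, or fundamental rights
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

incident_log: list[Incident] = []

def record_incident(description: str, serious: bool) -> Incident:
    incident = Incident(description, serious)
    incident_log.append(incident)
    if serious:
        # A real provider would escalate to its risk-management process
        # and notify the relevant market surveillance authority.
        print(f"ESCALATE: {description}")
    return incident

record_incident("Unexpected refusal of valid loan applications", serious=True)
```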
General Purpose AI Models
General-purpose AI models, which can be used across various applications, are subject to specific obligations under the AI Act. Providers of these models must disclose information about the training content and ensure compliance with EU copyright law. This transparency is crucial in maintaining the trustworthiness of AI systems and ensuring that they adhere to ethical standards.
The Act also mandates that providers conduct model evaluations and adversarial testing to identify potential risks and vulnerabilities. Models trained with significant computing power (the Act presumes systemic risk above a cumulative training compute of 10^25 floating-point operations) are presumed to carry systemic risks and are therefore subject to stringent regulations.
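As a back-of-the-envelope illustration of that compute-based presumption, the sketch below flags a general-purpose model as presumptively systemic-risk when its cumulative training compute exceeds 10^25 floating-point operations. The function name and example figures are ours.

```python
# Illustrative check of the Act's systemic-risk presumption for GPAI
# models: cumulative training compute above 10^25 floating-point
# operations triggers the presumption. Names and figures are ours.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model meets the Act's compute-based presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical training runs:
print(presumed_systemic_risk(3e25))  # True -> extra obligations apply
print(presumed_systemic_risk(5e23))  # False
```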
Additionally, the EU plans to establish a dedicated forum for cooperation with the open-source community to develop best practices for the safe development and use of open-source AI models.
Enforcement and Penalties
On February 21, 2024, the European Commission instituted the European Artificial Intelligence Office to oversee adherence to the AI Act. Collaborating closely with national authorities, this office ensures that regulations under the Act are applied consistently and effectively throughout EU member states. It wields investigative powers and can levy administrative sanctions for any infringements of the AI Act.
Companies found in violation may face administrative fines as high as €35 million or up to 7% of their worldwide annual turnover, whichever is greater, for the most serious infringements. These steep penalties highlight the critical need for companies to comply with the provisions of the AI Act by creating and implementing artificial intelligence systems responsibly. Enforcement measures also include compelling non-compliant entities to undertake remedial actions on their AI systems.
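Since the top-tier fine is simply the higher of a fixed amount and a share of turnover, the arithmetic is easy to sketch. The function below is illustrative only and covers just the most serious tier; the Act sets lower ceilings for other categories of breach.

```python
# Illustrative calculation of the maximum fine for the most serious
# violations (prohibited practices): EUR 35 million or 7% of worldwide
# annual turnover, whichever is higher. Lower tiers exist for other
# breaches; this sketch covers only the top tier.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion in worldwide annual turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # -> 140,000,000
```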
Future-Proofing AI Legislation
The AI Act has been crafted to be adaptive and resilient, allowing its regulations to evolve alongside rapid advancements in technology. This flexibility is essential for the act to remain effective and pertinent as AI technologies continue to advance. Within this framework, the goal of the AI Act is twofold: it strives not only to foster innovation but also to ensure that AI systems are developed within a compliant regulatory structure.
To facilitate these objectives, member states must establish at least one functional regulatory sandbox specifically designed for AI. These controlled experimental spaces play a pivotal role in nurturing innovative development while upholding the Act’s requirements, creating an environment where new AI systems can be developed and tested safely under supervision.
Next Steps for the AI Act
The next steps for the implementation of the AI Act are as follows:
The act enters into force on August 1, 2024, the 20th day after its publication in the Official Journal of the European Union.
The first provisions and obligations - regarding AI Literacy requirements and prohibited uses of AI - will be effective 6 months thereafter.
The remaining provisions and obligations take effect in a phased progression over the following 30 months, with some noted exceptions extending into 2030 (see the date sketch below).
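For readers who want the milestones as concrete dates, the sketch below derives them by adding calendar months to the entry-into-force date. Note that the Act’s own text pins most milestones to Aug 2 or Feb 2, one day later than plain month arithmetic from Aug 1; the helper here is a simplification.

```python
# Illustrative date arithmetic for the phase-in: milestones fall 6, 12,
# 24, and 36 months after entry into force on Aug 1, 2024. The Act's
# text uses Aug 2 / Feb 2; this sketch just adds calendar months.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, d.day)

MILESTONES = {
    "prohibitions and AI literacy": 6,
    "GPAI rules, governance, notifying authorities": 12,
    "most remaining provisions": 24,
    "Article 6(1) high-risk classification": 36,
}

for name, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {name}")
```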
In anticipation, during this preparatory stage, the Commission is organizing an initiative known as the AI Pact, inviting developers to adhere voluntarily to the principal obligations of the AI Act ahead of the legal deadlines. This effort seeks not only to promote a culture of regulatory compliance but also to ensure a seamless transition into the forthcoming legal structure.
By August 2, 2026, the governance infrastructure for artificial intelligence across member states, as set out by the AI Act and underpinned by EU law, is to be operational, a landmark in the history of AI regulation in Europe.
Summary
The EU AI Act represents a comprehensive effort to regulate AI technologies, ensuring they are used in safe, ethical, and trustworthy ways. By classifying AI systems based on risk, prohibiting harmful practices, and imposing meaningful obligations on high-risk AI providers, the Act aims to protect fundamental rights while promoting innovation. The role of the European Artificial Intelligence Office in enforcing these regulations and the Act’s future-proof approach ensures that it remains relevant as AI technologies evolve. As the Act’s implementation unfolds, it sets a precedent for AI governance globally, balancing innovation with the protection of human rights.
Frequently Asked Questions
When was the AI Act published in the Official Journal of the European Union?
The AI Act was published in the Official Journal on July 12, 2024.
What are the penalties for non-compliance with the AI Act?
Should a company fail to adhere to the regulations stipulated by the AI Act, it may incur fines of up to €35 million or 7% of its worldwide annual turnover for the preceding financial year, whichever is higher.
Hence, it is imperative to fully comply with the AI Act in order to escape such severe financial penalties.
What is the role of the European Artificial Intelligence Office?
The function of the European Artificial Intelligence Office encompasses supervising the execution and enforcement of regulations as stipulated by the AI Act. It works in partnership with national authorities and has the authority to levy penalties for instances of non-compliance.
By doing so, it facilitates the governance and ethical deployment of artificial intelligence technology.
What are high-risk AI systems?
AI systems that fall into the high-risk category are characterized by their considerable influence on aspects like health, safety, or fundamental rights. These systems play a pivotal role in essential sectors including critical infrastructure, healthcare, and financial services.
Exercise vigilance when engaging with these AI systems.
How does the AI Act ensure future-proofing?
The AI Act is designed to remain applicable and efficient over time by incorporating provisions that allow its regulations to evolve alongside technological progress, thereby guaranteeing its continued relevance.
It mandates that member states set up regulatory sandboxes for AI. These are frameworks intended to foster innovation while ensuring adherence to the stipulations of the Act.