UPDATE: June 14, 2023
The AI Act took a step forward as the European Parliament voted to advance the draft legislation.
Key components include:
Bans on remote biometric surveillance, covering both real-time identification in public spaces and the untargeted scraping of facial images to build facial recognition databases
Bans on predictive policing systems
Requirements for companies to design AI models so that they cannot generate illegal content
Requirements for companies to publish summaries of the copyrighted data used for training
Noncompliance could result in fines of up to 6% to 7% of a company's global revenue.
The next step is negotiation among representatives of the parliament, the EU member states, and the European Commission, with the goal of reaching a deal on the proposed law before the end of this year.
Given the rapid pace of regulation, companies should begin taking steps now, while AI use is still ramping up, to institute proper governance and controls.
The European Union (EU) is proposing a new regulation to govern the development and use of artificial intelligence (AI). The proposal, which was published in April 2021, aims to ensure that AI is used safely and responsibly, and that it benefits society as a whole.
The proposal defines AI as "a software system that can, for a given set of human-defined objectives, generate outputs or make decisions that are not pre-determined by the software developer." This definition is broad enough to encompass a wide range of AI systems, from simple chatbots to complex self-driving cars.
The proposal divides AI systems into three main categories:
High-risk AI systems: These are AI systems that pose a significant risk to human health, safety, or fundamental rights. Examples of high-risk AI systems include AI systems that are used for:
Automated decision-making about people's access to essential services, such as housing or employment
Critical infrastructure, such as energy or transport
Law enforcement or criminal justice
Moderate-risk AI systems: These are AI systems that pose a moderate risk to human health, safety, or fundamental rights.
Low-risk AI systems: These are AI systems that do not pose a significant risk to human health, safety, or fundamental rights.
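To make the tiering concrete, here is a minimal sketch of how an organization might triage its AI use cases against the three categories described above. The high-risk entries come straight from the examples in this article; the moderate- and low-risk entries ("customer-service chatbot", "spam filtering") are hypothetical placeholders, not classifications drawn from the proposal itself.

```python
from enum import Enum

# Risk tiers as described in the EU proposal discussed above.
class RiskTier(Enum):
    HIGH = "high-risk"
    MODERATE = "moderate-risk"
    LOW = "low-risk"

# Illustrative triage table. High-risk rows mirror the article's examples;
# the last two rows are hypothetical entries for the other tiers.
USE_CASE_TIERS = {
    "automated decisions on access to housing or employment": RiskTier.HIGH,
    "critical infrastructure (energy, transport)": RiskTier.HIGH,
    "law enforcement or criminal justice": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.MODERATE,  # hypothetical example
    "spam filtering": RiskTier.LOW,                 # hypothetical example
}

def triage(use_case: str) -> RiskTier:
    """Look up a use case; treat anything unlisted as low-risk for the sketch."""
    return USE_CASE_TIERS.get(use_case, RiskTier.LOW)
```

In practice the classification would rest on a legal assessment of the system's purpose and context, not a lookup table, but a register like this is a reasonable starting point for an internal AI inventory.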
High-risk AI systems will be subject to the most stringent requirements under the proposal. These requirements include:
Prior market authorization: Before a high-risk AI system can be placed on the market, it must be authorized by a competent authority in the EU. The authorization process will involve a technical assessment of the AI system to ensure that it meets the requirements of the proposal.
Conformity assessment: High-risk AI systems must be subject to conformity assessment by a notified body. A notified body is an independent organization that has been accredited by a competent authority to assess the conformity of products to technical requirements.
Technical documentation: The developer of a high-risk AI system must provide technical documentation that describes the system and how it meets the requirements of the proposal.
Labeling: High-risk AI systems must be labeled to indicate that they are high-risk. The label must also include information about the risks associated with the system and how to mitigate those risks.
Monitoring: The developer of a high-risk AI system must monitor the system after it has been placed on the market to ensure that it continues to meet the requirements of the proposal.
Moderate-risk AI systems will be subject to some of the same requirements as high-risk AI systems, but the requirements will be less stringent. For example, moderate-risk AI systems will not need to be authorized by a competent authority, but they will still need to be subject to conformity assessment and labeling.
Low-risk AI systems will not be subject to any specific requirements under the proposal. However, developers of low-risk AI systems are still encouraged to follow the principles of ethical AI, which are set out in the proposal.
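The obligations described above can be summarized per tier. The sketch below encodes only what this article states: high-risk systems face all five requirements, moderate-risk systems face conformity assessment and labeling, and low-risk systems face no binding requirements. It is an illustration of the article's summary, not of the regulation's legal text.

```python
# Obligations per risk tier, as summarized in the discussion above.
OBLIGATIONS = {
    "high-risk": [
        "prior market authorization",
        "conformity assessment by a notified body",
        "technical documentation",
        "risk labeling",
        "post-market monitoring",
    ],
    "moderate-risk": [
        "conformity assessment by a notified body",
        "risk labeling",
    ],
    # No binding requirements; ethical-AI principles are encouraged.
    "low-risk": [],
}

def required_steps(tier: str) -> list[str]:
    """Return the compliance steps this article associates with a tier."""
    return OBLIGATIONS[tier]
```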
The AI regulation will be enforced by the competent authorities in each EU member state. The competent authorities will have the power to investigate suspected breaches of the regulation and to take enforcement action, such as fines or injunctions.
The AI regulation is expected to have a significant impact on the development and use of AI in the EU. It will make high-risk AI systems more difficult and expensive to develop and deploy, but it is also expected to encourage safe and responsible AI and to boost the EU's competitiveness in the global AI market.
The regulation is a significant step forward for the EU: comprehensive in the range of AI systems it covers and ambitious in the standards it sets. It is expected to have a positive impact on the development of safe and responsible AI in the EU.