
Ready for Regulation: Preparing Your Business for the EU AI Act

In our recent webinar on the EU AI Act, we delved into the nuances of what is poised to be a transformative piece of legislation for AI governance globally, much like the GDPR reshaped data privacy norms. Below you will find a replay of the webinar, a summary and the full transcript.



For transparency, ChatGPT was used to write the summary and to streamline the transcript (removing speaker names and timestamps).


EU AI Act Webinar Summary

As the founder and CEO of AI Guardian, Chris Hackney's aim was to unravel the complexities of the EU AI Act, emphasizing its extraterritorial scope, risk-based regulatory framework, and the stratification of AI systems into prohibited, high-risk, limited risk, and minimal risk categories.


Understanding and preparing for the EU AI Act requires a proactive and informed approach. To navigate this landscape, it's imperative to start with a solid grasp of your organization's AI systems. Mapping out these systems and classifying them according to the Act's criteria will lay the groundwork for a comprehensive compliance strategy. This initial step is crucial in identifying where your efforts need to be concentrated to meet the Act's requirements.


Enhancing data governance is another cornerstone of compliance. In an era where data is both a valuable asset and a potential liability, ensuring robust mechanisms for data collection, storage, and usage is paramount. This not only aligns with the EU AI Act's stipulations but also fortifies your defenses against breaches and misuse.


Adherence to emerging technical standards will be a key factor in ensuring compliance. While the Act provides a solid framework, the specifics of compliance, including the forms and documentation required, will evolve. Staying abreast of these developments and integrating them into your operational practices is essential.


The regulatory environment is also set to become more nuanced, with individual EU member states likely introducing their own AI governance frameworks. It's vital to consider these regional specifics when crafting your compliance strategy, ensuring that your AI applications meet both the overarching EU requirements and those of individual jurisdictions.


Preparing for the EU AI Act isn't just about meeting regulatory demands; it's an opportunity to embed responsible AI practices into the fabric of your operations. To this end, establishing an AI governance committee, issuing a clear AI policy, and setting project approval criteria are foundational steps. These measures not only aid in compliance but also in harnessing the transformative potential of AI responsibly and ethically.


Looking ahead, the landscape of AI regulation will continue to evolve, not just within the EU, but globally. Anticipating and adapting to these changes is essential for maintaining a competitive edge and ensuring that AI is used in a manner that is beneficial and sustainable. This is a journey that requires diligence, foresight, and a commitment to excellence in AI governance.


In closing, I encourage businesses to view the EU AI Act not as a hurdle, but as a catalyst for adopting best practices in AI development and deployment. By taking proactive steps today, we can navigate the complexities of compliance, mitigate risks, and unlock the full potential of AI technologies.


EU AI Act Transcript


Welcome to today's webinar from AI Guardian. My name's Chris Hackney. I'm the founder and CEO of AI Guardian. Our mission in this important space is to help companies with their governance and compliance, and there's no more important subject than today's: the EU AI Act, which is going to have a major impact on the overall governance efforts of companies across the world, very similar to GDPR and other legislation that came out of the EU. So today's webinar is focused on that.


We're going to take a deep dive into the EU AI Act, its core components, what you need to know about it, and what you need to do today to be prepared for it. It's part of our ongoing webinar series on AI regulation, so we hope you check out our other webinars and materials in the series at aiguardianapp.com. That URL is also on the last slide in case you didn't get it written down just now.


Let's get started. What we're going to go through today is preparing your business for the EU AI Act. It's by far the leading global regulation on AI, and it's coming pretty quickly this year and into 2025. We will help you understand the core principles and the risk-based regulatory approach that the EU Parliament has taken in putting this together. We'll define the AI system types, because in a risk-based regulatory approach everything hinges on the type of system and its use cases. There are four classifications:

  • prohibited use cases,

  • high risk,

  • limited risk, and

  • minimal risk.

Each of those comes with its own compliance requirements and components. We think here at AI Guardian that the EU has done a smart job tying regulation to the risk profile of models and their applications. And we'll talk through what those compliance requirements are.


We'll also talk about enforcement timelines and penalty structure. Like I said, this is coming into enforcement this year in 2024. It will be in heavy enforcement in 2025. We'll talk about that structure and timeline.


And then, finally, for those who are managing their own businesses and their own AI integration, we'll talk about what you need to do to prepare your business today and, importantly, for the AI integrations ahead.


Please understand, while we talk a lot about generative AI today, much of this also covers other subfields of AI: machine learning and NLP (natural language processing) are coming under heavy scrutiny given the advancements on the generative AI front, and they are included here. So we'll cover these today. We'll leave time for some questions at the end. If you do have questions, please put them in the chat, and we'll cover them as we go along or at the end, time permitting.


In 2023, we had a busy year on the regulation front. There were over a thousand AI laws and regulations passed or proposed globally. About 17% of those were actually passed at territory, state, and country levels. For us in tech, this is coming much quicker than it has in previous tech waves. Governments are extremely serious and nervous about generative AI and what it can do, and they're trying to get ahead of it as it moves forward.


There has been nothing more important, and nothing moving faster than the EU AI Act. For Europe, this is an unprecedented pace; it took them six months or less to codify this from the beginning of negotiations through final parliamentary approval. It should be published in the official journal this quarter, in Q2 of 2024, which means initial enforcement will start for prohibited use cases at the end of the year, and full enforcement will start in 2025.


So the core principles ... what is the EU AI Act trying to accomplish? It's actually trying to accomplish some really good things. It's trying to provide AI developers and deployers with clear requirements and obligations regarding the use cases of the AI that they're creating or using. There's a lot of confusion out there about what to do on governance around these subjects, and in a Wild West environment without clear rule sets, people are skittish about slowing down their own efforts. The Act also seeks to streamline the administrative and financial burdens into very clear requirements, particularly for larger enterprises, while keeping the onus on small and medium-sized businesses minimal. It's really focused on the biggest players, the ones who can have the greatest impact.


It is a comprehensive legal framework on AI, and it is extraterritorial, just like GDPR. That means it governs any company anywhere in the world that markets its models and applications in EU territories. So if you do business in the EU, no matter where you're headquartered, you need to conform to these requirements. Overall, it lays out those ground rules, and it does a really good job of laying out the fundamental rights, safety, and ethical principles.


What does it specifically enable? First, it addresses risks specifically created by AI applications. This is a new technology with its own inherent risks, especially around some of those ethical principles, so the Act goes after that and tries to lay out a framework. It prohibits certain AI practices where there is near-universal concern about the new technology and its application, and it lays those out very clearly. There's an accelerated timeline in 2024 for prohibiting those practices.


It also determines a list of high-risk applications where there can be high positive impact but, in certain cases, high negative societal impact. It lays out what those high-risk applications are and sets clear compliance requirements for those systems, including what you need to deliver to the EU office. There are required conformity assessments as part of that, which we'll talk about later. There are also Fundamental Rights Impact Assessments (FRIAs). There's a series of steps, both internal and external, that companies are going to have to take for their own compliance before they put a system on the EU market.


Then it puts an enforcement mechanism in place, with fines and other measures tied to it, to oversee AI efforts. And for the EU itself, it establishes a governance structure at both the European and national levels. That's the activity that's going on now as we lead up to publication in the month or so ahead.


So these are the overall areas it's designed to enable that you're going to have to think about as you bring AI into the EU market itself.


Let's start with the basics. First and foremost, there's a lot of confusion about what an AI system is. So the EU AI Act starts by defining that and uses it as the bedrock for the rest of the stratification of AI use cases it goes after. In the EU's mindset, the definition of an AI system is:

A machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This is what we see a lot in generative AI, with neural networks trained on significant amounts of data that then infer conclusions. The definition can cover generative AI, as I said earlier, and can also cover machine learning and NLP. So if you're operating in any of these spaces, consider yourself within the EU AI Act's boundaries as far as this definition is concerned.


Now, when we talk about a risk-based approach, they've defined four levels of AI systems that are shown here.

  1. The first is the prohibited, or unacceptable risk, category. These are areas like facial recognition (especially when used for law enforcement and broad surveillance), subliminal manipulation, and social scoring - areas where there is a lot of concern about the impact of a generative AI system at scale and what its societal ramifications might be.

  2. The second area is high-risk applications. These could be high-risk categories in themselves, or verticals like healthcare, insurance, and financial decisions... anywhere there's a significant impact on a person or a community based on the recommendations a generative AI system might make or the content it creates. This is where most of the compliance lives that we'll talk about later.

  3. The third area is limited risk applications. The EU is mainly focused on transparency with these - being able to look under the hood and understand them - rather than on stronger compliance measures. But there is heavy interest in, and concern around, ubiquitous human-AI activities such as chatbots in service areas. Generally this is not a big issue area; a lot of companies use these on the front line already. But where can this go off the rails? Where do we need transparency about what's happening? The same goes for content generation, with deep fakes and other content that you'll see start to flood publishing channels. So there are some limited risk and transparency obligations they want to apply to those systems.

  4. And then, finally, there's a green (safe) zone of minimal risk applications. This is AI integrated into video games or industrial processes, etc. - what we consider low-lift areas where there's lower risk involved in the application of AI.

So this slide and these designations are something you should absolutely have a firm understanding of if you have risk management responsibilities around AI and its application under the EU AI Act.
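For teams building an internal inventory, a tiny sketch like the one below can help keep the four tiers straight. It's purely illustrative: the tier names follow the Act, but the example use cases and the lookup helper are our own hypothetical mapping, not an official classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers discussed above."""
    PROHIBITED = "prohibited / unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical mapping of example use cases to tiers, based on the categories
# described in this webinar -- not an official EU list.
EXAMPLE_USE_CASES = {
    "social scoring with real-world consequences": RiskTier.PROHIBITED,
    "real-time biometric identification for law enforcement": RiskTier.PROHIBITED,
    "credit scoring for loan decisions": RiskTier.HIGH,
    "healthcare treatment recommendations": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "simple photo or video filter": RiskTier.LIMITED,
    "spam detection": RiskTier.MINIMAL,
    "in-game character behavior": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a known use case; anything unknown should get a human review."""
    if use_case not in EXAMPLE_USE_CASES:
        raise ValueError(f"'{use_case}' needs a manual risk assessment")
    return EXAMPLE_USE_CASES[use_case]

if __name__ == "__main__":
    print(classify("credit scoring for loan decisions"))  # RiskTier.HIGH
```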


EU AI Act Prohibited Risk Applications

Let's talk about those prohibited ones. The exact definition used by the AI Act is:

AI systems that are deemed a clear threat to fundamental rights, safety, or livelihoods. And they are explicitly banned.

There should be no use cases where you bring these to market.


Prohibited AI systems that are outlined in Title 2, Article 5 of the Act include:

  • Social scoring systems with real-world ramifications,

  • Real-time biometric identification for law enforcement ... (this was probably one of the biggest sticking points of the negotiations - the severity of biometric and law enforcement use cases. Governments themselves are excluded from the EU AI Act's purview here, but for private companies and local law enforcement there was strong concern about the use of these, and it's deeply embedded in the prohibited use cases).

  • Deceptive techniques to distort behavior. This could cover a lot of things, and I think a lot of companies are going to have to look for further guidance on what crosses the boundary of a deceptive technique, but there's probably a lot of concern around cybersecurity; lying to people for financial benefit is probably going to fall deeply into that category. Exploiting vulnerable groups is also prohibited. There's a lot of concern about bias in generative AI systems and how that extrapolates into vulnerable communities who might not understand the ramifications of the actions they're taking with an AI system.

  • Biometric categorization, a little bit of which is covered above, and criminal profiling at an individual level. There's a lot of room in the AI Act for clustering analysis to draw insights and extrapolate those out, but getting to profiling at the individual level is strictly prohibited. And then emotional inference in sensitive environments - how do people behave, how do we read people, especially with facial recognition technology - is prohibited as well under the EU Act.


EU AI Act High-Risk Systems

The second area, high-risk AI systems. These are specifically systems that pose significant risks to fundamental rights or safety as defined in the articles here on the slide.

  • Social scoring systems again, with real-world consequences show up,

  • Critical infrastructure that could put the life and health of citizens at risk, and a lot more. Think of your government or technology infrastructure: what if someone used generative AI to shut down traffic light systems across a state or a country, or utilities like energy and electricity? A lot of those come under this purview.

  • Education or vocational training that serves as a gatekeeper to what's considered critical life advancement and access. That's where you see a lot of these come in: it's education, it's finance, it's healthcare, it's insurance - anything where a gen AI system clearly gatekeeps access and we're not yet comfortable with the impact of that. You've already seen some lawsuits against healthcare systems on the ML side around denial-of-care decisions made with generative AI involvement. You'll see that come into play as well.

  • Anything that has to do with safety components of products - you can see the example there on the slide.

  • Same with employment.

  • Private and public services,

  • Credit scoring (that ties on to the financial side).

  • Law enforcement that may interfere with people's fundamental rights. And then migration, asylum and border control. That's more at the government side and the direction towards regulatory agencies on their side, and the administration of justice and democratic processes.

  • There's a strong concern with these systems around election interference and content, so the EU AI Act uses a broad definition to guard against that, treating anything that might touch on it as a high-risk system.


Now, these systems are where the compliance components come into play, so that's where we're going to spend the bulk of our discussion today. Based on their potential to have a material negative impact on widespread communities, there's a broad and stringent compliance system in place in the EU Act for them that runs through every life stage of the model. When we talk about conformity assessments and the other compliance measures, they're actually defined by the life stages of the AI system - from the point it's developed all the way through to the point it's retired and decommissioned.


There is a series of steps associated with conformity assessments, and other assessments, that have to be done. This is also true if any changes happen - what's important is on the right side of the slide: if there are any substantial changes to the model or application, you go back to Step 2 and start over. They are very clear about this. They understand that systems are changing rapidly, but your compliance has to keep refreshing every time you update the model itself in a material way. Okay?


EU AI Act Conformity Assessments

So on compliance - let's talk conformity assessments first. This is the most robust area of compliance in the EU AI Act. What it does is provide a robust framework for conducting evaluation and verification of the AI system itself. We always say here at AI Guardian that a lot of compliance and governance is a series of rote checklists. It's the things you should do every time you're in a dangerous (or potentially dangerous) area like this. But they're pretty straightforward things, right? It's the basics I should do every time so that we have a hundred percent safety rate. That's what the conformity assessments are asking companies to do.


There are 2 pathways for conformity assessments - an internal pathway or an external one. Most of the time, internal assessments are going to be fine (90% plus of cases). External assessments are recommended where there is relevant external expertise or notified bodies particular to the industry - probably more likely in areas like finance, where you've already got very strict regulatory environments in place for notifications. But the choice between internal and external assessment depends on several factors, which we've listed here:

  • If there are novel functionalities and the system is highly complex, you need to rely on outside expertise and those notified bodies.

  • If you have internal expertise and a strong track record of AI development and quality management, you're probably best served by doing internal assessments, and that's allowed under the Act.

  • Resource availability. Some companies have this expertise and these resources available internally. Many are going to have to work with outside parties - consultants, legal firms, etc. - to help them finish out this work.


Now, when we look at conformity assessments, there are 3 components that are the primary focus.

  1. First is what we call an internal review protocol. This is the bulk of it. This is the tool and framework for making sure you have quality assurance and risk management in place around the models and applications you're deploying, so it covers a ton of the basics. It's defined by the life cycle of the model - a standard checklist of checks and balances from development through the retirement period. That can include things like fundamental rights impact assessments (FRIAs), which we're going to talk about, and ethics assessments; we've got full detail on those on our blog for anyone who wants to dive in. But the bulk of it is a series of steps, just like SOC 2 compliance or other compliance measures, that you have to go through and present completion statements and documentation for.

  2. The second component is the summary data sheet, or SDS. This is the couple-page form that gets submitted to the EU office and put in their public database (which is still being built, by the way) to make sure there's a public, tracked record on their side of your models and your compliance with the EU AI Act.

  3. The third area is one that is optional but recommended as a best practice: an external scorecard, or ESC. We highly recommend this. It's almost like the health report you see publicly posted when you go to a restaurant - a public health report for your models. You should do this as a best practice, and we expect a lot of RFPs will shortly require it from business vendors. As a template, it basically states four key dimensions of your AI systems: what's the purpose of these systems, what values and data are going in, what governance you're doing around them, and what your compliance is with that. So think of it as an additional step you can take as a best practice.

All of these provide a way to comprehensively audit your governance of your generative AI systems and allow organizations to demonstrate their conformance with the EU AI Act itself.
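To make those three components a little more concrete, here is a minimal sketch of how the artifacts might be represented in an internal compliance tracker. The field names and structure are our own hypothetical illustration; the actual forms and templates are still being finalized by the EU.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class InternalReviewProtocol:
    """Lifecycle checklist from development through retirement (the bulk of the work)."""
    lifecycle_stage: str                        # e.g. "development", "deployment", "retirement"
    checks_completed: list = field(default_factory=list)
    fria_completed: bool = False                # Fundamental Rights Impact Assessment
    ethics_assessment_completed: bool = False

@dataclass
class SummaryDataSheet:
    """The short form submitted to the EU office's public database."""
    system_name: str
    provider: str
    intended_purpose: str
    risk_tier: str
    submitted_on: Optional[date] = None

@dataclass
class ExternalScorecard:
    """Optional public 'health report' covering the four dimensions mentioned above."""
    purpose: str
    values_and_data: str
    governance: str
    compliance_status: str

@dataclass
class ConformityAssessment:
    pathway: str                                # "internal" or "external" (notified body)
    protocol: InternalReviewProtocol
    sds: SummaryDataSheet
    scorecard: Optional[ExternalScorecard] = None  # recommended, not required
```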


Fundamental Rights Impact Assessments

The other spotlight area we want to talk about today is fundamental rights impact assessments (FRIAs). These are a little more broadly defined in the current EU AI Act, and we would expect them to get further definition down the road. But EU governing bodies are extremely concerned about the rights of communities and individuals that might be impacted negatively by AI systems. So a FRIA is a systematic examination of the potential risks an AI system might pose to those fundamental rights.


What they're asking you to do, upfront, is really think through the negative things that could happen and the risks that could be unleashed because of what you're building. Before you deploy, you should have gone through that exercise and noted where you can make improvements, or tracked the risks, so that as they develop you have visibility and transparency to address them.


These rights cover areas like privacy, non-discrimination and bias, freedom of expression, political participation, and safety - things we consider, in the Western world at least, the most important aspects of our communities.


The process itself involves analyzing the system's design, development, deployment and intended use - all those life stages - to identify potential issues and develop mitigation strategies. What are things we can do to bring down those risks?
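As a rough illustration of how those findings might be captured, here is a minimal sketch of a FRIA entry. The structure, field names, and the example are our own hypothetical choices; the Act does not prescribe a specific template yet.

```python
from dataclasses import dataclass

# Rights areas called out in the discussion above.
RIGHTS_AREAS = [
    "privacy",
    "non-discrimination / bias",
    "freedom of expression",
    "political participation",
    "safety",
]

@dataclass
class FriaFinding:
    right: str               # one of RIGHTS_AREAS
    lifecycle_stage: str     # design, development, deployment, or intended use
    risk_identified: str     # what could go wrong for this right at this stage
    mitigation: str          # what you will do to bring the risk down
    residual_risk: str       # "low" / "medium" / "high" after mitigation

# Purely illustrative example finding:
example = FriaFinding(
    right="non-discrimination / bias",
    lifecycle_stage="development",
    risk_identified="historical training data encodes demographic bias",
    mitigation="audit features, re-balance samples, monitor outcomes by group",
    residual_risk="medium",
)
```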


Okay. Why are these important? AI systems are a unique technology with the potential, for both good and bad, to scale up quickly and have a wide impact through their adoption. And we're most concerned about AI systems because of our concerns about ourselves: we - with our data and activities - bring human biases and discrimination into certain environments and against certain demographics. The potential for computer and algorithmic systems to amplify those quickly is troubling to a lot of folks in these areas. So what the EU is trying to do is make sure we think through these issues before we launch.


FRIAs are designed to cover those bases. They're designed as a proactive measure to help identify and address risks, to make sure everyone has really thought them through, and to let you document that. But most importantly, they want you to demonstrate responsible AI practices: what are all the things you've done to adopt responsible AI and bring it into your environment? What have you done to mitigate risks upfront and avoid costly issues on the back end? And third, they're designed to make sure you've improved your transparency and explainability. Can you look under the hood of these systems and really understand what's going on, especially in real time, as a bias or some other complication might develop? So they really want you to be able to document and show that upfront with these assessments.


So those are the primary components of high-risk systems.


EU AI Act Limited Risk Systems

Let's talk a little more about the last two, starting with limited risk AI systems. These are defined as general purpose AI - or GPA in a lot of the technical documentation - as defined in the article here. They're basically defined as systems that pose minimal risk to fundamental rights or safety in general, though they still carry some risk. This covers areas like spam filters, or something where AI helps you manage your email inbox. Could that tread into areas of expression or content? Sure, based on what gets shown to you or not, but the ceiling on that risk is a little lower than what we described before. Customer service chatbots - the things that are up front with you, helping you navigate an issue - are mostly automated exchanges with customers on the front line. There's some risk of a bias introducing itself, but the stakes and the impact are much lower here. The same goes - and they were very careful in the definition - for simple image or video filters. Deep fake technology, heavily manipulated video, audio, or images, is in that high-risk category. But simpler things like a TikTok filter fall under this one. It might adjust your brightness or how you look a little bit, or do something fun or interesting with the photos you share, but it doesn't tread into facial recognition or emotional analysis; it's just the fun filtering side. Those all fall under limited risk systems and general purpose.


Like I said earlier, there aren't heavy compliance requirements here, but there is a strong push by the EU to have these companies and systems adhere to best practices voluntarily, without being compelled. Now, if we have big events that cause sentiment to shift on AI, that voluntary status might become a required compliance item later. But what they want limited risk AI systems to do is embrace what high-risk systems have to do, because it's just good governance. Conducting risk assessments: it might not be a full conformity assessment, but you should be evaluating your AI systems for potential risks and have a risk management system around mitigating them. Developing technical documentation: this gets back to a level of transparency - do you have documentation outlining the system's development process, data, functionalities, and outputs, and is it shared across the entire team with broad understanding of it within your organization? And then self-declaration of compliance, such as external scorecards: do you have something that declares to your customers or outside affected parties that your systems adhere to the requirements - which are really just good governance and responsible AI requirements? They're looking for you to take a public stance on those and build trust in these systems based on that.


EU AI Act Minimal Risk AI Systems

And then last, we have minimal risk AI systems, which also fall under the technical definition of general purpose in the Act. These systems are deemed unlikely to cause harm; even in edge use cases, the likelihood of harm or strong impact is minimal. They face minimal regulatory requirements, but governance best practices continue to be encouraged.


Simple games with AI elements - mobile games where AI affects the behavior of characters you might interact with in that system - are covered here. So are spam detection algorithms, the ability to weed out negative things coming into your world; basic educational tools that can help you learn a new language, math, or any other subject you might be interested in; and basic recommendation systems, which you'll see a lot in retail, online shopping, music streaming, and content. A basic recommendation system based on your previous history falls under this category. AI-powered image enhancement tools also fall here, though there are some slight stylistic differences between tools that fall under limited risk and those that fall under minimal risk. So if you fall under here, best practices are still encouraged but not required by the Act.


EU AI Act Enforcement Timelines

We've defined the risk categories, and we've talked about the compliance within those. Now let's talk about enforcement timelines. This is a little bit of a moving target, because everything really kicks off with the publication of the Act in the Official Journal of the European Union. From that point, the clock is ticking toward enforcement periods. That exact date, as of the beginning of April, has not been set, but we expect it shortly - in the next 30 days. The stage we are at is translating the approved Act into the multiple languages of the EU and documenting all that has been agreed; that's what's holding up official publication. There are no more votes to be taken. Once that publication happens, as you can see in the chart below:

  • Prohibited AI practices: Enforcement of those goes into effect 6 months from that point. So we expect that in Q4 of this year.

  • General-purpose AI oversight: within one year from enactment. So we expect that to start being enforced in Q2 of 2025.

  • And then overall compliance, particularly the compliance elements for high-risk systems, comes 2 years from enactment. So we would imagine those come into play in Q1, maybe Q2, of 2026.

But there's a lot of work on those that companies have to do to have them ready by that date. So we're going to see most companies in the world really doing the bulk of that work late this year and into 2025 to be ready for those deadlines in 2026.
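Since every deadline keys off the publication date, a quick date calculation makes the timeline concrete. The publication date below is a placeholder assumption - swap in the real date once the Act appears in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 to avoid invalid dates)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

# Placeholder assumption: publication in the Official Journal in Q2 2024.
publication = date(2024, 6, 1)

milestones = {
    "Prohibited AI practices enforceable": add_months(publication, 6),   # ~Q4 2024
    "General-purpose AI obligations":      add_months(publication, 12),  # ~Q2 2025
    "High-risk compliance requirements":   add_months(publication, 24),  # ~Q2 2026
}

for label, deadline in milestones.items():
    print(f"{label}: {deadline.isoformat()}")
```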


The other interesting part is that the Act, while a broad enforcement mechanism for the territory of the EU, actually has a lot of structural components in place to encourage the individual Member States to do more work on this. So we would fully expect certain countries like Germany, France, and Italy to come out with their own additional regulations on top of this for bringing AI to market in their countries.


We see a similar pattern in the US with federal/White House-level action and state-level regulation, right? So just be on the lookout: this is the model legislation that a lot of countries will use, and from there they'll adapt it for their own needs, country by country.


EU AI Act Penalty Structure

Penalty structures for this are quite steep. I think the EU learned with GDPR that high enforcement standards, while only employed in a few instances, send strong signals to the rest of the market to get in line with its edicts. And so the EU AI Act has significant teeth here. The penalties range from 1.5% to 7% of a firm's global sales revenue, based on the violation type. The starting point is supplying incorrect, incomplete, or misleading information: if you don't do your paperwork properly, or you mislead in what you submit, there's a potential fine of up to 1.5% of your global revenue, or in EU parlance, turnover.


The next level of severity up is violation of an EU AI Act obligation - not submitting required materials or a lack of transparency about what you're doing. That's up to 3% of that global revenue.


And then the stiffest penalties are for violations involving banned AI applications - the prohibited ones. What's particularly important to the EU is noncompliance related to requirements on data: they've got very strict rules for the biggest AI players in the world around model requirements and data disclosure. So anything that's banned, or in that realm, carries the stiffest penalties of up to 7% of global revenue.
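To put those percentages in perspective, here's a back-of-the-envelope calculation. The tiers follow the ranges described above; the turnover figure is invented, and the Act also specifies fixed euro amounts as alternative caps, which this simplified sketch ignores.

```python
# Maximum penalty caps by violation type, as a share of global annual
# revenue ("turnover"), per the tiers described in this webinar.
PENALTY_CAPS = {
    "incorrect_or_misleading_information": 0.015,        # up to 1.5%
    "violation_of_obligations": 0.03,                    # up to 3%
    "prohibited_practices_or_data_requirements": 0.07,   # up to 7%
}

def max_fine(global_turnover_eur: float, violation_type: str) -> float:
    """Upper bound of the fine for a given violation type."""
    return global_turnover_eur * PENALTY_CAPS[violation_type]

# Hypothetical example: a firm with EUR 2 billion in global turnover.
turnover = 2_000_000_000
for violation in PENALTY_CAPS:
    print(f"{violation}: up to EUR {max_fine(turnover, violation):,.0f}")
# prohibited_practices_or_data_requirements: up to EUR 140,000,000
```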


Penalties will be determined based on the severity of the infringement, at the judgment of the newly created European AI Office. They define a technical distinction between what counts as a minor infraction and what counts as a significant infraction, and these are important to understand within the context of the violations that can occur.


Minor infractions include failing to provide adequate documentation or transparency, minor noncompliance with technical standards, and inadequate implementation of data use or data protection measures with limited impact. A lot of times, as we've seen in other regulatory environments, these are errors without intent - we did certain things and didn't mean to violate, but we did. Usually that's treated a little less harshly than significant infractions, where there was intent behind the action.


When we look at significant infractions: deploying high-risk AI systems without conducting conformity assessments is a clear violation - a significant infraction that shows intent - as is using banned AI practices themselves. Obviously, significant breaches of data privacy or security, particularly involving sensitive data, also qualify. There's a higher bar, as with all regulation, for firms that are privileged to hold sensitive data. They have a responsibility to safeguard it at an extremely high level, and having it breached falls within a significant infraction.


And then impact is the last factor. Noncompliance leading to direct harm to fundamental rights or public safety - where there's significant community impact - is going to carry a much harder and higher penalty structure than something less.


So that's the penalty structure for the EU AI act. Again, we would expect this also to be a case where they look at larger players first and work their way down in their evaluations. And then hopefully use those larger players as a guideline for where people need to be.


Prepare for the EU AI Act Now

So, the last section of this, real quick: what should you be doing now? How do you need to prepare for this? Like I said upfront, this is an extraterritorial regulatory act, which means it doesn't matter where you are in the world. At the end of the day, if you do any business with the EU (in those markets), you're going to have to conform to this Act globally.


In that way, we really feel this is something that almost every company, especially your larger companies in the world, are going to have to conform with just like they do with GDPR and other legislation/regulation that's come from there.


When we talk to our partners, we tell them you've got a wonderful window right now, and it gives you an opportunity to prepare and bring responsible AI best practices into your environments. Because at the end of the day, if you're familiar with the NIST AI RMF here in the US or other frameworks around responsible AI, the EU AI Act is simply a regulation that builds on the back of those frameworks. And if you're complying with those frameworks and leading with responsible AI, you're already covering 90% of the bases of the EU AI Act.


What we say is, first, you need to map what you're doing today - the systems you're using, the applications, and their use cases - so that you can understand, from a worst-case regulation approach, what falls into each of those risk categories. You absolutely need to determine which systems and models are within scope: first what you have, then what falls within the Act's scope, and then conduct a gap analysis against the key requirements.


You need to do that in a centralized system of record - there needs to be documentation behind it. You can't just do this through a PowerPoint and an Excel spreadsheet. You really need a true software solution, something that can be dipped into and used across a broad team at your organization.
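As a minimal sketch of what such a system-of-record entry and gap analysis might look like - the fields, the requirement names, and the example are all our own hypothetical illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a centralized inventory of AI systems and their use cases."""
    name: str
    owner: str
    use_case: str
    risk_tier: str            # "prohibited", "high", "limited", or "minimal"
    in_eu_market: bool
    requirements_met: set = field(default_factory=set)

# A simplified subset of what matters for high-risk systems on the EU market.
HIGH_RISK_REQUIREMENTS = {
    "conformity_assessment",
    "fundamental_rights_impact_assessment",
    "technical_documentation",
    "data_governance_review",
}

def gap_analysis(system: AISystemRecord) -> set:
    """Return the requirements a high-risk, EU-market system has not yet met."""
    if system.risk_tier != "high" or not system.in_eu_market:
        return set()
    return HIGH_RISK_REQUIREMENTS - system.requirements_met

# Hypothetical example entry:
lead_scoring = AISystemRecord(
    name="lead-scoring-model",
    owner="sales-ops",
    use_case="credit / lead scoring",
    risk_tier="high",
    in_eu_market=True,
    requirements_met={"technical_documentation"},
)
print(gap_analysis(lead_scoring))  # the three remaining requirements
```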


Second, you absolutely need to strengthen your data governance. We talked about that with significant infractions on the last slide, but there are going to have to be enhanced methods for how you collect data, how you store it, and how you use it. This not only aligns with the EU AI Act's stipulations but also fortifies your defenses against breaches and misuse.


Adherence to emerging technical standards will be a key factor in ensuring compliance. While the Act provides a solid framework, the specifics of compliance, including the forms and documentation required, will evolve. Staying abreast of these developments and integrating them into your operational practices is essential.


The regulatory environment is also set to become more nuanced, with individual EU member states likely introducing their own AI governance frameworks. It's vital to consider these regional specifics when crafting your compliance strategy, ensuring that your AI applications meet both the overarching EU requirements and those of individual jurisdictions.


Preparing for the EU AI Act isn't just about meeting regulatory demands; it's an opportunity to embed responsible AI practices into the fabric of your operations. To this end, establishing an AI governance committee, issuing a clear AI policy, and setting project approval criteria are foundational steps. These measures not only aid in compliance but also in harnessing the transformative potential of AI responsibly and ethically.


Looking ahead, the landscape of AI regulation will continue to evolve, not just within the EU, but globally. Anticipating and adapting to these changes is essential for maintaining a competitive edge and ensuring that AI is used in a manner that is beneficial and sustainable. This is a journey that requires diligence, foresight, and a commitment to excellence in AI governance.


In closing, I encourage businesses to view the EU AI Act not as a hurdle, but as a catalyst for adopting best practices in AI development and deployment. By taking proactive steps today, we can navigate the complexities of compliance, mitigate risks, and unlock the full potential of AI technologies.
