The European Union AI Act went into effect at the beginning of August, and corporate compliance teams need to get moving right away. The law touches on so many issues relevant to corporate compliance — Codes of Conduct, policies and procedures, third-party oversight, regulatory reporting, and more — that even though most enforcement won’t begin until 2026, you will need every moment of the next two years to prepare.
At the highest level, we could describe the AI Act as a law meant to govern how businesses use artificial intelligence, so that EU citizens’ privacy and other rights are protected. Companies will need to develop mechanisms to ensure that they use AI only in permitted ways and with proper precautions applied, so that those rights are upheld.
At the operational level, however, achieving those compliance goals will be a complicated endeavor. The AI Act is long (459 pages), and as we noted above, it imposes a host of obligations on companies — and imposes them on a host of companies, too. Businesses that provide AI systems, use AI in their own operations, or even just use the output of AI systems will all be subject to the law.
Enforcement will unfold on a rolling basis starting in early 2025, although many provisions won’t be subject to enforcement until 2026, and the last not until 2027. Still, non-compliance could be costly: fines range from 1 to 7 percent of global annual turnover, depending on the precise violation.
More than anything else, compliance with the AI Act is simply going to require a lot of work on the part of compliance officers. Let’s take a look at some of the main challenges.
First, Know Your AI Use Cases
The AI Act classifies artificial intelligence systems into several categories of risk; the higher the risk, the more precautions companies must put in place to use that AI system. So the very first step for compliance officers is to identify all the ways your company plans to use AI and understand which risk category applies to each use case.
A few use cases are deemed so dangerous to privacy and consumer protection that the AI Act bans them entirely. For example, none of the following will be allowed under the law starting in February 2025:
- Systems that create some sort of “social behavior score” a government might use to monitor its citizens’ behavior.
- Real-time biometric identification systems, such as facial recognition used in public places without people’s consent.
- “Sentiment analysis” systems that might, say, monitor employees’ behavior and activity in the workplace to draw conclusions about their emotional state.
Other use cases are grouped into “minimal,” “limited,” and “high” risk. For example:
- Spam filters or AI-enabled video games (minimal risk, and can be widely deployed)
- Chatbots or image-generation software (limited risk; people must be informed when they’re interacting with AI or viewing AI-generated images)
- Employment screening software or systems to manage critical infrastructure (high risk, requiring a thorough risk assessment, employee training, and audit logs; a simple way to track these classifications is sketched after this list)
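If it helps to make the inventory exercise concrete, here is a minimal sketch, assuming a simple Python-based register, of how each AI use case might be mapped to a risk tier and the precautions noted above. The class names, owners, and entries are hypothetical illustrations, not terms drawn from the text of the Act.

```python
# Hypothetical sketch of an AI use-case register; names and entries are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright (e.g., social scoring)
    HIGH = "high"               # e.g., employment screening, critical infrastructure
    LIMITED = "limited"         # e.g., chatbots; transparency duties apply
    MINIMAL = "minimal"         # e.g., spam filters; can be widely deployed

@dataclass
class AIUseCase:
    name: str
    owner: str                  # the First Line team responsible for the system
    tier: RiskTier
    precautions: list[str] = field(default_factory=list)

# Entries a compliance team might record after an "AI amnesty" exercise.
register = [
    AIUseCase("Resume screening tool", "HR", RiskTier.HIGH,
              ["risk assessment", "employee training", "audit logs"]),
    AIUseCase("Customer support chatbot", "Customer Service", RiskTier.LIMITED,
              ["disclose AI interaction to users"]),
    AIUseCase("Email spam filter", "IT", RiskTier.MINIMAL),
]

# Flag anything prohibited or high risk for immediate follow-up.
for uc in register:
    if uc.tier in (RiskTier.PROHIBITED, RiskTier.HIGH):
        print(f"Review needed: {uc.name} ({uc.owner}) -> {uc.tier.value}")
```

Even a register this simple makes the later steps easier: each entry tells you which precautions to implement and who owns them.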
Since the compliance measures you’ll need to implement will depend on the AI use cases that apply to your organization, compliance officers will first need to consult with operations teams across the enterprise to understand how the company is using AI.
For example, you may want to declare an “AI amnesty” where anyone in the company can step forward and disclose how they’re already using AI. You should also consult with senior management, the technology team, and leaders of First Line operating teams to understand their future plans for AI, so you can implement necessary compliance precautions in a coordinated fashion.
Second, Use an AI Risk Framework
The specific compliance precautions you’ll need to implement — documentation, security tests, employee training, contract clauses with third parties, updates to the Code of Conduct, disclosures to customers or job applicants, and so on — are many, and will vary depending on the specific use-cases you identified above.
The overarching challenge for compliance officers will be to coordinate all that work and ensure that it gets done by the necessary deadlines. Therefore, compliance officers should use an AI risk management framework and some sort of GRC tool to keep their remediation work on track.
The good news is that several AI risk management frameworks already exist:
- The AI Risk Management Framework published in 2023 by the U.S. National Institute of Standards and Technology (NIST).
- The ISO 42001 standard for management of AI systems, released in 2023 by the International Organization for Standardization.
- A set of voluntary guidelines for how businesses should develop and use AI, published earlier this year by Japanese regulators.
The AI Act requires companies to use a risk management framework to govern their high-risk AI systems, such as AI used to screen employment candidates or to extend credit terms to customers; but it doesn’t specify which framework a company should use.
The challenge for compliance officers here is to ensure that you have the right GRC tools and capabilities in place, so that when you do choose a framework (NIST or ISO, most likely) you’ll be able to implement that framework swiftly and efficiently. Then you can get on with the work of performing gap assessments to understand what remediation steps you need, assigning remediation tasks to specific people, collecting documentation about what has or hasn’t been done by the deadline, and so forth.
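To show how that tracking might look in practice, here is a minimal sketch, again assuming Python, of recording gap-assessment remediation tasks against framework controls and deadlines. The control labels, owners, and dates here are invented for illustration; in reality a GRC tool would hold and report this data.

```python
# Hypothetical sketch of remediation-task tracking; control labels, owners, and dates are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationTask:
    control: str       # the framework control or requirement being addressed
    description: str
    owner: str
    due: date
    done: bool = False

tasks = [
    RemediationTask("ISO 42001 control (illustrative)", "Document AI impact assessments",
                    "Risk team", date(2025, 6, 30)),
    RemediationTask("NIST AI RMF control (illustrative)", "Add AI provisions to the Code of Conduct",
                    "Compliance", date(2025, 3, 31), done=True),
]

# Simple status report: what is still open, and is anything overdue?
today = date.today()
for t in tasks:
    if not t.done:
        status = "OVERDUE" if t.due < today else "open"
        print(f"[{status}] {t.control}: {t.description} (owner: {t.owner}, due {t.due})")
```

The point is less the code than the discipline: every gap gets an owner, a deadline, and a record of whether it was closed.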
Third, Learn From Your GDPR Experience
By now some compliance officers might be thinking, “Wait a minute, this all feels a lot like my GDPR compliance project back in 2018.” That’s quite perceptive — the EU General Data Protection Regulation overlaps with the EU AI Act in multiple ways, and that has implications for your AI compliance program.
For example, several articles of the GDPR address how a company processes personal data:
- Article 6, which says personal data can be processed only on a lawful basis, such as user consent;
- Article 9, which says special categories of personal data can be processed only under narrow exceptions, such as explicit consent;
- Article 22, which restricts decisions based solely on automated processing unless conditions such as explicit consent are met.
The word “processes” could just as easily refer to artificial intelligence as to any other business process run by human employees. So in many cases, careless use of personal data by an AI system could also be a GDPR violation — and it would be a violation right now, even though enforcement of the AI Act is still two years away.
This overlap of the GDPR and the AI Act is a mixed blessing. On one hand, it means you might be able to use one set of policies, procedures, and controls to satisfy both laws. On the other, if you have an AI malfunction that causes a GDPR violation, you might need to report the incident twice: once to your local privacy regulator, and again to your local AI enforcement agency (which doesn’t yet exist, but is coming soon).
So even now, while AI Act compliance is in its infancy, compliance officers should revisit their GDPR compliance programs; the stronger your GDPR program is, the easier your path to AI Act compliance will be. Capabilities such as data mapping, third-party governance, user consent procedures, breach reporting processes — all of them will be just as valuable for AI Act compliance as they are now for GDPR compliance.
Conclusion: Get Cracking
The above points barely begin to address all the compliance challenges companies will face with the EU AI Act — and don’t forget, companies will also need to comply with other AI laws and regulations coming over the horizon in the United States and elsewhere. (We’ll review those other laws and regulations in another post.)
This will be a huge undertaking. Compliance officers must start now by analyzing how the AI Act applies to their companies, establishing the right working relationships with other parts of the enterprise, and ensuring that the right tools and capabilities are in place to handle the work ahead. Some of that work will feel familiar. Other parts of the project will be entirely new.
All of it, however, must be done within two years, or your company faces serious enforcement risk. So start now.
Matt Kelly is an independent compliance consultant and the founder of Radical Compliance, which offers consulting and commentary on corporate compliance, audit, governance, and risk management. Radical Compliance also hosts Matt’s personal blog, where he discusses compliance and governance issues, and the Compliance Jobs Report, covering industry moves and news. Kelly was formerly the editor of Compliance Week from 2006 to 2015. He was recognized as a "Rising Star of Corporate Governance" by the Millstein Center in 2008 and was listed among Ethisphere’s "Most Influential in Business Ethics" in 2011 (no. 91) and 2013 (no. 77). He resides in Boston, Mass.