
Strengthen Your Compliance Program Now for the AI Risks to Come

In a previous post we explored the EU Artificial Intelligence Act and how compliance officers must start now to address all the compliance challenges the AI Act is going to bring (and bring quite soon, since the law went into effect at the start of August). 

Now we need to widen the lens to all the other AI laws and regulations coming over the horizon — because there are lots of them, and compliance officers will need to juggle the demands of those rules, too. 

Rather than taking a piecemeal approach, responding to one new AI regulation after another, chief compliance officers should step back, consider the larger picture, and then work with the rest of the enterprise to fashion the compliance procedures and capabilities they’ll need.

First let’s look at what some of those other AI rules are.

Recap of Laws and Regulations

The EU AI Act is the most complicated and far-reaching rule for artificial intelligence so far, but it is by no means the only one. 

  • Multiple U.S. states are already moving ahead with their own laws to govern artificial intelligence. For example, Colorado passed a law in May (going into effect in 2026) that requires all “developers and deployers” of artificial intelligence to take numerous steps to avoid AI-based discrimination. California, Delaware, Utah, Virginia, and others are close behind. 
  • Industry-specific regulators are moving ahead with regulations at both the state and federal levels. New York’s Department of Financial Services has adopted a rule for insurers operating there to implement a rigorous set of controls for AI. The U.S. Commerce Department is developing rules that will require testing of powerful AI systems to be sure they have appropriate cybersecurity. 
  • Existing rules that touch on privacy and disclosure are being extended to encompass AI as well. For example, the U.S. Federal Trade Commission has already brought enforcement actions over sloppy use of AI systems that led to violations of consumer privacy; the U.S. Securities and Exchange Commission has brought enforcement actions for “AI-washing,” where companies make misleading statements about AI to investors. And as we noted in our previous post, the EU General Data Protection Regulation has several privacy provisions that could be triggered by an AI system processing personal data without proper consent.

The good news is that all of these laws rest on a few common fundamental principles. For example, they typically require companies to perform a risk assessment on their use of AI and to implement additional protections for “high-risk” systems. They also typically require companies to disclose when they are using AI, to perform regular security testing, and to document the controls used to keep AI risks in check.

So if a company can develop those fundamental compliance capabilities, it’s likely to fare much better at complying with specific AI rules and laws as those measures come online.

Start With Governance and Values

Compliance officers can play a crucial role in these early stages of AI because regulation of the technology is still so new. In the fullness of time, companies will probably face all manner of compliance risk for how they use artificial intelligence in their enterprise — but we’re nowhere near that point yet. Right now, at this fundamental stage, companies need to focus more on the basic principles they will follow as they start integrating AI into their operations. 

Well, articulating basic principles and core values is something corporate compliance officers are quite good at.

For example, your company could start by drafting some sort of “Responsible Use of AI” declaration that emphasizes the core values employees must follow when they start tinkering with AI and weaving it into their daily routines. (Tech giants such as Microsoft, Google, and Meta have all drafted such statements.) 

The values themselves aren’t rocket science, either:

  • Transparency about when someone is interacting with AI
  • Privacy for any personal data an AI system might use
  • Security of the system, so outsiders can’t tamper with its operation
  • Inclusiveness, so that no group (say, a racial minority) suffers discrimination or other poor treatment from AI

(For further inspiration on what a Responsible Use statement might contain, study the statements those tech giants have published, or other examples you might find online.)

Obviously compliance officers should not undertake this project in a vacuum. Ideally, the effort should involve senior management, operations leaders in the First Line of Defense, and even the board, so that they all understand — and support — the company’s approach to AI. 

From Principles to Policies, Tools, and More

Once the company defines its fundamental vision for how it wants to adopt AI, you can start expanding that vision into more precise policies and procedures.

For example, if the company includes privacy as a fundamental value, you can draft specific policies about how personal data is (or is not) fed into AI systems. Those policies should align with the objectives of the EU AI Act and other laws, such as gathering user consent or deleting personal data once the information no longer has a business purpose.

Then you might consider the tools necessary to put those policies into practice. For example, if the objective is to be transparent with consumers about when they interact with an AI system, you might need a policy management tool that makes that requirement clear to all employees across the enterprise. Follow up with periodic audits to confirm that all consumer-facing systems comply with that rule.

Or perhaps you have an outsourced customer service center, and you want to be sure that contractor never uses any undisclosed AI on your behalf (which would violate the EU AI Act and possibly other AI laws and regulations). You might need expanded third-party risk management tools for contract management, due diligence, monitoring, and so forth. 

Conclusion

Ultimately, your AI compliance program will be a complicated endeavor, built to satisfy many rules that haven’t even been written yet. That’s OK. The fundamentals for AI compliance are the same strong compliance capabilities that companies have needed for years: risk assessment, policy management, testing, training, and so forth.

For now, work on those fundamentals as you explore how AI will bring new demands to your program. And remember, compliance officers have the chance to influence the company’s overall ethical stance on AI, and to keep the corporate culture on the right path. Seize that chance before it slips away.


Matt Kelly

Matt Kelly is an independent compliance consultant and the founder of Radical Compliance, which offers consulting and commentary on corporate compliance, audit, governance, and risk management. Radical Compliance also hosts Matt’s personal blog, where he discusses compliance and governance issues, and the Compliance Jobs Report, covering industry moves and news. Kelly was the editor of Compliance Week from 2006 to 2015. He was recognized as a "Rising Star of Corporate Governance" by the Millstein Center in 2008 and was listed among Ethisphere’s "Most Influential in Business Ethics" in 2011 (no. 91) and 2013 (no. 77). He resides in Boston, Mass.
