
AI Governance: Mitigating AI Risk in Corporate Compliance Programs

If you're like most compliance officers, you've probably been swamped with pitches from vendors touting artificial intelligence (AI) as the next big thing for streamlining your compliance and third-party risk management programs. And in many cases, these vendors are not wrong: used thoughtfully and applied in the right way, AI can be an important tool for driving efficiencies and delivering scale for under-resourced teams.

But the AI conversation is much larger than shiny new tools, especially for compliance. There's a whole new conversation (and program) that needs to be on your radar: how to identify, monitor, and manage the risks that AI itself might introduce to your business.

These risks are not lost on policymakers. From the European Union's Artificial Intelligence Act, which entered into force on 1 August 2024, to President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, governments and regulators worldwide are increasingly recognizing the importance of establishing frameworks to manage the business, systemic, and societal risks associated with AI.

And with increased risk comes greater regulatory scrutiny. The pressure is on compliance and risk professionals to integrate AI risk assessment and mitigation strategies into their corporate compliance programs.

AI Risk And The DOJ's Guidance On Evaluation Of Corporate Compliance Programs

The US Department of Justice (DOJ) has made it clear that AI Risk Management is in its crosshairs.

In a recent keynote address to the American Bar Association's 39th National Institute on White Collar Crime, Deputy Attorney General Lisa Monaco stated:

“…we’re applying DOJ tools to new, disruptive technologies — like addressing the rise of AI through our existing sentencing guidelines and corporate enforcement programs.

Where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences — for individual and corporate defendants alike.”

Her call to compliance officers was clear:

“And compliance officers should take note. When our prosecutors assess a company’s compliance program — as they do in all corporate resolutions — they consider how well the program mitigates the company’s most significant risks. And for a growing number of businesses, that now includes the risk of misusing AI.

That’s why, going forward and wherever applicable, our prosecutors will assess a company’s ability to manage AI-related risks as part of its overall compliance efforts.”

Compliance officers can expect to see updates to the DOJ's guidance on Evaluation of Corporate Compliance Programs to include assessment of disruptive technology risks, including risks associated with AI.

How Can Compliance Start To Tackle AI Risks?

That’s a big ask. Now, on top of everything else, already strained compliance teams are being asked to integrate AI Risk Management into their programs. Where do you even start?

Here are several strategies a compliance officer can employ to effectively manage AI risks:

  1. Understand AI technologies and their implications

    Compliance officers need a solid understanding of the AI technologies used within their organization and the specific risks each may pose, including privacy and confidentiality issues, bias, and inaccuracies. If you haven't already, work closely with technical experts within your organization (such as AI engineers and data scientists) to gain insight into how AI systems are developed, trained, and deployed.

  2. Understand where AI adoption and regulatory risk intersect

    Collaborate closely with the organization’s legal and IT departments to ensure AI practices comply with all relevant laws and regulations, including those related to data protection and anti-discrimination.

  3. Develop AI governance frameworks

    Establish clear governance structures and frameworks for AI use that align with legal requirements, ethical standards, and best practices. This includes defining who is responsible for AI oversight, how AI risks are assessed, and how AI use is documented and reported.

  4. Implement an AI policy

    A company can't properly disclose its AI risks if management doesn't know who within the company is actually using AI. Consider adopting a policy requiring any employee who wants to use AI to first submit their plans for management review; a lightweight AI-use register (see the sketch after this list) can help track what has been declared and approved.

  5. Conduct regular AI risk assessments

    Implement ongoing risk assessments to identify and evaluate the risks associated with AI deployment, including data privacy concerns, algorithmic bias, and potential misuse. Use these assessments to inform risk mitigation strategies. You can see examples of what types of questions you might include in an AI risk assessment below.

  6. Monitor regulatory developments

    Stay informed about regulatory developments related to AI and adjust compliance programs as necessary. This includes international, federal, and state regulations that may impact the organization’s use of AI.

  7. Develop incident response plans

    Prepare for potential AI-related incidents by developing response plans that outline how to address issues such as data breaches, biased outcomes, or other unethical uses of AI. This should include mechanisms for reporting and resolving issues.

  8. Ensure transparency and explainability

    Work with key stakeholders, which could include product design and development teams as well as third-party/vendor relationship managers, to enhance the transparency and explainability of the AI systems in use across your organization. This involves being able to articulate how AI models make decisions, which is crucial for regulatory compliance, ethical considerations, and building trust among stakeholders.

  9. Promote AI literacy

    Foster an organizational culture that promotes understanding and responsible use of AI across all levels. This includes training programs for employees on the ethical use of AI and the importance of data privacy and security.

  10. Engage with external stakeholders

    Engage with customers, users, and the public on AI use and governance, taking into account their concerns and feedback. This can help in anticipating potential issues and reinforcing the organization’s commitment to ethical AI use.
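
For teams looking to operationalize items 4 and 5, the register of AI use cases doesn't need to be elaborate. Below is a minimal, illustrative sketch in Python of how an AI-use register entry might be captured so management knows who is using AI, for what purpose, and with what review status. The class and field names are assumptions for illustration, not a prescribed standard.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum


    class ReviewStatus(Enum):
        SUBMITTED = "submitted"        # employee has declared intended AI use
        UNDER_REVIEW = "under_review"  # compliance/management review in progress
        APPROVED = "approved"          # approved for use, possibly with conditions
        REJECTED = "rejected"          # use not permitted


    @dataclass
    class AIUseCase:
        """One entry in a hypothetical AI-use register."""
        owner: str                     # employee or team proposing the AI use
        system_name: str               # vendor tool or internal model
        purpose: str                   # business purpose of the AI system
        data_categories: list[str]     # kinds of data the system will touch
        status: ReviewStatus = ReviewStatus.SUBMITTED
        submitted_on: date = field(default_factory=date.today)
        review_notes: list[str] = field(default_factory=list)


    # Example: an employee declares planned use of a third-party screening tool.
    register: list[AIUseCase] = [
        AIUseCase(
            owner="Third-Party Risk Team",
            system_name="Vendor screening assistant",
            purpose="Summarize adverse media hits on prospective vendors",
            data_categories=["vendor names", "public news articles"],
        )
    ]

Even a register this simple gives compliance a single place to look when asked which parts of the business are using AI and whether that use has been reviewed.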

AI Risk Assessment Questionnaires

An AI risk assessment questionnaire is designed to identify and evaluate the potential risks associated with the deployment and use of artificial intelligence within an organization. It should cover a broad range of areas, including technical, ethical, legal, and operational concerns. Below is an example structure of such a questionnaire.

General Information

  • AI System Description: Provide a brief description of the AI system, including its purpose, capabilities, and the technology it uses.
  • Deployment Stage: Is the AI system under development, in testing, or fully deployed?
  • Key Objectives: What are the primary objectives of using this AI system?

Technical Risks

  • Data Quality and Source: What are the sources of data used by the AI system? How do you assess and ensure the quality and accuracy of this data?
  • Model Complexity: How complex is the AI model? Are there layers or aspects of the model that are not fully understood by your team?
  • Security Measures: What cybersecurity measures are in place to protect the AI system and its data?

Ethical and Societal Risks

  • Bias and Fairness: How do you assess and mitigate biases in your AI system? Have you conducted fairness assessments?
  • Transparency and Explainability: Can the decisions or outputs of the AI system be explained in understandable terms to users?
  • Privacy Concerns: How does the AI system handle personal or sensitive data? Are there measures in place to ensure data privacy and compliance with regulations (e.g., GDPR, CCPA)?

Legal and Compliance Risks

  • Regulatory Compliance: How does the AI system comply with relevant laws and regulations?
  • Intellectual Property: Have you considered the intellectual property implications of your AI system? Are there any potential infringements? Could your use of AI be training public large language models (LLMs) in ways that infringe IP rights or expose sensitive company data?
  • Liability and Accountability: In the event of a failure or harm caused by the AI system, who is held accountable?

Operational Risks

  • Integration: How is the AI system integrated into existing workflows and systems? Are there any operational challenges?
  • Scalability: Can the AI system scale according to business needs without compromising performance or security?
  • Maintenance and Updates: How frequently is the AI system updated or maintained? Who is responsible for this?

Impact Assessment

  • Stakeholder Impact: How does the use of the AI system impact various stakeholders, including customers, employees, and partners?
  • Societal Impact: Are there any broader societal impacts of deploying the AI system that have been identified?
  • Risk Mitigation Strategies: What strategies or measures have been implemented to mitigate identified risks?

Documentation and Monitoring

  • Documentation: Is there comprehensive documentation for the AI system’s development process, including data sources, model training, and decision-making processes?
  • Monitoring and Evaluation: How is the AI system monitored post-deployment? Is there a process for evaluating its performance and impact over time?
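
If your team tracks these assessments in a system rather than a document, the questionnaire above maps naturally onto structured data. The sketch below shows one illustrative way, in Python, to encode the sections and roll responses up into a simple per-section risk view. The 1-to-5 scoring scale, field names, and roll-up logic are assumptions for illustration only, not part of any regulatory standard.

    from dataclasses import dataclass

    # Questionnaire sections mirroring the structure above.
    SECTIONS = {
        "Technical Risks": [
            "Data Quality and Source",
            "Model Complexity",
            "Security Measures",
        ],
        "Ethical and Societal Risks": [
            "Bias and Fairness",
            "Transparency and Explainability",
            "Privacy Concerns",
        ],
        "Legal and Compliance Risks": [
            "Regulatory Compliance",
            "Intellectual Property",
            "Liability and Accountability",
        ],
        "Operational Risks": [
            "Integration",
            "Scalability",
            "Maintenance and Updates",
        ],
    }


    @dataclass
    class Response:
        """An assessor's answer to one question, scored 1 (low risk) to 5 (high risk)."""
        question: str
        score: int
        notes: str = ""


    def section_risk(responses: list[Response]) -> float:
        """Average score for a section; a deliberately simple roll-up."""
        return sum(r.score for r in responses) / len(responses)


    # Example: score the Technical Risks section for one AI system.
    technical = [
        Response("Data Quality and Source", 2, "Internal, curated data"),
        Response("Model Complexity", 4, "Third-party model, limited visibility"),
        Response("Security Measures", 3),
    ]
    print(f"Technical Risks: {section_risk(technical):.1f} / 5")

However you record the answers, the value comes from repeating the assessment on a regular schedule and comparing how each section's risk profile changes as the AI system and the regulatory landscape evolve.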

The integration of artificial intelligence into compliance programs isn't just an option—it's an expectation. Compliance officers are tasked with a formidable challenge: navigating the myriad risks that AI can introduce, in a rapidly changing, and sometimes nebulous, AI landscape. 

The call from global regulators is clear: AI risk management is essential, and compliance frameworks must evolve to include these new dimensions of digital operation. By establishing robust AI governance frameworks, conducting regular risk assessments, and fostering an organizational culture of AI literacy, compliance teams can put the foundations in place for an effective and defensible compliance program.

Those who fail to do so face the long arm of the law. Regulators are making good on their warnings that companies cannot make misleading statements to the public about how they use AI (so-called "AI washing"), and AI enforcement actions have already begun.

Looking ahead, as AI continues to reshape the landscape of risk and compliance, staying informed and adaptable will be key. Compliance officers must not only keep pace with AI developments but also anticipate how changes in the regulatory environment will impact their strategies. 

Contact us to find out more about how you can better integrate AI Risk Management into your Corporate Compliance Program without the need for long, expensive IT projects.
