There is a lot of excitement across all industries when it comes to the promise of artificial intelligence (AI). But, as with any new technology, innovation brings challenges and potential risks as well.
In the case of AI, it’s important to embrace the opportunities while also putting in place governance principles to ensure the technology is used responsibly, especially in compliance and risk programs.
Good AI governance is all about understanding the opportunities and challenges that come with this type of technology, while also creating a solid framework for responsible, secure, efficient AI usage.
Avoiding AI Pitfalls and Hallucinations
While it’s important not to spread fear unjustly, poor data governance can easily lead to AI unintentionally exposing sensitive data.
Compliance technology that leverages AI must handle data with confidentiality and integrity, which has been core to GAN's Integrity Platform from day one.
Another area of concern is AI hallucinations: instances where large language models generate incorrect information. These mistakes can range from inaccurate facts to incorrect quotes, and they can lead to the spread of misinformation or to serious legal and reputational issues.
While these instances should not deter organizations from using this technology, it’s important to be aware of potential risks.
GAN Integrity’s Principles for Good AI Governance
The core element of AI is that it recognizes and uses patterns. In many cases it will produce correct information, but it is not foolproof. Thus, it is important to have AI governance principles in place to help ensure AI is used responsibly and accurately, and that it does not replace human decision-making.
Below are GAN Integrity’s best-practice principles to put in place when implementing AI into your ethics and compliance programs.
Setting an Example for Responsible AI Usage
As we’ve seen, AI can offer a lot of benefits for ethics and compliance teams. If you’re involved in the governance of AI, you must set an example for the rest of the organization as to what responsible AI looks like. This involves integrating AI thoughtfully and ethically into business processes.
Organizations should establish clear guidelines and best practices for AI implementation and usage, tailored to the specific needs and goals of the compliance programs. This includes selecting appropriate AI capabilities for different tasks, providing comprehensive training to all relevant team members, and creating a centralized knowledge base for sharing AI-related insights and experiences.
By demonstrating a commitment to ethical AI use, organizations not only enhance their operational capabilities but also build trust with customers and stakeholders.
Use Case Always Comes First, Then AI Technology
According to a Deloitte Q3 2024 report, two out of three companies surveyed stated that they are increasing their investments in generative AI technology. With this surge in the popularity of AI solutions, it can be difficult to find the right vendor and to implement AI in a way that is responsible and secure while still delivering productivity gains.
One of the pitfalls of AI adoption is being distracted by the excitement around AI without having a specific use case in mind. Many technology vendors will promote the fact that they have AI, but it’s unclear how they’re using it or why. Essentially, you’re putting the solution before the problem.
Use cases for AI in compliance programs could include:
Gathering initial risk data for a third party
Identifying sanctions or other incidents
Vendor research
Personalized third-party questionnaires
For ethics and compliance teams, it’s important to identify and understand your goals for AI and the capabilities you would like to see from vendors that will help you meet those goals.
At GAN Integrity, we always ask: what is the problem you face as an ethics and compliance team? And is the AI technology available today the best way to solve it? In some cases AI might not be the right tool; in others it will be. But understanding the problem first, then finding the technology, is a good best practice.
Augmented Intelligence, Not Artificial Intelligence
Another crucial AI governance principle, especially for compliance teams, is thinking of this technology as augmented intelligence, not artificial intelligence. Think of AI as a tool that helps people speed things up and make work more efficient.
It is another solution in a compliance professional’s toolkit that helps them work better, but it should never be a replacement for the professional.
In compliance especially, where decisions have to be explainable, AI, and black-box AI in particular, should never be the one making the decision. Humans need to be the ultimate deciding factor.
Using AI to Better Cover Your Third-Party Information
It’s no surprise that working without the help of technology is a long, manual process. Analyzing reports, searching the internet for potential third parties, summarizing data, and understanding what’s out there: that’s a lot to focus on.
Third-party assessments and questionnaires are one area that can particularly benefit from AI. This stage of the third-party lifecycle can take 60 to 90 days as third parties fill out surveys and questionnaires, and the best questionnaire is the one you don’t have to send.
Risk intelligence integrations embedded within your compliance management platform use AI to pull risk data quickly and efficiently. This gives teams a baseline of data to understand the risk a potential third party could pose to the organization and to decide whether they want to move forward with the relationship.
AI Can Help You Uncover Hidden Risks
Compliance teams have only so much time and so many people to sift through thousands of data points. With the traditional approach to risk scoring, which concentrates efforts on the high-risk third parties, it’s possible that you’re missing risk in the long tail of your third parties.
Even if there is just a report of an allegation regarding a third party, you want to know about it and understand it. When AI goes out onto the web, it can scrape and find this type of information and summarize it for teams, who can then uncover hidden risks they otherwise would not have found through more traditional methods.
AI doesn’t sleep. It’s a valuable tool, it’s quicker than a human, and in many cases it’s just as consistent. AI helps you cover a wider base of your third parties. But you need to be the one to understand the risk and make the decisions.
GAN’s Integrity Identify Solution
A centralized tool like GAN’s Integrity Identify™ uses AI capabilities for compliance, risk scoring, and automation to bring all of this data and insights together so that you can make the right decisions, informed by the technology.
Speak with an expert to learn more about Integrity Identify and keep an eye out for our future survey on AI Governance.
Our CTO, Neil, brings over 15 years of experience boldly delivering data-driven software products across the oil & gas, logistics, and video surveillance industries.
With an Industrial PhD in Applied Physics, Neil honed his skills in decision-support software development for oil & gas exploration and production. He worked with industry giants like Shell and Maersk Oil and then, in 2017, transitioned to Maersk’s logistics division, where he played a key role in expanding the digital team and revolutionizing the data science discipline.
Before joining GAN Integrity, Neil spent a year at Milestone Systems working on early-stage product development for their cloud-based data intelligence platform. Neil’s expertise in data-driven software development and leadership makes him an invaluable asset to GAN Integrity’s technology strategy and execution. Technological development is one way GAN Integrity differentiates itself and Neil is endlessly innovative in this domain.