As businesses accelerate their adoption of artificial intelligence and generative AI, the stakes are high. While these technologies promise unprecedented innovation and efficiency, they also bring significant risks that must be carefully managed.
The decisions made today will shape not only the future of individual companies but also the broader societal landscape. For compliance leaders, the challenge is clear: to ensure that AI advancements align with both ethical standards and regulatory requirements.
To meet this challenge, establishing robust AI governance from the outset is essential. This involves integrating ethics, risk management, and accountability into AI adoption strategies, while also ensuring that oversight mechanisms can keep pace with the rapid expansion and complexity of AI use across the enterprise.
The AI Governance Benchmarking Survey
Between November and December 2024, GAN Integrity partnered with Compliance Week to conduct a comprehensive survey of more than 230 compliance, risk, and IT professionals across various industries. The survey explores:
- Roles and responsibilities for AI governance within compliance teams
- AI governance program maturity
- Visibility into third-party AI use
- The AI regulatory landscape
In this blog post, we will delve into the insights from this survey, exploring how organizations are navigating the complex landscape of AI governance and what lessons can be learned from their experiences.
IT Is Leading AI Governance in Many Organizations - But Should It?
Results indicate that IT is the most frequently assigned leader for AI governance, but this should give organizations pause. Regulators expect AI governance to be an executive-level priority, with strong compliance, risk, and ethical oversight.
- 40% of organizations assign AI governance leadership to IT, while only 25% place it under Compliance and 13% under the Board.
Regulators advocate for cross-functional leadership, which highlights the complexity of AI governance. It's not just about technical implementation but also about ensuring that AI systems are transparent, ethical, and compliant with regulatory standards.
Organizations should consider reevaluating their governance structures to ensure that AI oversight extends beyond IT, incorporating senior-level cross-functional leadership to manage risks effectively.
AI Governance Maturity Is a Work in Progress
Data shows that AI governance maturity varies widely across organizations, with many still in the early stages of adoption.
- 39% of organizations have yet to implement AI governance, while 31% rely on ad hoc approaches.
- Only 8% report having a mature, structured AI governance program.
As regulatory scrutiny increases, compliance teams must take the lead in transitioning from fragmented, reactive governance to well-defined, proactive strategies. This involves ensuring that AI use is ethical, transparent, and legally compliant.
AI Adoption Outpaces Governance Efforts
The pace at which organizations are adopting AI, including AI-enabled tools to enhance efficiency, varies significantly - and it reveals a critical challenge in AI governance.
- 23% of organizations say AI adoption is moving faster than governance efforts.
- 5% claim not to use AI - but may be underestimating their exposure, as AI is often embedded in third-party tools.
The gap between AI adoption and governance oversight presents a major risk, increasing exposure to compliance violations, ethical concerns, and security vulnerabilities. Without proper oversight, these organizations risk unknowingly operating in an AI-driven environment without the necessary governance structures in place.
Investment in AI Governance Paints a Fuzzy Picture
Data shows that investment levels in AI governance are inconsistent.
- 29% of organizations have increased AI governance investment in the past year, but 38% report no change and 30% have no funding at all.
While investment is trending upward, many organizations are still in the early stages of maturity and require continued investment and senior management support. Compliance teams must ensure that funding is allocated effectively toward structured policies, risk assessments, third-party governance, and employee training.
Third Parties Are Still a Blind Spot in AI Risk Management
Data from GAN Integrity and Compliance Week’s survey indicates that most organizations lack full visibility into their third parties’ use of AI.
- Only 6% of organizations have full visibility into how their third-party vendors use AI.
This lack of clarity presents a significant risk, as AI-driven processes and decision-making are increasingly embedded in third-party products and services.
Without a clear understanding of where and how AI is used, organizations face compliance, security, and ethical risks within their third-party risk management programs, including exposure to data privacy violations, intellectual property risks, and regulatory non-compliance. To reduce this exposure, organizations must strengthen AI-specific due diligence.
Final Takeaways
While many organizations are still in the early stages of AI governance, the growing gap between AI adoption and governance underscores the need for proactive, structured strategies. Compliance leaders must take the lead in bridging this gap, ensuring that AI systems are transparent, ethical, and aligned with regulatory expectations.
By prioritizing investment, employee training, and third-party risk management, organizations can navigate the complexities of AI governance and thrive in an AI-driven future. The decisions made today will define not only an organization’s risk posture but also its reputation and ability to innovate responsibly.
Interested in learning more? Check out our full AI Governance Survey and Guide!
Hannah Tichansky is the Content and Social Media Manager at GAN Integrity. She has more than 13 years of writing and marketing experience, including 8 years specializing in the risk management, supply chain, and ESG industries. She holds an MA from Monmouth University and a Certificate in Product Marketing from Cornell University.