The new allies of GRC professionals: AI agents
Written by: Felix Baart
Chief financial officers are increasingly confronted by the financial burden of new regulations. Each mandate, however well-intentioned, often triggers unplanned expenditures on system overhauls, process redesigns, and increased staffing – resources pulled directly from potential growth and innovation initiatives. This pressure impacts profitability and can even reduce revenue as companies wrestle with compliance complexities. A driver of these costs lies within the detailed, rule-bound, and context-heavy domain of governance, risk, and compliance (GRC).
But what if this reliance on specific rules, rigorous procedures, and profound contextual understanding is what makes GRC suitable for AI transformation, particularly through emerging AI agents?
These AI agents offer CFOs a powerful lever to evolve beyond traditional passive financial controls, such as periodic audits and SOX compliance, towards ‘active governance’. Imagine real-time compliance frameworks, continuous automated audits, and intelligent anomaly detection seamlessly integrated into daily financial workflows. This paradigm shift, enabled by AI, transforms governance from a quarterly obligation into an ongoing operational asset. Such active governance fortifies risk management, builds trust with stakeholders, and offers a tangible way to mitigate the burdensome costs of compliance. Organizations aiming to harness this potential should consider AI agents as colleagues, a perspective explored in our related piece: “They’re employees, not endpoints: A labor-law playbook for managing digital colleagues”.
“AI agents are positioned to recalibrate GRC, shifting human effort from laborious data collection and routine checks towards higher-value strategic analysis and critical decision-making”
- Felix Baart
GRC and AI agents: a partnership forged in silicon
GRC work hinges on two elements where AI is rapidly becoming indispensable:
Rules, rules, rules: A vast portion of GRC revolves around adhering to predefined rules, be they from regulators, industry standards, or internal policies. AI models can be trained to digest and apply these rules with a consistency and speed that humans, juggling myriad tasks, often can't match.
Context is king: It's rarely just about checking a box. True GRC requires understanding which regulation applies, how a specific risk impacts a business unit, or what evidence truly satisfies an auditor. Modern large language models (LLMs), with their expanding ability to process and comprehend vast datasets, are becoming adept at grasping this crucial context.
Enter AI agents. Don't think of these as generic chatbots. These are specialized AI tools, digital colleagues designed and programmed to execute specific GRC tasks, learning the rules and context from the data they access. Explore this in more detail in our article: Designing the AI-native enterprise: protocols, digital colleagues, and the new stack.
What can GRC digital colleagues actually do?
Instead of hypotheticals, let's envision these AI agents in action:
The evidence retriever: Imagine an agent automatically tapping into company systems (HR platforms, IT logs, cloud services) to gather the necessary proof that controls are operating as intended. Routine checks no longer require manual screenshotting or log file hunts.
The regulatory scout: This agent constantly scans regulatory feeds and news, flagging relevant changes (cutting through the noise), identifying impacted business areas, and highlighting internal policies or controls needing updates. Less time wading through legalese, more time for focused action.
The knowledge navigator: An AI super-librarian for all your policies, regulations, and procedures. Ask it a plain-English question like, "Show me all controls related to data residency for EU customers," and get an instant, accurate answer. No more spelunking through folders.
The first-draft specialist: Staring at a blank page for audit findings or a risk description? A generative AI agent can produce a solid first draft based on available data and standard templates, allowing human experts to focus on refinement, verification, and adding critical nuances.
The proactive insight miner: By analyzing patterns in control test results, incident reports, or risk data over time, an AI agent can flag emerging risk hotspots or recurring compliance weaknesses that might otherwise go unnoticed in the daily operational whirlwind.
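As a deliberately simplified sketch of the proactive insight miner described above, the snippet below flags controls whose recent test failure rate crosses a threshold. The control IDs, data, and thresholds are hypothetical; a production agent would draw on far richer signals than a pass/fail history.

```python
from collections import defaultdict

def flag_risk_hotspots(test_results, window=4, threshold=0.5):
    """Flag controls whose recent failure rate meets or exceeds a threshold.

    test_results: list of (control_id, passed) tuples, oldest first.
    Returns the sorted IDs of controls failing too often in their
    last `window` test results.
    """
    history = defaultdict(list)
    for control_id, passed in test_results:
        history[control_id].append(passed)

    hotspots = []
    for control_id, results in history.items():
        recent = results[-window:]
        failure_rate = recent.count(False) / len(recent)
        if failure_rate >= threshold:
            hotspots.append(control_id)
    return sorted(hotspots)

# Hypothetical control test history: AC-01 failed 3 of its last 4 checks.
results = [
    ("AC-01", True), ("AC-01", False), ("AC-01", False), ("AC-01", False),
    ("LOG-07", True), ("LOG-07", True), ("LOG-07", True), ("LOG-07", False),
]
print(flag_risk_hotspots(results))  # → ['AC-01']
```

A pattern like this is the kind of routine check an agent can run continuously, surfacing only the exceptions for human review.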
Once agents are deployed, measuring how well they perform becomes vital. Explore this in more detail in “When the agent takes over: Measuring enterprise AI by work owned, not math done”.
Transforming audits: from drudgery to strategic insight
AI agents can fundamentally reshape internal and external audit processes:
Smarter planning:
Scope refinement: Agents can analyze past review data, compare it to current plans, automatically spot gaps, and suggest a precise list of materials for analysis.
Blueprint drafting: Get a running start with AI-drafted initial planning documents, outlining scope and objectives for stakeholder discussion. All based on your templates and style.
Streamlined execution:
Rapid policy verification: Instantly scan company policies to confirm coverage of specific regulatory requirements – turning days of manual reading into minutes.
Clerical work annihilation: Transcribe interviews and automatically populate standardized working papers, freeing up valuable human time (especially for junior team members) from tedious data entry.
Enhanced sampling: Analyze evidence samples, flag potential deviations for human review, and potentially enable more comprehensive checks for greater assurance.
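To make the rapid policy verification step concrete, here is a minimal sketch that maps regulatory requirements to a policy text. Plain substring matching stands in for the semantic matching a real agent would delegate to an LLM, and the requirement labels and keywords are illustrative assumptions.

```python
def verify_coverage(requirements, policy_text):
    """Check which requirements a policy text appears to cover.

    requirements: dict mapping a requirement label to the keywords
    that must all appear in the policy for it to count as covered.
    Returns (coverage map, list of uncovered requirements).
    """
    text = policy_text.lower()
    covered = {req: all(kw.lower() in text for kw in kws)
               for req, kws in requirements.items()}
    gaps = [req for req, ok in covered.items() if not ok]
    return covered, gaps

# Hypothetical requirement-to-keyword mapping for illustration only.
requirements = {
    "GDPR Art. 32 - encryption": ["encryption"],
    "GDPR Art. 33 - breach notification": ["breach", "72 hours"],
}
policy = "All personal data is protected with encryption at rest and in transit."
covered, gaps = verify_coverage(requirements, policy)
print(gaps)  # → ['GDPR Art. 33 - breach notification']
```

Even in this toy form, the output shows the value: the agent narrows days of manual reading down to a short list of gaps for a human to investigate.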
Efficient reporting:
Automated debrief drafting: Generate initial report drafts by synthesizing information from underlying work papers and findings.
Tone consistency: Ensure reports align with the company’s established communication style.
Preliminary sanity check: Act as an initial reviewer, assessing the logical coherence of conclusions based on evidence and providing feedback for human consideration.
The upside: more brain, less slog
The potential here is transformative. Automating these tasks promises significant efficiency gains – work gets done faster, freeing up skilled GRC professionals from monotonous chores. Accuracy improves through the consistent application of rules, reducing human error. Critically, as AI drives down the cost of GRC checks, reviews and audits can shift from sampling to continuous evaluation of entire data populations, delivering a far higher level of assurance. By shouldering the data-heavy lifting and routine verifications, AI provides better, faster insights, empowering human experts to dedicate their intellect to strategic thinking, complex problem-solving, and nuanced judgment. And yes, streamlining processes and sidestepping costly compliance failures can lead to substantial savings.
Furthermore, consider the current business landscape where operational costs are rising. This is largely driven by a surge in regulatory demands, the intricate controls that must consequently be performed, and the critical need to secure vast amounts of data. For instance, regulations like the general data protection regulation (GDPR) have mandated stringent data handling processes, requiring investments in data mapping, consent management, robust security measures, and dedicated personnel like data protection officers (DPOs), not to mention the resources needed to manage data subject access requests and conduct impact assessments. According to the Centre for Economic Policy Research, companies have experienced a 2% revenue reduction and 8% profit decrease, on average [1]. PwC reports that 88% of global companies spend more than $1 million on GDPR compliance yearly and 40% exceed $10 million [2].
Similarly, anti-money laundering and counter-terrorist financing (AML/CTF) laws impose substantial burdens, compelling financial institutions and other obligated entities to implement rigorous know your customer (KYC) procedures, continuous transaction monitoring, complex investigations, and extensive reporting, all demanding significant technological and human capital. The annual global cost of AML/CTF compliance has been estimated at $206.1 billion, equivalent to approximately a third of Sweden’s GDP. More recently, the digital operational resilience act (DORA) in the EU is placing new, exacting requirements on the financial sector and its critical ICT providers to manage digital risks, involving comprehensive ICT risk management frameworks, advanced resilience testing, detailed third-party risk management, and meticulous incident reporting.
These are examples illustrating a broader trend of increased compliance overhead related to data governance, security, and operational integrity. In this environment, the timing of intelligence becoming a more accessible, commodity-like asset through LLMs is particularly opportune. AI advancements offer a powerful means to counter the escalating costs businesses face by automating and augmenting many compliance-related tasks. Beyond merely decreasing costs for large corporations, this technological shift also democratizes access to sophisticated compliance and operational tools, potentially lowering barriers to entry and enabling smaller players to compete more effectively in markets previously dominated by those with greater resources to absorb such regulatory burdens.
But let's not get ahead of ourselves: the inevitable caveats
Powerful technology always comes with challenges. Deploying AI agents in the high-stakes world of GRC requires navigating significant hurdles:
The peril of errors: In GRC, a mistake isn't just an oops. It can mean hefty fines, security breaches, legal battles, and shredded reputations. If an AI agent errs, due to flawed data, programming glitches, or novel situations, the fallout can be severe. The "garbage in, garbage out" principle is a stark reality.
The "black box" dilemma: Understanding how an AI reached a specific conclusion can be difficult. This lack of transparency is a major concern for auditors and regulators who demand clear, auditable evidence trails.
The bias amplifier: AI learns from data. If that data reflects historical biases, the AI can perpetuate or even magnify them, leading to unfair flagging of certain transactions or vendors, for example.
Data privacy and security: These agents require access to potentially sensitive company data. Ensuring this access is secure, compliant with privacy laws (like GDPR), and ethically managed is non-negotiable.
Keeping humans accountable: Over-reliance is a clear danger. Blindly accepting AI output without critical human oversight is an invitation for disaster.
“Six critical strategies to navigate AI unpredictability” explores how to address key risks and deploy AI responsibly.
Starting smart: don't try to boil the ocean
Given the high stakes, a full-scale dive into autonomous AI decision-making in GRC is ill-advised. The preferred path is to begin with lower-risk, high-volume tasks where the AI functions more as a highly capable assistant than a final arbiter.
Consider the report generation example: an AI agent analyzes findings and drafts sections of an audit report. Crucially, a human GRC professional then reviews, edits, validates, and ultimately approves that output. The human remains firmly in control, leveraging the AI's speed while providing essential judgment and accountability. Pilot projects targeting specific pain points, like automating evidence collection for select controls or monitoring a narrow set of regulatory changes, are excellent ways to learn, iterate, and build confidence.
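The review-and-approve workflow described above can be sketched as a simple state machine: an AI-generated draft cannot be released until a named human reviewer has validated it. The class and function names below are illustrative, not a reference to any particular tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftFinding:
    """An AI-generated audit finding awaiting human sign-off."""
    text: str
    status: str = "draft"            # "draft" -> "approved"
    reviewer: Optional[str] = None

    def approve(self, reviewer: str, edited_text: Optional[str] = None):
        """A named human reviewer optionally edits, then approves the draft."""
        if edited_text is not None:
            self.text = edited_text
        self.reviewer = reviewer
        self.status = "approved"

def publish(finding: DraftFinding) -> str:
    """Refuse to release any finding that lacks human approval."""
    if finding.status != "approved" or not finding.reviewer:
        raise ValueError("AI draft must be approved by a human before release")
    return finding.text

# Hypothetical flow: the agent drafts, the human refines and approves.
draft = DraftFinding(text="Two controls showed repeated test failures this quarter.")
draft.approve(
    reviewer="jane.doe",
    edited_text="Two access controls failed repeatedly; remediation is underway.",
)
print(publish(draft))
```

The design choice is the point: accountability is enforced in the workflow itself, so the AI's speed never bypasses human judgment.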
The future: GRC professionals, now with AI superpowers
AI agents are unlikely to render GRC professionals obsolete. Instead, like in many other domains, they are positioned to fundamentally alter the nature of GRC work. By taking over the routine, data-intensive tasks, AI will liberate human experts to concentrate on what they do best: strategic analysis, interpreting complex ambiguities, managing stakeholder relations, making critical judgment calls, and, importantly, overseeing the AI systems themselves.
The objective isn't automation for its own sake. It's about forging a GRC function that is more effective, more efficient, and, perhaps, a little less complex to manage. By thoughtfully harnessing the power of AI agents, starting pragmatically, and ensuring humans remain in the driver’s seat, organizations can transform GRC from a perceived cost center into an intelligent, forward-looking strategic asset.
After deploying your first proofs of concept, continued development, management, and maintenance of your AI agents are necessary to harvest lasting value from your investment, as explored in “Beyond deployment: Embracing AI sustainment for lasting value”.