
AI Adoption
AI in Financial Controls: Managing Risk in Probabilistic Models
Executive Summary
ECLC members convened to examine one of the most consequential questions facing finance and transformation leaders today: as AI models move from assistive tools to autonomous agents embedded in financial workflows, how do organizations govern, audit, and trust systems that operate in probabilities rather than absolutes?
The session surfaced a core tension between the binary nature of traditional financial controls, where a control either passes or fails, and the probabilistic reality of AI, where confidence intervals replace certainty. The discussion explored how organizations can navigate this shift without sacrificing auditability, regulatory compliance, or the trust of audit committees and risk officers.
Key themes included risk-tiered automation, the irreducible need for human judgment at critical decision points, explainability as a governance requirement, and the importance of integrating AI oversight into existing control frameworks rather than treating it as a parallel track.
This roundtable was held on February 13, 2026.
Roundtable Participants
Led by Puneet Thakkar, Google - Finance Process and Systems Transformation
Aivy Dowds, Kenvue - Head of Transformation - Global Operations
Ana Coronel, Independent - Transformation Executive
Andrew Spector, Paramount - Senior Director, Change Management
Brian Hricik, Sherwin-Williams - Change Management Lead
Carol Shields, Community Health Network - Former Director of Performance Excellence
David Stein, VML - SVP, Performance Marketing Operations
Ganesh Harke, Citi - Vice President
Heather Anthony, IMAX - SVP, Head of Enterprise Transformation
Isabelle Guembou, BAMSI - Treasurer, Board of Directors
Jackie Cazar, Moody's - SVP Process Excellence
Marjorie Etter, Meta - Global Training, Knowledge & Change Management Leader
Patty Toth, The Riverside Company - Head of Value Creation and Transformation
Prakash Reddy, Atlassian - Head of Data Engineering & AI Enablement
Sharon Daniels, BT Group - Transformation Change Management & Internal Communications
Sundeep Thusoo, REDE Consulting - Vice President - AI & Business Reinvention
Tusar Dash, Synnergie - VP - Strategy & Transformation
Wes Herzik, JP Morgan - Executive Director, AI Orchestration & Change Management
From Binary Controls to Probabilistic Risk Management
Traditional SOX controls operate on a pass/fail basis. A control either works or it does not. AI introduces a fundamentally different paradigm: models produce outputs with confidence intervals, not certainties. This shift challenges the assumption that financial controls can, or should, ever be 100% absolute.
Roundtable participants challenged that assumption directly. As one participant observed, current SOX procedures are also created by humans and carry their own failure rates. The difference is familiarity, not actual certainty. The relevant question is not whether AI is perfect, but whether it is sufficiently documented, tested, and understood for auditors to trust the output.
“The 100% absolute certainty we assume from current controls isn’t necessarily a guarantee either—humans aren’t perfect. You should be able to have the same review experience with AI that you have today with your auditors.”
— Heather Anthony, SVP, Head of Enterprise Transformation, IMAX
The path forward, participants agreed, lies in documentation and validation: presenting audit committees with a clear record of how models were built, what data trained them, what testing was performed, and how confident the organization was in the model before deploying it. Transparency of process can substitute for the false certainty of legacy procedures.
Risk Segmentation: Where AI Belongs and Where It Doesn’t
Rather than treating AI adoption in financial controls as an all-or-nothing proposition, participants coalesced around a risk-segmentation framework: the type of transaction, not just the accuracy of the model, should determine the level of automation permitted.
The practical formulation offered during the session drew a clear line between high-volume, low-impact processes—where probabilistic AI adds genuine value—and high-dollar, high-risk decisions that warrant human review regardless of model accuracy.
Examples cited included:
• Three-way invoice matching for routine supplier payments, where AI coverage at scale can dramatically reduce leakage and error rates
• Travel and expense processing, where probabilistic approval of the vast majority of submissions is appropriate
• Treasury transfers, large capital expenditure approvals, and general ledger entries at period close, where deterministic controls and human sign-off remain essential
The headline insight reframes how leaders should think about coverage: 99% confidence across 100% of transactions is meaningfully better than 100% certainty applied to a 5% sample. Breadth of coverage, not the illusion of perfection, is the real value proposition of AI in financial controls.
“The strategy isn’t AI for everything. It’s AI for the noise, humans for the signal. We have to teach our auditors that an AI model providing 99% confidence across 100% of the transaction volume provides infinitely better risk mitigation than a human with 100% confidence checking a 5% random sample.”
— Puneet Thakkar, Finance Transformation Leader, Google
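The arithmetic behind this reframing is straightforward. A minimal sketch, using purely illustrative figures (volume, error rate, and detection rates are assumptions, not data from the session), shows why full-population probabilistic review dominates a small perfect sample:

```python
# Illustrative arithmetic for the coverage argument above.
# All figures are hypothetical assumptions, not data from the session.

def expected_errors_caught(volume, error_rate, coverage, detection_rate):
    """Expected number of erroneous transactions flagged."""
    return volume * error_rate * coverage * detection_rate

VOLUME = 100_000    # transactions per period (assumed)
ERROR_RATE = 0.01   # 1% of transactions contain an error (assumed)

# Human review: perfect detection, but only a 5% random sample.
human = expected_errors_caught(VOLUME, ERROR_RATE, coverage=0.05, detection_rate=1.0)

# AI review: 99% detection confidence, applied to every transaction.
ai = expected_errors_caught(VOLUME, ERROR_RATE, coverage=1.0, detection_rate=0.99)

print(f"Errors caught by 5% human sample: {human:.0f}")   # 50
print(f"Errors caught by 99%-confident AI: {ai:.0f}")     # 990
```

Under these assumptions, the sampled human review leaves 95% of errors unexamined, while the probabilistic model misses only the 1% it fails to detect.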
The Human Firewall: Where Automation Must Stop
A recurring question throughout the session was where to draw a hard line—processes that must never be fully automated, regardless of model accuracy or performance.
The consensus was that any decision with significant regulatory, reputational, or financial consequence requires a human decision point. Participants framed this not as distrust of AI, but as a design principle: autonomous systems should be built to pause and escalate, not to proceed unilaterally when stakes are high.
“Human oversight is mandatory for critical decisions. Agentic AI in financial controls must function as a hybrid socio-technical system, leveraging probabilistic automation for scale while enforcing human-in-the-loop oversight for high-stakes processes. The system must autonomously halt, explain its logic, and require explicit human approval before executing any financially or regulatorily significant action.”
— Ganesh Harke, Vice President, Citi
The practical design pattern that emerged pairs full automation with a structured exception model: automate approximately 90% of routine processing, while routing the highest-risk 10%—new vendors, unusually large transactions, anomalous patterns—to human review queues. This hybrid model was described as a “human firewall”: not a barrier to progress, but a deliberate checkpoint that preserves accountability where it matters most.
The example of fraudulent invoices paid at scale by large technology companies illustrated the stakes: without sufficient human review at the exception layer, even sophisticated organizations absorb preventable losses in the millions.
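The routing logic at the heart of the human-firewall pattern can be sketched in a few lines. This is a simplified illustration of the design principle, not a production control; the thresholds, field names, and escalation criteria are assumptions chosen to mirror the exception categories named above (new vendors, unusually large transactions, anomalous patterns):

```python
# A minimal sketch of the risk-tiered routing pattern described above.
# Thresholds, field names, and tiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    vendor_is_new: bool
    anomaly_score: float  # 0.0-1.0 score from an upstream model (assumed)

HIGH_VALUE_THRESHOLD = 50_000.0  # assumed escalation threshold
ANOMALY_THRESHOLD = 0.8          # assumed anomaly cutoff

def route(txn: Transaction) -> str:
    """Return 'auto_approve' or 'human_review' for a transaction."""
    if txn.amount >= HIGH_VALUE_THRESHOLD:
        return "human_review"    # high-dollar decisions always escalate
    if txn.vendor_is_new:
        return "human_review"    # new vendors go to the exception queue
    if txn.anomaly_score >= ANOMALY_THRESHOLD:
        return "human_review"    # anomalous patterns escalate
    return "auto_approve"        # routine volume is automated
```

The design choice worth noting is that escalation rules are deterministic and auditable even when the anomaly score feeding them is probabilistic: the model contributes a signal, but the decision to pause rests on explicit, reviewable policy.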
Explainability and the Black Box Problem
For AI to be auditable, it must be explainable. This requirement—sometimes formalized as Explainable AI (XAI)—emerged as a non-negotiable governance principle throughout the session.
Auditors need to be able to trace how an AI model arrived at a conclusion, reproduce that derivation, and understand what data informed it. When a model cannot provide that chain of reasoning, it creates the same problem as an employee who cannot explain how they reached a financial conclusion: a breakdown of accountability.
Participants noted that this is also increasingly a regulatory requirement. The EU AI Act establishes transparency and traceability obligations for high-risk AI applications, and leading organizations are already mapping their AI use cases against that framework.
Treating explainability as a governance requirement—not a nice-to-have—shapes both model selection and how agentic systems are designed to log and report their reasoning.
Practical measures discussed included maintaining change management logs for AI models (analogous to change logs required for ERP systems), documenting training data and version history, and designing agents to surface their reasoning alongside their outputs.
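The measures above amount to an append-only decision record per agent output. A minimal sketch of what such a record might contain, with field names that are assumptions rather than any standard schema, could look like this:

```python
# A sketch of the decision record discussed above: each agent output is
# logged with its inputs, model version, and reasoning so an auditor can
# trace and reproduce the conclusion. Field names are assumptions.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str
    model_version: str      # ties back to the model change management log
    training_data_ref: str  # pointer to documented training data/version
    inputs: dict            # the data the model actually saw
    output: str             # the conclusion the model reached
    confidence: float       # the model's stated confidence
    reasoning: str          # human-readable chain of reasoning
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_log(self) -> str:
        """Serialize as an append-only JSON log line."""
        return json.dumps(asdict(self))
```

Logging confidence and reasoning alongside the output is what lets an auditor ask the same questions of the agent they would ask of an employee: what did you see, what did you conclude, and why.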
Integrating AI Governance into Existing Control Frameworks
One of the session’s clearest takeaways was the risk of treating AI governance as a parallel, standalone compliance function. When AI oversight operates separately from the internal control environment, blind spots accumulate. When it is integrated, accountability is clearer and controls are stronger.
Participants described a federated model as most effective: a centralized compliance and risk function sets policies and standards, while individual teams that deploy AI use cases assume ownership of compliance within those guardrails. This distributes accountability without fragmenting governance.
At scale, this requires infrastructure: a central repository of AI agents and use cases, intake processes for production data access, and mechanisms for identifying duplication or shadow AI—the AI-era equivalent of shadow IT. Without these systems, organizations are likely to accumulate thousands of redundant, ungoverned agents that create data exposure and compliance risk.
“If AI governance is treated as a siloed function separate from internal audit, you automatically create regulatory blind spots. AI must be integrated directly into existing IT General Controls—mandating strict change management logs, training data documentation, and the exact same rigorous oversight we apply to core ERP systems.”
— Puneet Thakkar, Finance Transformation Leader, Google
The change management dimension of this integration is significant. Employees whose roles are being restructured around AI agents need not just training, but genuine leadership commitment and a clear articulation of how their expertise remains central.
Building a “bilingual workforce”—fluent in both their domain and the AI tools augmenting it—requires sustained investment, not a one-time rollout.
Key Takeaways
The shift from binary to probabilistic controls requires a deliberate governance architecture, not just better models.
• Risk-tiered automation is the operating principle. Match the level of human oversight to the risk profile of the transaction, not to a blanket policy about AI.
• Explainability is a governance requirement. If a model cannot account for its reasoning, it cannot be audited. This shapes which tools are selected and how they are deployed.
• Human-in-the-loop is a design feature, not a limitation. Build agentic systems to pause and escalate on high-risk decisions rather than proceeding autonomously.
• Integrate AI governance into existing control frameworks. A separate AI compliance track creates blind spots; embedding AI oversight into IT general controls and internal audit preserves accountability.
• Documentation is the bridge to auditor comfort. Confidence in AI-assisted controls comes from transparency of process—how models were built, tested, and validated—not from claiming equivalence to legacy procedures.
• Change leadership is essential. The human dimension of this transformation—roles changing, expertise being redefined, trust needing to be built—requires the same deliberate attention as the technical implementation.
AI in financial controls is not a question of whether to automate, but how to automate responsibly. The organizations that get this right will combine the breadth of AI coverage with the depth of human judgment—and build the governance infrastructure to make both auditable.
The Executive Council for Leading Change
The Executive Council for Leading Change (ECLC) is a global organization that brings executives together to redefine the landscape of organizational change and transformation. Our council aims to advance strategic leadership expertise in the realm of corporate change by connecting visionary leaders. It's a place where leaders responsible for significant change initiatives can collaborate, plan, and create practical solutions for intricate challenges in leading large organizations through major shifts.
In a world where change is constant, we recognize its crucial role in driving business success. ECLC’s mission is to create a community where leaders can excel in guiding their organizations through these dynamic times.


