Governance that enables speed (not “AI police”)
GrowCFO has run three webinars so far this year, each looking at a different aspect of AI. Implementing AI agents has been a common theme. In each webinar, there were lots of questions about both governance and security in an AI environment. These ranged from “Can I trust AI with sensitive business data?” to “If an AI agent takes over this process, how do we ensure it gets it right?” Both are valid questions, and both are among the factors slowing finance teams’ adoption of AI agents.
It’s a particularly important area to address as GrowCFO makes AI-native finance this quarter’s tech theme, and one that will be explored in more depth in The Tech Innovation Report, scheduled for release at the start of March. However, it’s worth taking an early look in this week’s newsletter.
If AI is going to move the needle in finance, governance is essential. But not the kind that shows up late, says “no”, and slows everything down. The job is simpler (and more useful) than most people make it:
Build controls that let you move fast — safely — and repeatedly.
Why governance is suddenly a growth lever for finance
AI-native tools don’t just “analyse”. They take actions:
drafting journals, chasing documents, raising tickets, proposing approvals, triggering workflows.
That’s brilliant… until you realise the control environment was built for:
- humans doing discrete tasks
- systems executing predefined rules
- audit trails that assume “the user clicked the button”
Agents blur those boundaries. So governance needs to evolve from policy to operating system.
The Governance Stack: 7 controls that unlock speed
1) Risk tiers (so everything isn’t treated like a bank transfer)
Start by categorising AI use cases into tiers. For example:
Tier 1 — Assistive (low risk)
Summaries, drafting narratives, extracting invoice fields, suggested insights.
Tier 2 — Advisory (medium risk)
Proposed journals, suggested accruals, exception triage, “recommended approvals”.
Tier 3 — Action-taking (higher risk)
Posting journals, sending supplier/customer communications, executing payments, changing master data.
Rule of thumb:
If it changes a ledger, changes master data, or communicates externally — it’s a higher tier.
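The rule of thumb above can be sketched as a simple classifier. This is a minimal illustration, not a prescribed schema: the use-case attributes and function names are assumptions for the example.

```python
# Sketch of the tiering rule of thumb. Attribute names
# (changes_ledger, changes_master_data, etc.) are illustrative.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    changes_ledger: bool = False
    changes_master_data: bool = False
    communicates_externally: bool = False
    proposes_action: bool = False  # advisory output, e.g. suggested journals

def risk_tier(uc: UseCase) -> int:
    """Return 1 (assistive), 2 (advisory) or 3 (action-taking)."""
    # Higher tier if it changes a ledger, changes master data,
    # or communicates externally.
    if uc.changes_ledger or uc.changes_master_data or uc.communicates_externally:
        return 3
    if uc.proposes_action:
        return 2
    return 1
```

In practice the tier would drive the rest of the stack: logging depth, approval rules and agent permissions.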
2) Human-in-the-loop rules (where accountability must remain)
This is where CFOs and CISOs align quickly.
Define what needs human sign-off, and what can be automated with guardrails.
Examples:
- Always human-approved: payments, vendor creation, bank detail changes, final journal posting
- Conditional human approval: journals above a value threshold, unusual patterns, new counterparties
- Automated: reminders, document chasing, classification suggestions, routing and triage
The aim isn’t to keep humans busy.
It’s to keep humans accountable at the right moments.
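The three categories above can be expressed as a small routing rule. A sketch only: the action names and the £10k journal threshold are assumed values, not recommendations.

```python
# Illustrative human-in-the-loop routing. Action names and the
# threshold are assumptions; tune them to your own policy.
ALWAYS_HUMAN = {"payment", "vendor_creation", "bank_detail_change", "journal_posting"}
AUTOMATED = {"reminder", "document_chase", "classification_suggestion", "triage"}
JOURNAL_THRESHOLD = 10_000  # assumed value threshold for conditional approval

def requires_human(action: str, value: float = 0.0,
                   unusual: bool = False, new_counterparty: bool = False) -> bool:
    """Return True if the action needs human sign-off."""
    if action in ALWAYS_HUMAN:
        return True
    if action in AUTOMATED:
        return False
    # Conditional cases: escalate on value, unusual patterns or new counterparties
    return value > JOURNAL_THRESHOLD or unusual or new_counterparty
```

The point of writing the rules down like this is that they become testable and auditable, rather than living in someone’s head.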
3) Data classification (so the tool knows what it’s allowed to see)
Most AI risk is actually data risk. Finance should align with the organisation’s data classification scheme and make it practical:
- What data can be used for model context?
- What data can leave the boundary (e.g., external SaaS)?
- What data can be retained and for how long?
- What data must be masked/redacted?
If this is done well, the conversation shifts from “Is AI safe?” to deciding which data is safe for which workflows.
4) Logging & traceability (because “trust me” isn’t an audit trail)
AI output without traceability becomes a control problem fast.
Minimum viable logging for finance agents:
- what input data was used
- which rules/thresholds were applied
- what action was proposed/taken
- who approved it (if required)
- what changed in the system of record
- timestamps, user identity, and exception reasons
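As one way of making this concrete, the minimum fields above map naturally onto a single log record per agent action. Field names here are illustrative; adapt them to your own system of record.

```python
# Minimal agent-action log record covering the fields listed above.
# Field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentActionLog:
    agent_id: str                 # identity of the agent (service account)
    input_refs: list              # what input data was used (document/record IDs)
    rules_applied: list           # which rules/thresholds were applied
    action: str                   # what action was proposed/taken
    approved_by: Optional[str]    # who approved it, if required
    record_changes: dict          # what changed in the system of record
    exception_reason: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# asdict(record) gives a plain dict ready to ship to an audit store
```

If every agent action emits a record like this, the “why” can be reconstructed later without relying on anyone’s memory.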
If the “why” can’t be reconstructed later, audits, incidents, or disputes become much harder to manage.
5) Approvals that scale (thresholds, not bottlenecks)
Approvals should be designed like a scalable system:
- thresholds by value/materiality
- exception routing (only the unusual items escalate)
- separation between “prepare” and “approve”
- approval SLAs (so automation doesn’t get stuck)
The goal: more automation + fewer approvals overall, because exceptions are better filtered.
6) Segregation of duties (SoD) for agents
If an agent can:
- create a vendor
- approve a PO
- post a journal
- and trigger a payment
…then a super-user has effectively been created with no natural controls.
Even if humans are “in the loop”, the design matters:
- keep “prepare” and “approve” roles separate
- restrict what the agent can do by tier
- use service accounts deliberately (with permissions that would stand up to audit scrutiny)
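A SoD check for agent service accounts can be as simple as a conflict matrix that is tested before permissions are granted. The role names and conflict pairs below are illustrative assumptions.

```python
# Sketch of a segregation-of-duties check for agent service accounts.
# Role names and the conflict matrix are illustrative assumptions.
CONFLICTING_PAIRS = {
    frozenset({"create_vendor", "trigger_payment"}),
    frozenset({"prepare_journal", "approve_journal"}),
    frozenset({"approve_po", "trigger_payment"}),
}

def sod_violations(granted_roles: set) -> list:
    """Return the conflicting role pairs held by a single agent account."""
    return sorted(
        tuple(sorted(pair))
        for pair in CONFLICTING_PAIRS
        if pair <= granted_roles  # both roles of the pair are granted
    )
```

Running this at provisioning time (and in periodic access reviews) is what makes the service-account permissions “stand up to audit scrutiny”.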
7) Vendor due diligence (fast, repeatable, CFO-grade)
The CISO doesn’t want another bespoke questionnaire. The CFO doesn’t want an unmanageable risk register.
Make it a repeatable, finance-friendly checklist:
- security posture (ISO27001/SOC2, pen tests)
- data boundaries (what’s stored, where, for how long)
- model usage (training on customer data? opt-out?)
- incident response and breach notifications
- access controls (SSO, MFA, RBAC)
- audit logs and exportability
- contractual clauses (sub-processors, termination, data deletion)
This is governance as enablement, not red tape.
The CFO mindset shift
Governance isn’t “risk management” as a separate workstream. In an AI-native finance function, governance is the way to scale:
- scale adoption without losing control
- scale automation without increasing audit risk
- scale speed without increasing exposure
March is “AI-native CFO month” at GrowCFO
If this topic is on your 2026 agenda, keep an eye out for two things in March:
- The GrowCFO Tech Innovation Report (AI-native + agentic solutions for the Office of the CFO)
- The GrowCFO Tech Showcase event, which will demonstrate the tools in action and unpack what “good” looks like in the real world