Stop pitching “AI”. Start pitching outcomes: the AI business case that gets funded and delivers real change
Most AI initiatives don’t fail to gain approval because the tech doesn’t work. They fail because the business case is written like a vision deck, not like a funding request. Further down the road, most implementations don’t fail because the tech doesn’t work either; they fail because they don’t deliver the promised benefit.
GrowCFO has spent many years working with finance leaders on building business cases for major business change projects, particularly focusing on benefits realisation planning. The lessons for today’s technology are no different from those of previous transformation waves. If finance teams want budget approval, they need to make the case in the language every executive team already understands:
cash, risk, capacity, and measurable outcomes, along with clear plans to deliver those outcomes.
The CFO funding logic (the bit that changes approvals)
When boards and CEOs say “show me ROI”, they rarely mean “estimate productivity gains”.
They mean:
- Will this reduce risk or errors we pay for today?
- Will this improve cash and working capital?
- Will this create real capacity we can redeploy (or avoid hiring)?
- Will it protect or increase margin?
- Can we prove it quickly, and scale responsibly?
So an AI business case needs to be built like a finance investment case, not an innovation narrative.
The 4 ROI levers that consistently get funded
Use these as the “approved language” framework. Most successful cases use 2+ levers, not just one.
1) Time saved → capacity created (or hiring avoided)
Not “hours saved”. FTE capacity created.
- Example: touch time per invoice, reconciliation hours, forecast cycle time.
- Translate to: “We create 0.6 FTE capacity in AP and redeploy it to supplier disputes / controls / analytics.”
Hours saved rarely result in headcount reductions. A 0.6 FTE improvement doesn’t mean one part-time role can be removed from the AP team. The 0.6 FTE is usually spread across the entire team, making it difficult to restructure and convert fractional gains into a reduction in the wage bill.
Hours saved will create additional capacity within the team, but it’s important to deliberately identify what will be done with this extra capacity. Finance teams should try to quantify the benefit that this capacity will bring, for example what fewer supplier disputes might mean for the business.
Rule of thumb:
Time savings only land if the redeployment (or headcount avoidance) is clearly defined.
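As an illustration of the arithmetic behind the 0.6 FTE example, here is a minimal sketch. The 140 productive hours per FTE per month is an assumption; use your own team's figure.

```python
# Assumed productive hours per FTE per month (illustrative, not a standard).
HOURS_PER_FTE_MONTH = 140

def fte_capacity(hours_saved_per_month: float) -> float:
    """Express monthly hours saved as fractional FTE capacity created."""
    return hours_saved_per_month / HOURS_PER_FTE_MONTH

# e.g. 84 hours/month saved across the AP team:
capacity = fte_capacity(84)  # 0.6 FTE capacity created
```

The output is the number to pair with an explicit redeployment plan: 0.6 FTE means nothing in the business case until the sentence "and we will redeploy it to X" follows it.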
2) Error reduction → fewer losses, rework, and audit pain
Errors and rework are often a significant hidden cost, taking up a disproportionate amount of the team’s time. A simple activity analysis of what people are doing can quickly highlight the opportunity. Each team member can estimate the time spent correcting errors and rework over a typical month. If the team spends 20% of its time on rework, it becomes easy to put a cost on this.
This is often the quickest ROI lever to validate.
- Example: quantify historical duplicate payments, mis-postings, revenue leakage, billing errors, recurring journal mistakes, and compliance breaches.
- Estimate the time these consumed.
- Translate to: “Reduce rework by X% and audit adjustments by Y; reduce exception backlog.”
Bonus: error reduction usually also increases speed (close and collections).
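The "20% of time on rework" estimate above converts to money in one line. A sketch, with a hypothetical loaded cost per FTE (substitute your own fully loaded figure):

```python
def annual_rework_cost(team_size: int, rework_share: float,
                       loaded_cost_per_fte: float) -> float:
    """Annual cost of the team time spent correcting errors and rework."""
    return team_size * rework_share * loaded_cost_per_fte

# e.g. a 10-person team spending 20% of its time on rework,
# at an assumed £50,000 loaded cost per FTE:
cost = annual_rework_cost(10, 0.20, 50_000)  # £100,000 per year
```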
3) Working capital → cash impact (the CFO favourite)
Will the AI investment have a direct impact on cash flow? If so, this is where ROI becomes undeniable.
- Example: collections prioritisation, dispute turnaround, invoice accuracy, billing cadence, approvals speed.
- Translate to: “Improve DSO by 2–5 days” or “reduce overdues >60 days by X%”.
Even a small DSO improvement often outweighs a large “hours saved” claim in approval conversations.
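The DSO-to-cash conversion is the standard one: daily revenue times days of improvement. A sketch with hypothetical figures:

```python
def cash_released(annual_revenue: float, dso_days_improved: float) -> float:
    """One-off cash released by a DSO improvement: daily revenue x days."""
    return (annual_revenue / 365) * dso_days_improved

# e.g. £50m annual revenue and a 3-day DSO improvement:
cash = cash_released(50_000_000, 3)  # roughly £411,000 of cash released
```

This is why a 2-5 day DSO claim often beats an hours-saved claim: it is a cash number the CFO can verify against the balance sheet.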
4) Margin lift → pricing, leakage, renewals, and procurement
The best “growth” cases don’t feel like growth bets — they feel like margin protection.
- Example: contract-to-cash accuracy, renewals risk, price compliance, procurement leakage.
- Translate to: “Reduce discount leakage; prevent missed renewals; improve price realisation.”
ROI frameworks by use-case
These principles can be applied to AI investment across the finance technology stack, and the benefits can often be proven reasonably quickly. Here are a few examples:
A) Finance close (recs + flux + narrative automation)
Baseline: close days, hours per close, number of late adjustments, number of reconciliation exceptions
ROI: faster close + fewer audit adjustments + reduced rework
Proof in 30–60 days: run it for 1–2 entities or one close cycle; measure exceptions and narrative production time.
B) AP automation (invoice capture + coding + matching + exceptions)
Baseline: touch time per invoice, exception rate, cost per invoice, duplicate or incorrect payments
ROI: capacity + error reduction
Proof in 30–90 days: pilot on one supplier group or one entity; compare exception rate and touch time.
C) AR / collections (prioritisation + dispute triage + next best action)
Baseline: DSO, CEI, % current, dispute turnaround time, collector capacity
ROI: working capital + capacity
Proof in 30–60 days: run AI prioritisation for a segment; compare DSO trend, dispute cycle time, and cash collected per collector hour.
D) FP&A forecasting (drivers + scenarios + narrative packs)
Baseline: forecast cycle time, number of iterations, forecast accuracy, time to produce narrative packs, ability to generate multiple scenarios.
ROI: capacity + decision speed + fewer surprises
Proof in 60–90 days: test on one business line; measure cycle time reduction, forecast accuracy, and scenario coverage.
E) Procurement / spend controls (intake + policy guidance + approvals)
Baseline: approval cycle time, % maverick spend, compliance to policy, renewal leakage
ROI: margin + risk control
Proof in 30–90 days: implement intake for one category; measure cycle time and leakage reduction.
How to baseline (without turning it into a 6-month analytics project)
You only need three numbers to start:
- Volume (invoices per month, collections cases, journals, reconciliation items)
- Touch time (minutes per item or hours per cycle)
- Exception cost (rework rate, error rate, leakage estimate)
These numbers don’t need to be 100% accurate. The aim is to create reasonable estimates, not scientific measurements.
Then add one finance KPI tied to the chosen ROI lever:
- Close days / audit adjustments (close)
- Cost per invoice / duplicate payments (AP)
- DSO / overdues / dispute time (AR)
- Forecast cycle time / accuracy (FP&A)
- Maverick spend / renewal leakage (procurement)
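The three starter numbers combine into a monthly baseline with simple arithmetic. A sketch using hypothetical AP figures; the point is a reasonable estimate, not precision:

```python
def monthly_baseline(volume: int, touch_minutes: float,
                     exception_rate: float, rework_minutes: float) -> dict:
    """Rough monthly hours baseline from volume, touch time, and exception cost."""
    core_hours = volume * touch_minutes / 60
    rework_hours = volume * exception_rate * rework_minutes / 60
    return {
        "core_hours": core_hours,
        "rework_hours": rework_hours,
        "total_hours": core_hours + rework_hours,
    }

# e.g. 4,000 invoices/month, 6 min touch time each,
# 8% exception rate costing 20 extra minutes per exception:
baseline = monthly_baseline(4000, 6, 0.08, 20)
```

Run this once before the pilot and once during it; the before-vs-after delta is the pilot's headline number.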
How to prove impact in 30–90 days (the “CFO pilot that gets renewed”)
Keep the pilot as simple as possible. The scope should be clearly defined with a clear start and finish. Most pilots fail because they’re too broad.
The proof plan should include:
- One workflow
- One segment (entity, region, supplier group, customer tier)
- Clear “before vs after” measures
- Controls baked in (human-in-the-loop, logging, approvals)
A simple 90-day proof plan
- Weeks 1–2: baseline + access + success metrics
- Weeks 3–6: configure + integrate + controls + UAT
- Weeks 7–10: live pilot + weekly measurement
- Weeks 11–13: ROI summary + scale plan + governance pack
Decision gate at day 30: “Is this trending to ROI?”
Decision gate at day 90: “Scale, modify, or stop.”
What execs will challenge (so answer it upfront)
Every business case will be challenged. It helps to anticipate the questions likely to be asked. This usually means understanding the audience and their priorities.
If these challenges are addressed early in the business case, approvals become much easier:
- What will we stop doing? (redeploy capacity explicitly)
- Where’s the evidence trail? (logging + auditability)
- What’s the failure mode? (exceptions routing, manual override)
- What data is exposed? (classification + access)
- What’s the total cost to run this? (licenses + integration + change)
- How does it scale across entities and processes?