FinOps Maturity Assessment: A Complete Framework
How to assess your FinOps program maturity — covering cost visibility, budget governance, allocation accuracy, tooling completeness, and accountability structure using the CFCMM.
Most companies think their FinOps program is more mature than it is. The gap between “we have Kubecost” and “our FinOps program is functioning correctly” is where millions in recoverable waste live.
This post explains how to assess FinOps maturity systematically — using the same framework we use in every finops.qa engagement.
Why Maturity Assessments Fail
The FinOps Foundation’s maturity model (Crawl/Walk/Run) is widely referenced and broadly misapplied. Most teams self-assess at Walk when they should be at Crawl. The problem: the model is subjective and qualitative. There are no pass/fail criteria.
The Cloud Financial Controls Maturity Model (CFCMM) was designed to fix this. It evaluates 42 objective controls across six domains, producing a score rather than a label.
The Six Domains
1. Cost Visibility (8 controls)
Can you see your cloud costs accurately?
Key tests: Does your cost dashboard reconcile with your provider invoice within ±2%? Does every team have a cost view? Can you attribute costs to product lines and customers?
- Level 2: Dashboard exists; no accuracy validation.
- Level 3: Dashboard validated against invoice; reconciliation gap <5%.
- Level 4: Real-time dashboard; reconciliation gap <2%; customer-level attribution live.
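The invoice reconciliation test is simple enough to sketch in a few lines of Python. The dollar figures below are hypothetical, purely for illustration:

```python
# Minimal sketch: does the cost dashboard reconcile with the provider
# invoice within a tolerance? Figures are hypothetical.

def reconciliation_gap(dashboard_total: float, invoice_total: float) -> float:
    """Relative gap between the dashboard total and the invoice total."""
    return abs(dashboard_total - invoice_total) / invoice_total

def reconciles(dashboard_total: float, invoice_total: float,
               tolerance: float = 0.02) -> bool:
    """True if the dashboard is within the tolerance (default: the 2% bar)."""
    return reconciliation_gap(dashboard_total, invoice_total) <= tolerance

# A $101,500 dashboard figure against a $100,000 invoice is a 1.5% gap
print(reconciles(101_500, 100_000))   # passes the 2% bar
print(reconciles(106_000, 100_000))   # a 6% gap fails even the 5% bar
```

Run the same check per cost centre, not just on the grand total: offsetting errors can make an inaccurate dashboard reconcile in aggregate.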
2. Budget Governance (7 controls)
Do your budget alerts actually work?
This is the domain where most teams are surprised. They have alerts configured — but when we test them against historical data, 40–60% either don’t fire, fire to the wrong owner, or fire too late to act.
Key tests: Alert threshold accuracy, routing to current owner, latency (time from overrun to notification), anomaly detection signal-to-noise ratio.
- Level 3: Alerts configured and tested; owner list validated in the last 90 days; latency <1 hour.
- Level 4: Alert thresholds adjusted based on historical data; P95 latency <15 min; anomaly detection false positive rate <20%.
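The latency and signal-to-noise tests can be sketched like this. The latency series and alert records are hypothetical:

```python
# Minimal sketch: score alert behaviour replayed against historical data.
from statistics import quantiles

def p95_latency_minutes(latencies_min: list[float]) -> float:
    """P95 of overrun-to-notification latency, in minutes."""
    return quantiles(latencies_min, n=20)[-1]  # last cut point = 95th pctile

def false_positive_rate(alerts_fired: list[str],
                        true_overruns: set[str]) -> float:
    """Share of fired alerts that did not correspond to a real overrun."""
    false = [a for a in alerts_fired if a not in true_overruns]
    return len(false) / len(alerts_fired)

# Hypothetical replay: latencies spread from 1 to 100 minutes
latencies = list(range(1, 101))
print(p95_latency_minutes(latencies))   # just under 96 min: fails the 15 min bar

# Five alerts fired, four matched real overruns
print(false_positive_rate(["a", "b", "c", "d", "e"], {"a", "b", "c", "d"}))
```

Measuring P95 rather than the average matters here: a handful of alerts that arrive hours late is exactly the failure mode an average hides.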
3. Allocation Accuracy (9 controls)
Does your cost attribution actually match your invoices?
Allocation accuracy is the most commonly overstated dimension. Resource-count tag coverage of 80% typically represents 55–70% spend-weighted coverage — meaning 30–45% of spend has no valid cost owner.
Key tests: Spend-weighted tagging coverage, allocation model reconciliation against invoice, shared resource attribution methodology.
- Level 3: Spend-weighted coverage >80%; allocation model reconciliation <5% variance.
- Level 4: Spend-weighted coverage >95%; allocation model reconciliation <2% variance; chargeback live.
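The gap between resource-count and spend-weighted coverage is easy to demonstrate. The resource list below is hypothetical and deliberately extreme: four cheap tagged resources and one expensive untagged one:

```python
# Minimal sketch: resource-count vs spend-weighted tag coverage.
# Each entry is (monthly_spend_usd, is_tagged); values are hypothetical.
resources = [
    (120.0, True), (80.0, True), (50.0, True), (40.0, True),  # tagged, cheap
    (9_500.0, False),                                          # untagged, expensive
]

count_coverage = sum(tagged for _, tagged in resources) / len(resources)
total_spend = sum(spend for spend, _ in resources)
spend_coverage = sum(spend for spend, tagged in resources if tagged) / total_spend

print(f"resource-count coverage: {count_coverage:.0%}")   # 80%
print(f"spend-weighted coverage: {spend_coverage:.1%}")   # roughly 3%
```

The same "80% tagged" headline can describe wildly different allocation quality, which is why the CFCMM only scores the spend-weighted figure.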
4. Optimisation Cadence (6 controls)
Are you acting on optimisation recommendations?
Having rightsizing recommendations is not optimisation. Acting on them within 30 days, validating the results, and preventing regression is.
Key tests: Age of outstanding rightsizing recommendations, implementation rate, regression detection.
- Level 3: Rightsizing recommendations <30 days old; implementation rate >60%.
- Level 4: VPA or Karpenter auto-rightsizing live; regression detection automated.
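The age and implementation-rate tests can be sketched as follows, using hypothetical recommendation records:

```python
# Minimal sketch: age and implementation rate of rightsizing recommendations.
# Each record is (created_date, implemented); dates are hypothetical.
from datetime import date

recs = [
    (date(2024, 5, 1), True),
    (date(2024, 5, 10), True),
    (date(2024, 4, 1), False),   # still open
]
today = date(2024, 5, 20)

oldest_open_days = max((today - d).days for d, done in recs if not done)
impl_rate = sum(done for _, done in recs) / len(recs)

print(oldest_open_days)     # 49 days: fails the <30 days bar
print(f"{impl_rate:.0%}")   # 67%: clears the >60% bar
```

A team can clear the implementation-rate bar while a single stale, high-impact recommendation ages past 30 days, which is why both controls are scored separately.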
5. Tooling Completeness (7 controls)
Is your FinOps tool actually accurate?
We test FinOps tool output against provider invoices across 12 dimensions. Most tools fail 3–5 dimensions on first assessment — typically shared resource attribution, commitment discount allocation, and cross-region egress.
- Level 3: Tool accuracy tested; reconciliation gap <5% across all 12 dimensions.
- Level 4: Tool accuracy <2% variance; FOCUS-compliant data model in place.
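A per-dimension reconciliation check can be sketched like this. The dimension names echo the three common failures above, but the figures and the simplified variance logic are hypothetical:

```python
# Minimal sketch: reconcile tool output against the invoice per dimension.
# Each entry is dimension -> (tool_total_usd, invoice_total_usd); hypothetical.
tool_vs_invoice = {
    "shared_resource_attribution":   (41_000.0, 44_800.0),
    "commitment_discount_allocation": (12_300.0, 12_400.0),
    "cross_region_egress":           (2_050.0, 1_900.0),
}

def failing_dimensions(pairs: dict, tolerance: float = 0.05) -> list[str]:
    """Dimensions whose tool-vs-invoice variance exceeds the tolerance."""
    return [name for name, (tool, invoice) in pairs.items()
            if abs(tool - invoice) / invoice > tolerance]

print(failing_dimensions(tool_vs_invoice))  # the 8.5% and 7.9% gaps fail
```

Note that the overall total here would reconcile far better than the worst dimension, which is why the bar applies to every dimension individually.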
6. Accountability Structure (5 controls)
Does someone own every dollar?
Key tests: budget owner assignment, a defined FinOps role, an escalation path for P0 defects, and board-level FinOps reporting.
- Level 3: Every cost centre has an assigned owner; FinOps function defined (even if part-time).
- Level 4: SLA for P0 defect response; FinOps Defect Score in board reporting.
How to Use This Framework
- Score each domain 1–5 using the control checklist
- Identify your lowest-scoring domain — that is your highest-leverage improvement area
- Set a target level for each domain over the next 90 days
- Re-assess quarterly and track your FinOps Defect Score trend
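The roll-up in the steps above can be sketched as follows. The domain scores are hypothetical; in a real CFCMM assessment each score is derived from the per-control checklist, not assigned by gut feel:

```python
# Minimal sketch: rank CFCMM domains by score to find the priority.
# Scores (1-5) below are hypothetical.
domain_scores = {
    "cost_visibility": 3,
    "budget_governance": 2,
    "allocation_accuracy": 2,
    "optimisation_cadence": 3,
    "tooling_completeness": 4,
    "accountability_structure": 3,
}

# Lowest score first; ties keep assessment order
priority_order = sorted(domain_scores, key=domain_scores.get)

print(priority_order[0])   # the highest-leverage improvement area
```

Re-running this each quarter and diffing `priority_order` gives a simple trend view to sit alongside the FinOps Defect Score.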
The CFCMM assessment is available as part of finops.qa’s FinOps QA Assessment — a 5-day engagement that produces your baseline score and a domain-by-domain remediation roadmap.
Read the full CFCMM framework: Cloud Financial Controls Maturity Model.
Get Your FinOps Defect Score
Book a free 30-minute cloud cost review. We will identify your top three FinOps gaps and give you a preliminary Defect Score — no pitch, no obligation.
Talk to an Expert