Use this when you need a repeatable operating process for AI cost instead of occasional panic.
The fast answer: run a 30-minute weekly review with one engineering owner and one product owner. Cover pacing, top deltas, anomalies, and one to three clear actions. Log decisions and owners.
What you will get in 10 minutes
- A copy-paste weekly agenda
- An owner matrix
- A simple decision log format
Use this when
- You have a budget and forecast but no regular review
- Cost spikes keep surprising you
- You want cost to influence product and engineering decisions in time to matter
- The team asks "who looks at this?"
Who joins and what they own
Keep it small:
| Role | Responsibility |
| --- | --- |
| Engineering owner | Explain deltas, own investigation and optimization follow-up |
| Product owner | Connect cost changes to launches, features, and user growth |
Optionally add finance or ops if spend is already material and forecast changes need immediate context. This is an operating review, not a status meeting.
Weekly review agenda
Use this 30-minute format. If it regularly runs longer, either the data is unclear or the agenda is too broad.
1. Pacing (5 min)
- Total spend last 7 days vs prior 7 days
- Month-to-date vs plan
- Forecast for month-end if pace continues
If forecast is more than 10 percent above plan, that is the first thing to explain.
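The pacing math above is simple enough to script. A minimal sketch, assuming a straight run-rate projection and the 10 percent threshold from this agenda; dollar figures are hypothetical:

```python
from datetime import date
import calendar

def month_end_forecast(mtd_spend: float, today: date) -> float:
    """Project month-end spend by extending the month-to-date run rate."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return mtd_spend / today.day * days_in_month

def pacing_flag(forecast: float, plan: float, threshold: float = 0.10) -> bool:
    """True when the forecast is more than `threshold` above plan."""
    return forecast > plan * (1 + threshold)

# Hypothetical: $4,200 spent through day 14 of a 30-day month, $8,000 plan.
forecast = month_end_forecast(4200.0, date(2025, 6, 14))  # 4200 / 14 * 30 = 9000
print(round(forecast))                # 9000
print(pacing_flag(forecast, 8000.0))  # True: first thing to explain in the review
```

A straight run rate ignores weekday/weekend patterns and launch timing, which is fine for a 5-minute pacing check; the monthly layer is where forecast models get revisited.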
2. Top deltas (10 min)
- Which provider increased or decreased the most?
- Which model or service?
- Which feature or workflow?
- Was the change expected?
The goal is to explain the delta, not just observe it. If you cannot explain it, that becomes an action.
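Ranking the deltas can be done the same way for providers, models, or features. A sketch with hypothetical weekly totals; the provider names are illustrative:

```python
def top_deltas(last_week: dict[str, float], prior_week: dict[str, float], n: int = 3):
    """Rank keys (providers, models, features) by absolute week-over-week change."""
    keys = set(last_week) | set(prior_week)
    deltas = {k: last_week.get(k, 0.0) - prior_week.get(k, 0.0) for k in keys}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:n]

# Hypothetical weekly totals by provider, in dollars.
last = {"openai": 3200.0, "bedrock": 1900.0, "anthropic": 800.0}
prior = {"openai": 3100.0, "bedrock": 1200.0, "anthropic": 850.0}
print(top_deltas(last, prior))
# [('bedrock', 700.0), ('openai', 100.0), ('anthropic', -50.0)]
```

Sorting by absolute change surfaces large decreases too, since an unexpected drop can signal a broken integration rather than a saving.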
3. Anomalies and incidents (5 min)
- Any material alerts since last review?
- Any spikes that need investigation?
- Any rollbacks or config changes that affected cost?
4. Actions and owners (10 min)
End with one to three actions. Each has an owner and a due date.
Examples:
- Investigate Bedrock increase — engineering, by Friday
- Reduce prompt length in summarization workflow — engineering, this sprint
- Check fallback routing for overfiring — engineering, by next review
- Revise forecast and share with CFO — product, by Tuesday
If there are no actions, record that changes were expected and move on.
What decisions get logged
A useful review produces decisions, not just discussion. Log:
- Leave as-is (change expected)
- Investigate (owner + due date)
- Optimize (owner + tactic + due date)
- Revise forecast (owner + new number + communication plan)
- Escalate (who, why, by when)
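One way to keep the log consistent is to validate entries against the decision types above. A sketch; the field names and decision labels are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Decision types from the list above (labels are this sketch's shorthand).
DECISIONS = {"leave", "investigate", "optimize", "revise_forecast", "escalate"}

@dataclass
class LogEntry:
    date: str                    # review date, YYYY-MM-DD
    decision: str                # one of DECISIONS
    owner: Optional[str] = None
    due: Optional[str] = None
    note: str = ""

    def __post_init__(self):
        if self.decision not in DECISIONS:
            raise ValueError(f"unknown decision: {self.decision}")
        # Everything except "leave as-is" needs an owner and a due date.
        if self.decision != "leave" and not (self.owner and self.due):
            raise ValueError("owner and due date required")

entry = LogEntry("2025-06-14", "investigate", owner="eng", due="2025-06-20",
                 note="Bedrock spend up week over week, cause unknown")
print(asdict(entry)["decision"])  # investigate
```

The validation rule encodes the point of the section: every decision except "leave as-is" must name an owner and a date, or it is discussion, not a decision.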
Escalation triggers
When to escalate beyond the weekly review:
- Forecast is 20 percent or more above plan with no clear fix
- A spike is unexplained after one investigation cycle
- A provider or model change affects a material customer or feature
- Spend growth does not match product or user growth and margin is at risk
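The triggers above can be expressed as one check that returns every reason that fired. A sketch, assuming the 20 percent threshold from the first trigger; the boolean inputs stand in for judgments the review makes:

```python
def should_escalate(forecast: float, plan: float,
                    unexplained_after_cycle: bool,
                    material_customer_impact: bool,
                    margin_at_risk: bool) -> list[str]:
    """Return the escalation triggers that fired, per the list above."""
    reasons = []
    if plan > 0 and forecast >= plan * 1.20:
        reasons.append("forecast 20%+ above plan with no clear fix")
    if unexplained_after_cycle:
        reasons.append("spike unexplained after one investigation cycle")
    if material_customer_impact:
        reasons.append("material customer or feature affected")
    if margin_at_risk:
        reasons.append("spend growth out of line with product growth")
    return reasons

print(should_escalate(9900.0, 8000.0, False, False, False))
# ['forecast 20%+ above plan with no clear fix']
```

Returning a list rather than a single flag keeps the escalation message specific: the recipient sees which triggers fired, not just that one did.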
Monthly follow-up rhythm
The weekly review is for pace and action. Add a monthly layer for:
- Budget vs actual reconciliation
- Provider and model mix changes
- Architecture or optimization backlog prioritization
- Forecast accuracy review and model updates
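For the forecast accuracy review, a simple metric is the absolute percentage error per month. A sketch; the metric choice is an assumption and the history figures are hypothetical:

```python
def abs_pct_error(actual: float, forecast: float) -> float:
    """Absolute forecast error as a fraction of actual spend."""
    return abs(actual - forecast) / actual

# Hypothetical last three months: (actual, forecast) in dollars.
history = [(8200.0, 8000.0), (9100.0, 8300.0), (10400.0, 9900.0)]
errors = [abs_pct_error(a, f) for a, f in history]
print([round(e, 3) for e in errors])  # [0.024, 0.088, 0.048]
```

If the error trends in one direction, the run-rate assumption is biased and the forecast model, not just the number, needs updating.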
Copyable runbook
Use this structure each week:
Weekly AI FinOps Review
- Date: [YYYY-MM-DD]
- Period: Last 7 days
- Attendees: [names]
- Total spend: $[X] vs prior week [+/-%]
- Forecast for month: $[X] vs plan $[Y]
- Top deltas: [provider / model / feature, reason]
- Anomalies: [none / list with status]
- Actions:
  - [Action] — [Owner] — [Due date]
- Next review: [date]
How StackSpend helps
StackSpend gives the review a shared surface:
- Yesterday vs forecast
- Top drivers by provider, model, service, and category
- Anomaly follow-up with drill-down
- One view instead of multiple provider dashboards
See cloud + AI cost monitoring for the workflow.