Use this when you need to know where the obvious gaps are in your AI cost setup before something breaks.
The fast answer: run a short pass over tracking, monitoring, attribution, and optimization hygiene. Turn the results into a first-five-fixes list instead of a long to-do list.
What you will get in 10 minutes
- A pass/fail audit across the main cost control areas
- A list of the first five fixes worth doing this week
- Clarity on whether you are ready for budget, forecast, and weekly review
Use this when
- You are building a budget or forecast for the first time
- You suspect hidden cost leaks
- You want to know what to fix before scaling usage
- A recent spike made you wonder what else you are not seeing
Tracking and attribution checks
| Check | Pass | Fail | First fix if fail |
| --- | --- | --- | --- |
| Provider spend is pulled daily | We have automated daily sync | Manual or weekly only | Set up daily ingestion for material providers |
| Spend is broken down by model or service | Yes, by provider and model | Provider total only | Add a model or service dimension to cost data |
| Feature or workflow ownership exists | Yes, major features tagged | No attribution | Define feature keys and map costs to them |
| Staging and prod are separated | Yes, environment dimension | Mixed or unknown | Add environment filter and exclude non-prod from reviews |
| Cost per request is measurable | Yes, for main workflows | No | Instrument at least the top 3 AI workflows (see the record sketch below) |
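Several of these checks come down to whether your cost data carries the right dimensions at all. Here is a minimal sketch of what one record could look like, assuming a plain Python dataclass; the field names are illustrative, not a required schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CostRecord:
    """One day of spend for one provider/model/feature/environment slice."""
    day: date
    provider: str      # e.g. "openai", "anthropic"
    model: str         # the model or service dimension
    feature: str       # your own feature or workflow key
    environment: str   # "prod" or "staging", so reviews can exclude non-prod
    requests: int
    input_tokens: int
    output_tokens: int
    cost_usd: float

def cost_per_request(r: CostRecord) -> float:
    """Cost per request falls out once requests are counted per slice."""
    return r.cost_usd / r.requests if r.requests else 0.0
```

Once every record carries these fields, the model, feature, and environment checks above become filters rather than projects.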
Monitoring and alerting checks
| Check | Pass | Fail | First fix if fail |
| --- | --- | --- | --- |
| Daily spend is visible | Yes, same day | Delayed or manual | Get a daily spend signal before noon |
| Forecast vs budget is tracked | Yes, updated daily | Not tracked | Add forecast and compare to budget |
| Alerts exist for material providers | Yes, anomaly or threshold | No alerts | Add a daily anomaly alert for your largest provider (see the sketch below) |
| Alerts have an owner | Yes, named owner | No owner | Assign owner for each alert type |
| Alerts are actionable | Yes, link to dashboard or runbook | Vague or noisy | Add context: provider, model, delta, link |
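The first alert does not need real anomaly detection. Here is a minimal sketch that compares today's spend to a trailing 7-day average, assuming you already have daily totals as a list of floats; the 40% threshold is an arbitrary starting point, not a recommendation:

```python
from statistics import mean

def spend_alert(daily_spend: list[float], threshold: float = 0.40) -> str | None:
    """Compare today's spend to the trailing 7-day average.

    Returns an alert message with baseline and delta context,
    or None if spend looks normal.
    """
    if len(daily_spend) < 8:
        return None  # not enough history for a baseline yet
    today = daily_spend[-1]
    baseline = mean(daily_spend[-8:-1])  # the previous 7 days
    if baseline > 0 and (today - baseline) / baseline > threshold:
        delta_pct = 100 * (today - baseline) / baseline
        return (f"Spend anomaly: ${today:,.2f} today vs "
                f"${baseline:,.2f} 7-day average (+{delta_pct:.0f}%)")
    return None
```

As the table notes, the message is only actionable if it also carries the provider, the model, and a link to the dashboard or runbook.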
Optimization hygiene checks
| Check | Pass | Fail | First fix if fail |
| --- | --- | --- | --- |
| Top 3 cost drivers are known | Yes, documented | Unknown | Run one week of data and document the top 3 (see the sketch below) |
| Model mix is reviewed monthly | Yes | No | Add model mix to monthly review agenda |
| Prompt length is monitored | Yes, for main workflows | Not monitored | Track input tokens per request for top workflow |
| Retries and failures are visible | Yes | No | Add outcome dimension (success, retry, failure) |
| Background jobs are separated from user traffic | Yes | Mixed | Tag background jobs and review separately |
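Finding the top 3 drivers is a group-and-sort over the same cost records. A minimal sketch, assuming the illustrative CostRecord shape from the tracking section; swap the key function to rank by provider, model, or feature:

```python
from collections import defaultdict
from typing import Callable, Iterable

def top_cost_drivers(records: Iterable[CostRecord],
                     key: Callable[[CostRecord], str],
                     n: int = 3) -> list[tuple[str, float]]:
    """Sum cost_usd by the chosen dimension and return the n largest."""
    totals: dict[str, float] = defaultdict(float)
    for r in records:
        totals[key(r)] += r.cost_usd
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Usage: rank models by spend, excluding non-prod traffic as the audit suggests
# top_cost_drivers((r for r in records if r.environment == "prod"),
#                  key=lambda r: r.model)
```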
Team process checks
| Check | Pass | Fail | First fix if fail |
| --- | --- | --- | --- |
| Someone owns cost review | Yes, named | No owner | Assign a weekly review owner |
| Review cadence exists | Weekly or biweekly | Ad hoc only | Schedule a recurring 20-minute review |
| Actions from review are logged | Yes | No | Add action log to review template |
| Budget and forecast are shared | Yes, with stakeholders | Internal only or missing | Share forecast with at least one stakeholder |
First five fixes
Do not try to fix everything. Pick the five that matter most this month.
A practical order:
- Daily spend visibility — If you do not see today's burn by noon, fix that first.
- Top cost driver documentation — Know your top 3 providers, models, or features.
- At least one alert — Add a daily anomaly alert for your largest AI provider.
- Feature or workflow attribution — Map spend to at least your top 3 AI features.
- A weekly review cadence — Schedule a 20-minute review with a named owner.
StackSpend helps teams pass these checks by providing daily multi-provider visibility, category and model breakdowns, forecast vs budget tracking, and anomaly alerts in one workflow. See AI cost monitoring for the operating layer.