Guides
March 6, 2026
By Andrew Day

How to Attribute AI Costs by Feature, Team, and Customer

A practical guide to making AI costs explainable. How developers and product teams should structure projects, workspaces, API keys, tags, and metadata to track spend by feature, team, and customer.


Most teams can tell you their total AI spend. Far fewer can tell you which feature, team, or customer caused it. That gap makes optimization harder, forecasting weaker, and product decisions slower.

If you want AI cost visibility that is actually useful, you need an attribution model before you need a dashboard. This guide walks through a practical structure that developers and product teams can implement without turning the system into accounting software.

Quick answer: what is the best way to attribute AI spend?

Use a layered approach:

  1. Provider-native grouping where available, such as OpenAI projects or Anthropic workspaces.
  2. Application metadata for feature, team, workflow, and customer identifiers.
  3. Cloud tags or labels for Bedrock, Vertex AI, or Azure OpenAI workloads.
  4. A reporting layer that joins provider usage with your own product metadata.

Do not rely on one layer alone. Provider grouping is helpful but rarely enough by itself.
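The reporting layer in step 4 can start as a simple join between provider usage rows and your own request metadata on a shared request ID. A minimal sketch in Python, with illustrative field names rather than any provider's actual schema:

```python
# Join provider usage rows with application metadata on request_id,
# so each cost line can be explained by feature, team, and customer.
provider_usage = [
    {"request_id": "r1", "model": "gpt-4o", "cost_usd": 0.042},
    {"request_id": "r2", "model": "claude-sonnet", "cost_usd": 0.017},
]
app_metadata = {
    "r1": {"feature": "summarize", "team": "docs", "customer": "acme"},
    "r2": {"feature": "search", "team": "platform", "customer": "globex"},
}

def attribute(usage, metadata):
    """Attach app-level attribution to each provider usage row."""
    rows = []
    for row in usage:
        attrib = metadata.get(row["request_id"], {})
        # Rows with no matching metadata surface as unattributed
        # instead of silently disappearing from the report.
        rows.append({**row, **attrib, "attributed": bool(attrib)})
    return rows

report = attribute(provider_usage, app_metadata)
```

The `attributed` flag matters: unattributed spend should show up as its own line in every report, not vanish.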

Why do most attribution systems fail?

They usually fail for one of three reasons:

  • the provider bill is grouped differently from the application,
  • metadata is inconsistent or optional,
  • or the team waited until the bill got large before deciding how to track it.

The fix is to define attribution fields early and make them hard to skip.

What dimensions should you track?

At minimum, track these five dimensions for every meaningful AI request:

  • provider
  • model
  • feature or endpoint
  • team owner
  • customer or tenant

If you are not capturing all five, you can still start. But the long-term goal should be to make every expensive workflow explainable across those dimensions.
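One way to make those dimensions hard to skip is a small record type that every AI call site must construct. A sketch, with illustrative names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Attribution:
    """The five minimum dimensions for a meaningful AI request."""
    provider: str   # e.g. "openai", "anthropic", "bedrock"
    model: str      # e.g. "gpt-4o"
    feature: str    # product feature or endpoint
    owner: str      # owning team
    customer: str   # customer or tenant ID, never a display name

    def __post_init__(self):
        # Empty strings defeat the purpose; reject them at construction.
        for name in ("provider", "model", "feature", "owner", "customer"):
            if not getattr(self, name):
                raise ValueError(f"attribution field {name!r} is required")
```

Because the record is required as a constructor argument rather than an optional log field, "unknown" never enters the dataset in the first place.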

How should developers structure attribution in direct AI APIs?

OpenAI

OpenAI exposes organization-level usage and cost APIs and supports project-level structure. If your application spans multiple internal products or environments, projects are a strong first boundary.

Use OpenAI projects for:

  • environment separation,
  • major product areas,
  • or internal platform ownership.

Then add your own metadata for feature, team, and customer inside the app. Projects are useful, but they are usually too coarse for product decisions on their own.
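Centralizing those fields in one helper keeps call sites consistent. A sketch that builds request kwargs; it assumes the endpoint accepts a `metadata` dict of string key-value pairs (verify against the current OpenAI API reference), and the helper name and field names are illustrative:

```python
def openai_call_kwargs(model, messages, *, feature, team, customer_id):
    """Build kwargs for an OpenAI request with attribution attached.

    Assumes the endpoint accepts a `metadata` dict of string pairs;
    if it does not, log the same fields alongside the request ID in
    your own system instead.
    """
    return {
        "model": model,
        "messages": messages,
        "metadata": {
            "feature": feature,
            "team": team,
            "customer_id": customer_id,
        },
    }
```

Call sites then read `client.chat.completions.create(**openai_call_kwargs(...))`, and no request can be made without naming its feature and team.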

Anthropic

Anthropic's Admin API supports usage and cost reporting grouped by dimensions such as workspace and API key. Workspaces are a strong primitive if you want organizational separation without building all grouping yourself.

Use Anthropic workspaces for:

  • team boundaries,
  • environment isolation,
  • or internal business-unit separation.

Then attach your own application-level fields for feature and customer.
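Anthropic's Messages API metadata object carries a user identifier, so richer dimensions like feature and team belong in your own request log keyed by the same value. A sketch pairing the two; the helper name and log shape are illustrative:

```python
def anthropic_request_record(customer_id, *, feature, team):
    """Split attribution between what goes to the provider and what
    stays in your own request log."""
    provider_metadata = {"user_id": customer_id}   # sent with the API call
    app_log_fields = {                             # stored in your own system
        "customer_id": customer_id,
        "feature": feature,
        "team": team,
    }
    return provider_metadata, app_log_fields

meta, log = anthropic_request_record("c_123", feature="summarize", team="docs")
```

Because both sides share `customer_id`, provider usage reports and your application log can be joined later.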

How should teams attribute costs on managed cloud AI platforms?

AWS / Bedrock

If you run AI workloads through AWS, cost allocation tags are essential. AWS requires you to activate user-defined cost allocation tags before they appear in cost reporting tools. Without that step, the tags exist operationally but do not help billing analysis.

Use AWS tags for:

  • product or feature name,
  • team owner,
  • environment,
  • customer tier if relevant.

GCP / Vertex AI

Google Cloud Billing export to BigQuery includes labels in the billing data. That makes labels one of the cleanest paths to cost attribution if you are willing to query or process billing export data.

Use GCP labels for:

  • service or workload,
  • team,
  • environment,
  • customer segment or tenant grouping.

Azure / Azure OpenAI

Azure Cost Management supports grouping and filtering by tags, and Azure exports include cost data that can be analyzed outside the portal. Microsoft also documents billing tags and cost exports, which helps if you need more formal accounting or cross-team reporting.

Use Azure tags for:

  • deployment grouping,
  • environment,
  • feature ownership,
  • or department-level reporting.

What is the best hierarchy for AI cost attribution?

Here is a practical hierarchy that works well:

  1. Billing account / cloud subscription
  2. Provider project / workspace / tagged workload
  3. Application feature
  4. Customer or tenant
  5. Request-level diagnostics when needed

That hierarchy is simple enough to maintain and detailed enough to explain most surprises.
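In code, levels 3 and 4 of that hierarchy are just grouped aggregations over attributed request costs. A sketch:

```python
from collections import defaultdict

def rollup(requests, level):
    """Sum cost_usd grouped by one hierarchy level
    (e.g. 'feature', 'customer', 'team')."""
    totals = defaultdict(float)
    for req in requests:
        # Missing attribution lands in an explicit "unknown" bucket.
        totals[req.get(level, "unknown")] += req["cost_usd"]
    return dict(totals)

requests = [
    {"feature": "summarize", "customer": "acme", "cost_usd": 1.50},
    {"feature": "summarize", "customer": "globex", "cost_usd": 0.75},
    {"feature": "search", "customer": "acme", "cost_usd": 0.25},
]
by_feature = rollup(requests, "feature")
by_customer = rollup(requests, "customer")
```

The same function answers both the feature question and the customer question, which is the point of capturing both dimensions on every request.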

What should product managers ask engineering to instrument?

Ask for these fields in every significant AI request log or event:

  • provider
  • model
  • feature or endpoint
  • team owner
  • environment
  • customer or org id
  • input tokens
  • output tokens
  • request outcome

If those are present, you can answer most PM questions without a separate data project.

What is the most important implementation rule?

Do not make metadata optional for expensive workflows.

If engineers can skip feature or owner fields, they eventually will, especially on internal tools, migrations, and new product experiments. That leads to the worst category in every report: "unknown."

A falsifiable recommendation: if a workflow can spend more than a few hundred dollars per month, require attribution fields before launch.
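Enforcement works better as a thin guard in front of the client than as a convention in a style guide. A sketch; the required fields are illustrative:

```python
REQUIRED_FIELDS = ("feature", "owner")

def require_attribution(attribution):
    """Refuse to send an expensive AI request unless feature and owner
    are present. Call this before every model invocation on paths that
    can spend real money."""
    missing = [f for f in REQUIRED_FIELDS if not attribution.get(f)]
    if missing:
        raise ValueError(f"missing attribution fields: {missing}")
    return attribution
```

A hard failure in staging is cheaper than a month of "unknown" in the cost report.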

How should you handle shared infrastructure?

Shared services are normal. The mistake is forcing false precision too early.

Use one of these approaches:

  • attribute shared platform costs to a platform owner bucket,
  • split them by request volume across downstream features,
  • or separate them from direct model spend in reporting.
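The request-volume split in the second option is a simple proportional allocation. A sketch:

```python
def split_by_volume(shared_cost_usd, request_counts):
    """Allocate a shared platform cost across downstream features in
    proportion to their request volume."""
    total = sum(request_counts.values())
    if total == 0:
        # No traffic: nothing to allocate proportionally.
        return {feature: 0.0 for feature in request_counts}
    return {
        feature: shared_cost_usd * count / total
        for feature, count in request_counts.items()
    }

allocation = split_by_volume(900.0, {"summarize": 6000, "search": 3000})
```

Request count is a crude proxy; if token usage varies widely across features, weight by tokens instead and the function stays the same.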

The right answer depends on how the data will be used. For pricing decisions, customer allocation matters more. For internal ownership, feature and team allocation matters more.

What should you avoid?

  • Tagging everything with free-text values — normalize your dimensions.
  • Using customer names instead of IDs — names change; IDs do not.
  • Depending only on provider dashboards — they rarely know your feature model.
  • Trying to get perfect attribution immediately — start useful, then improve.

Good attribution is iterative. But it needs a consistent schema from the beginning.

A practical rollout plan

  1. Pick a stable set of attribution fields.
  2. Enforce them in application code for expensive workflows.
  3. Use provider-native structure where it exists.
  4. Turn on cloud tags, labels, and billing exports.
  5. Build reporting around provider + feature + team + customer.

That is enough to move from "we have a big bill" to "we know exactly what drove it."

Bottom line

The best AI cost attribution model combines provider structure with application metadata:

  • provider/project/workspace for coarse grouping,
  • feature/team/customer for decision-making,
  • and cloud tags or labels for managed AI workloads.

If you do only one thing this quarter, make feature and owner metadata mandatory on expensive AI paths.

FAQ

Is provider-native grouping enough?
Usually no. Projects, workspaces, and tags help, but they do not fully reflect feature ownership or customer usage inside your app.

Should I attribute by feature or by customer first?
If you are optimizing product spend, start with feature. If you are working on pricing or margin, start with customer. Most mature teams track both.

Do I need request-level logging?
Not always, but you do need request-level diagnostics for the workflows that drive material spend or routinely spike.

What if one feature serves many customers?
Track both the feature and the customer. One explains internal ownership; the other explains unit economics.

What if my AI workload runs through Bedrock, Vertex AI, or Azure OpenAI?
Use cloud tags, labels, or exports in addition to your application metadata. Managed AI bills still need an application-level explanation layer.

How detailed should I get at first?
Start with provider, model, feature, owner, and customer. That is detailed enough to be useful without becoming fragile.


