Guides
March 12, 2026
By Andrew Day

When not to use an LLM: decision guide

The highest-leverage AI architecture choice is often not using an LLM at all. Use this guide to reject bad LLM candidates early.


Use this when a team is excited about adding AI, but the workflow may actually be deterministic, low-value, or better served by search, rules, or traditional ML.

The short answer: do not use an LLM when the task is deterministic, the answer space is fixed, the cost of mistakes is high, or a cheaper, simpler method already works well enough.

What you will get in 8 minutes

  • A practical anti-pattern list for LLM adoption
  • A decision rubric for rejecting weak use cases
  • The most common alternatives that beat LLMs
  • A worksheet for saying “no” earlier and more confidently

Use this when

  • A roadmap item says “add AI” but the real workflow is unclear
  • The team is treating an LLM as a default instead of one tool among many
  • A system has tight latency, cost, or correctness constraints
  • You want to protect margin and engineering focus

The 60-second answer

Do not use an LLM when the job is mostly:

  • deterministic lookup
  • exact calculation
  • threshold checking
  • simple search
  • high-volume classification with good labeled data

Use an LLM when the input is messy, the interpretation is semantic, and the workflow gains enough value to justify the extra cost and operational complexity.

Strong reasons not to use an LLM

1. The task is deterministic

Examples:

  • tax or pricing calculations
  • approval threshold checks
  • policy enforcement with explicit rules

If the answer should always be derived by code, derive it by code.
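Deterministic tasks like these can be sketched as plain code. The functions and thresholds below are hypothetical illustrations, not real policy: they show that an explicit calculation gives the same answer every time, with no model in the loop.

```python
# Hypothetical approval and tax logic: the policy is explicit,
# so plain code is the right tool. All values are illustrative.

def approve_expense(amount: float, approver_limit: float) -> bool:
    """Deterministic threshold check: same inputs, same answer, every time."""
    return amount <= approver_limit

def sales_tax(subtotal: float, rate: float) -> float:
    """Exact calculation: no model needed, no nondeterminism tolerated."""
    return round(subtotal * rate, 2)

print(approve_expense(450.00, 500.00))  # True
print(approve_expense(650.00, 500.00))  # False
print(sales_tax(100.00, 0.0825))        # 8.25
```

If an LLM produced these answers instead, every output would need verification against this same logic, so you would end up writing the code anyway.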

2. The answer space is tiny and fixed

Examples:

  • choose one menu item from a handful of exact triggers
  • map a well-structured field to a known label

Rules or classic ML are often more stable and cheaper here.
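When the label set is small and fixed, an explicit lookup table beats a model call on cost, latency, and stability. The field names and labels below are hypothetical; the point is that unknown inputs get flagged deliberately instead of guessed at.

```python
# Hypothetical field-to-label mapping: the answer space is tiny and fixed,
# so an explicit table is cheaper and more stable than a model call.

STATUS_LABELS = {
    "pmt_ok": "paid",
    "pmt_pending": "pending",
    "pmt_failed": "failed",
}

def map_status(raw: str) -> str:
    """Return the known label, or route unknowns to review instead of guessing."""
    return STATUS_LABELS.get(raw.strip().lower(), "needs_review")

print(map_status("PMT_OK"))   # "paid"
print(map_status("unknown"))  # "needs_review"
```

The fallback value is the key design choice: a table fails loudly and predictably, while an LLM in the same seat would be tempted to produce a plausible but unverified label.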

3. The error cost is too high

Examples:

  • medical triage
  • legal advice
  • financial approval
  • destructive admin actions

If a wrong answer is expensive and hard to detect, an LLM should not be the final authority.

4. The latency budget is tight

Examples:

  • real-time ranking in a high-throughput path
  • low-latency product interactions

Traditional retrieval, ranking, or rules systems often win here.

5. The value is too low for the operating cost

Even a technically possible LLM feature can be a bad business choice if it adds:

  • review overhead
  • prompt tuning burden
  • higher infra spend
  • more user confusion than actual value

Common alternatives that win

Prefer:

  • search for lookup problems
  • SQL or tools for system-of-record questions
  • rules engines for explicit policy logic
  • traditional ML for stable high-volume labels
  • OCR, ASR, or parsing tools for basic media extraction

The right comparison is not “LLM vs nothing.” It is “LLM vs the cheapest reliable alternative.”

A rejection rubric

Reject or rethink the LLM use case if most answers are yes:

  1. Can the task be defined as explicit rules?
  2. Is the input already structured?
  3. Is the acceptable answer space tiny?
  4. Is the cost of a wrong answer high?
  5. Is there already a cheaper reliable system?

If four or five are yes, you probably do not need an LLM.
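The rubric above is simple enough to run as code in a planning doc or review checklist. This is a sketch of the article's own heuristic, with the "four or five yes" cutoff hard-coded:

```python
# The five rejection questions from the rubric above, scored mechanically.
# The >= 4 cutoff mirrors the article's "four or five are yes" heuristic.

RUBRIC = [
    "Can the task be defined as explicit rules?",
    "Is the input already structured?",
    "Is the acceptable answer space tiny?",
    "Is the cost of a wrong answer high?",
    "Is there already a cheaper reliable system?",
]

def recommend(answers: list[bool]) -> str:
    """Count yes answers; four or more suggests skipping the LLM."""
    if len(answers) != len(RUBRIC):
        raise ValueError("answer every rubric question")
    return "skip the LLM" if sum(answers) >= 4 else "LLM may be justified"

print(recommend([True, True, True, False, True]))   # "skip the LLM"
print(recommend([True, False, False, True, False])) # "LLM may be justified"
```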

Where teams get trapped

  • using LLMs to generate exact values
  • shipping AI because competitors did
  • using prompts where rules would be simpler
  • treating manual review as a permanent patch for a weak use case

How StackSpend helps

Avoiding unnecessary LLM usage is an economic win too. Workflow-level visibility helps teams compare where LLM usage is adding value and where simpler alternatives would protect margin better.

What to do next

Continue in Academy

LLM reliability and governance

Build release gates, confidence checks, and operational controls that keep LLM systems useful in production.


Know where your cloud and AI spend stands — every day, starting today.

Sign up