The Accountant: Optimizing AI Costs with Semantic Routing

We’ve solved the Reliability problem with The Judge. We have a system that can scientifically prove whether our Forensic Team is accurate. But there’s a new problem that keeps Directors and CFOs up at night: Sustainability.

In an enterprise environment, using a massive, high-reasoning model (like Claude 3.5 or GPT-4o) for every single bibliography lookup is a “Cognitive Budget” disaster. It’s like hiring a Senior Architect to fix a broken link.

Today, we introduce The Accountant: A Semantic Router that classifies task complexity and routes requests to the cheapest model capable of passing the Judge’s rubric.

1. The Concept of “Tiered Intelligence”

Not all forensic tasks require the same level of “gray matter.” To scale effectively, we must categorize our workload:

  • LEVEL 1 (Operational): “Find the standard page count for the 1925 edition of Gatsby.” This is a lookup and retrieval task. Local SLMs (Small Language Models) like Phi-4 or Llama 3.2 excel here.
  • LEVEL 2 (Forensic): “Compare the binding grain and typography inconsistencies between two suspected forgeries.” This requires high-dimensional analysis and deep reasoning. This is a job for the Cloud.
[Figure: Architectural diagram of the Semantic Router, The Accountant. A user request enters the router, which classifies it as Level 1 (Simple/Metadata) or Level 2 (Complex Forensic). Level 1 routes to a local Tier 1 SLM such as Phi-4 or Llama 3.2; Level 2 routes to a Tier 2 frontier cloud model such as Claude 3.5. Both paths converge on a final Audit Report.]
The Semantic Router Architecture: implementing Tiered Intelligence to optimize cognitive budget and reduce inference costs.

2. Implementing the Router (The Gatekeeper Pattern)

We’ve added router.py to our repository. The logic acts as a gatekeeper.
1. Classification: A lightweight model (the Accountant) reviews the user’s query against our config/prompts.yaml.
2. Economic Decision: If the query is “Level 1,” we trigger the ollama provider; if it’s “Level 2,” we escalate to the anthropic provider.

# The Accountant's Decision Engine in router.py
async def route(query: str) -> str:
    # Classify first; classify_query defaults to LEVEL_2 on any failure.
    level = await classify_query(query)
    provider = get_provider_for_level(level)

    if level == "LEVEL_1":
        print("Accountant Decision: LEVEL_1 - Routing to Local SLM to save budget")
    else:
        print("Accountant Decision: LEVEL_2 - Routing to High-Reasoning Cloud Model")
    return provider

By defaulting to LEVEL_2 whenever classification fails, we ensure we never sacrifice accuracy for cost: we only save money when we are certain a task is simple.
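The helpers referenced above can be sketched as follows. The function names mirror the article; the keyword heuristic and the PROVIDERS mapping are illustrative stand-ins — the real Accountant would call a lightweight model against config/prompts.yaml rather than match keywords.

```python
PROVIDERS = {"LEVEL_1": "ollama", "LEVEL_2": "anthropic"}

# Illustrative cues for simple metadata lookups (an assumption, not
# the repository's actual classification logic).
LEVEL_1_HINTS = ("page count", "publisher", "isbn", "edition date")

async def classify_query(query: str) -> str:
    """Return LEVEL_1 for simple lookups, LEVEL_2 otherwise."""
    try:
        text = query.lower()
        if any(hint in text for hint in LEVEL_1_HINTS):
            return "LEVEL_1"
        return "LEVEL_2"
    except Exception:
        # Fail safe: when classification breaks, pay for accuracy.
        return "LEVEL_2"

def get_provider_for_level(level: str) -> str:
    # Unknown levels also escalate to the high-reasoning provider.
    return PROVIDERS.get(level, "anthropic")
```

Note the two escape hatches: an exception inside classification and an unrecognized level both land on the expensive provider, which is the “never sacrifice accuracy for cost” guarantee in code.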

3. Projecting the ROI with The Judge

While we built the Accountant (the router), we haven’t yet run a full-scale economic audit in this repository. However, the architecture is designed to scientifically measure this trade-off using the Judge Agent (from our last post).

In an enterprise environment, a Director would use this framework to benchmark a representative sample of historical queries. A typical analysis for tiered intelligence systems shows that the vast majority of “forensic” requests are actually simple metadata lookups. By routing those to a local SLM (Phi-4 or Llama 3.2), we can achieve comparable reliability scores to a frontier cloud model while zeroing out the marginal cost of those specific tokens.

The Theoretical Savings (100k Calls/Month):

  • Current Cost (Frontier Cloud for 100% of tasks): ~$7,600/month
  • Projected Cost (90/10 Routed Split): ~$1,800/month
  • Total Savings: ~76% reduction in inference costs.
| Task Category | Estimated Volume | “Status Quo” Cost (Frontier Cloud) | “Routed” Cost (Accountant/SLM) |
|---|---|---|---|
| Level 1 (Standard Lookup/Formatting) | 90% (90k calls) | ~$4,500 | ~$0 (Local/Self-Hosted) |
| Level 2 (Deep Forensic Analysis) | 10% (10k calls) | ~$3,100 | ~$1,800* |
| Total Cognitive Budget | 100% | ~$7,600 | ~$1,800 |

* Note: Level 2 “Routed” costs are lower because the Accountant ensures only the most complex 10% of traffic reaches the high-cost provider, whereas the “Status Quo” column spreads frontier-model pricing across all 100k calls with no optimization.
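The projection above is simple arithmetic, and it is worth making the model explicit. The per-call rates below are illustrative assumptions reverse-engineered from the article’s monthly totals, not measured billing data:

```python
# Back-of-envelope check of the 100k-calls/month projection.
TOTAL_CALLS = 100_000
LEVEL_1_SHARE = 0.90

status_quo_per_call = 0.076      # assumption: ~$7,600 / 100k frontier calls
routed_level2_per_call = 0.18    # assumption: ~$1,800 / 10k escalated calls

status_quo = TOTAL_CALLS * status_quo_per_call
routed = (TOTAL_CALLS * (1 - LEVEL_1_SHARE)) * routed_level2_per_call
savings_pct = (status_quo - routed) / status_quo * 100

print(f"Status quo: ${status_quo:,.0f}/mo")   # $7,600
print(f"Routed:     ${routed:,.0f}/mo")       # $1,800
print(f"Savings:    {savings_pct:.0f}%")      # 76%
```

Swap in your own per-call rates and routing split; the shape of the curve (local Level 1 cost rounds to zero, so savings track the Level 1 share) is the durable insight.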

Cognitive Budgeting Insights

As a Director, the responsibility is to build Sustainable Intelligence. If 90% of an AI workload can be moved to local infrastructure or cheaper “Flash” models without dropping our reliability score, I’m not just a developer; I’m a profit center. Semantic routing lets us scale AI horizontally without the cloud bill scaling vertically.

🛠️ Step into the Clean-Room

The Accountant logic is now live in the repository. You can test the routing logic yourself by running the local orchestrator with the --use-accountant flag.

Explore the Code: MCP Forensic Analyzer on GitHub

(If this architecture helps your team justify their AI spend, consider dropping a ⭐ on the repo!)

The Production-Grade AI Series

  • Post 1: The Judge Agent: Who Audits the Auditors? (Reliability)
  • Post 2: The Accountant: Optimizing AI Costs with Semantic Routing (Sustainability) – You’re Here
  • Post 3: The Guardian: Human-in-the-Loop Governance (Safety) – Coming Soon

Looking for the foundation? Check out my previous series: The Zero-Glue AI Mesh with MCP.


Who Audits the Auditors? Building an LLM-as-a-Judge for Agentic Reliability

We’ve built a powerful Forensic Team. They can find books, analyze metadata, and spot discrepancies using MCP.

But in the enterprise, ‘it seems to work’ isn’t a metric. If an agent misidentifies a $50,000 first edition, the liability is real.

Today, we move from Subjective Trust to Quantitative Reliability. We are building The Judge—a high-reasoning evaluator that audits our Forensic Team against a ‘Golden Dataset’ of ground-truth facts.

Before you Begin

Prerequisites: You should have an existing agentic workflow (see my MCP Forensic Series) and a high-reasoning model (e.g., Claude 3.5 Sonnet or GPT-4o) to act as the Judge.

1. The “Golden Dataset”

Before we can grade the agents, we need an Answer Key. We’re creating tests/golden_dataset.json. This file contains the “Ground Truth”—scenarios where we know there are errors.

Example Entry:

{
  "test_id": "TC-001",
  "input": "The Great Gatsby, 1925",
  "expected_finding": "Page count mismatch: Observed 218, Standard 210",
  "severity": "high"
}

Director’s Note: In an enterprise setting, “Reliability” is the precursor to “Permission”. You will not get the budget to scale agents until you can prove they won’t hallucinate $50k errors. This framework provides the data you need for that internal sell.

2. The Judge’s Rubric

A good Judge needs a rubric. We aren’t just looking for “Yes/No.” We want to grade on:

  • Precision: Did it find only the real errors?
  • Recall: Did it find all the real errors?
  • Reasoning: Did it explain why it flagged the record?
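The first two axes reduce to standard set arithmetic over findings. In practice the Judge would match findings semantically; treating them as exact string sets is a simplifying assumption used here just to show the mechanics:

```python
def score_findings(predicted: set[str], expected: set[str]) -> dict[str, float]:
    """Precision: of what we flagged, how much was real?
    Recall: of the real errors, how many did we flag?"""
    true_positives = len(predicted & expected)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(expected) if expected else 0.0
    return {"precision": precision, "recall": recall}

# Example: one correct flag, one false alarm, one missed error.
scores = score_findings(
    predicted={"page_count_mismatch", "phantom_typo"},
    expected={"page_count_mismatch", "binding_anomaly"},
)
```

In this example both precision and recall come out to 0.5: the false alarm drags precision down, the missed binding anomaly drags recall down. The third axis, Reasoning, resists a formula — that is exactly why it goes to the Judge.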

3. Refactoring for Resilience

Before building the Judge, we had to address a common “Senior-level” trap: hardcoding agent logic. Based on architectural reviews, we moved our system prompts from the Python client into a dedicated config/prompts.yaml.

This isn’t just about clean code; it’s about Observability. By decoupling the “Instructions” from the “Execution,” we can now A/B test different prompt versions against the Judge to see which one yields the highest accuracy for specific models.
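The A/B workflow this decoupling enables can be sketched in a few lines. The prompt versions would live in config/prompts.yaml; they appear as a plain dict here so the comparison loop stays dependency-free, and judge_accuracy is a hypothetical stand-in for a full Judge run over the golden dataset:

```python
# Illustrative prompt variants (assumptions, not the repo's actual prompts).
PROMPT_VERSIONS = {
    "v1": "You are a forensic bibliographer. List every discrepancy.",
    "v2": "Audit this record. Flag only verifiable discrepancies.",
}

def pick_best_prompt(versions: dict[str, str], judge_accuracy) -> str:
    """Return the version id whose prompt scores highest under the Judge."""
    scores = {vid: judge_accuracy(text) for vid, text in versions.items()}
    return max(scores, key=scores.get)
```

Because the prompts are data rather than code, a new variant is a YAML edit plus a Judge run — no redeploy of the Python client.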

4. The Implementation: The Evaluation Loop

We’ve added evaluator.py to the repo. It doesn’t just run the agents; it monitors their “vital signs.”

  • Error Transparency: We replaced “swallowed” exceptions with structured logging. If a provider fails, the system logs the incident for diagnosis instead of failing silently.
  • The Handshake: The loop runs the Forensic Team, collects their logs, and submits the whole package to a high-reasoning Judge Agent.
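The shape of that loop is worth seeing in code. This is a hedged sketch, not the repository’s evaluator.py: run_forensic_team and judge_case are stand-ins for the real agent and Judge calls, and only the loop structure (run, log failures instead of swallowing them, hand everything to the Judge) comes from the description above:

```python
import logging

logger = logging.getLogger("evaluator")

def evaluate(dataset, run_forensic_team, judge_case):
    """Run each golden case, log failures, and collect Judge scores."""
    report = []
    for case in dataset:
        try:
            output, logs = run_forensic_team(case["input"])
        except Exception:
            # Error transparency: record the incident, never fail silently.
            logger.exception("Agent failed on %s", case["test_id"])
            report.append({"test_id": case["test_id"], "score": 0.0})
            continue
        # The Handshake: output plus execution logs go to the Judge.
        score = judge_case(case, output, logs)
        report.append({"test_id": case["test_id"], "score": score})
    return report
```

A provider outage thus shows up as a scored zero with a logged stack trace, which keeps the reliability report honest about infrastructure failures, not just reasoning failures.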

The Evaluator-Optimizer Blueprint

This diagram represents our move from “Does the code run?” to “Does the intelligence meet the quality bar?” This closed-loop system is required before we can begin the fiscal optimization of routing simpler tasks to smaller models.

[Figure: Architectural diagram of the Evaluator-Optimizer loop. A Golden Dataset feeds an Agent Execution layer, which passes outputs and logs to a Judge Agent for scoring against a rubric; the resulting Reliability Report feeds back into prompt tuning and iterative improvement.]
The Evaluator-Optimizer Loop: moving from manual vibe-checks to automated, quantitative reliability scoring.

Director-Level Insight: The “Accuracy vs. Cost” Curve

As a Director, I don’t just care about “cost per token.” I care about Defensibility. If a forensic audit is challenged, I need to show a historical accuracy rating. By implementing this Evaluator, we move from “Vibe-checking” to a Quantitative Reliability Score. This allows us to set a “Minimum Quality Bar” for deployment. If a model update or a prompt change drops our accuracy by 2%, the Judge blocks the deployment.
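The “Minimum Quality Bar” gate described above is a one-line check once the Judge produces a score. A minimal sketch, assuming accuracy is expressed as a fraction and the tolerance is the 2% from the text:

```python
def deployment_allowed(baseline: float, candidate: float,
                       max_drop: float = 0.02) -> bool:
    """Block deployment when the candidate's Judge score regresses
    more than max_drop below the recorded baseline."""
    return (baseline - candidate) <= max_drop
```

Wired into CI, this turns the Judge from a reporting tool into an enforcement mechanism: a prompt change or model update that drops accuracy from 0.95 to 0.92 never ships.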

The Production-Grade AI Series

  • Post 1: The Judge Agent — You are here
  • Post 2: The Accountant (Cognitive Budgeting & Model Routing) — Coming Soon
  • Post 3: The Guardian (Human-in-the-Loop Handshakes) — Coming Soon

Looking for the foundation? Check out my previous series: The Zero-Glue AI Mesh with MCP.
