Who Audits the Auditors? Building an LLM-as-a-Judge for Agentic Reliability

We’ve built a powerful Forensic Team. They can find books, analyze metadata, and spot discrepancies using MCP.

But in the enterprise, ‘it seems to work’ isn’t a metric. If an agent misidentifies a $50,000 first edition, the liability is real.

Today, we move from Subjective Trust to Quantitative Reliability. We are building The Judge—a high-reasoning evaluator that audits our Forensic Team against a ‘Golden Dataset’ of ground-truth facts.

Before You Begin

Prerequisites: You should have an existing agentic workflow (see my MCP Forensic Series) and a high-reasoning model (e.g., Claude Opus or GPT-4o) to act as the Judge.

1. The “Golden Dataset”

Before we can grade the agents, we need an Answer Key. We’re creating tests/golden_dataset.json. This file contains the “Ground Truth”—scenarios where we know there are errors.

Example Entry:

{
  "test_id": "TC-001",
  "input": "The Great Gatsby, 1925",
  "expected_finding": "Page count mismatch: Observed 218, Standard 210",
  "severity": "high"
}
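Loading the answer key is a few lines of Python. This is a minimal sketch: the loader function name is mine, but the path and required fields mirror the entry above.

```python
import json
from pathlib import Path

def load_golden_dataset(path: str = "tests/golden_dataset.json") -> list[dict]:
    """Load the ground-truth test cases and sanity-check required fields."""
    cases = json.loads(Path(path).read_text())
    required = {"test_id", "input", "expected_finding", "severity"}
    for case in cases:
        missing = required - case.keys()
        if missing:
            raise ValueError(f"{case.get('test_id', '?')} is missing {missing}")
    return cases
```

Failing fast on a malformed entry matters here: a silently skipped test case is a hole in your answer key.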

Director’s Note: In an enterprise setting, “Reliability” is the precursor to “Permission”. You will not get the budget to scale agents until you can prove they won’t hallucinate $50k errors. This framework provides the data you need for that internal sell.

2. The Judge’s Rubric

A good Judge needs a rubric. We aren’t just looking for “Yes/No.” We want to grade on:

  • Precision: Did it find only the real errors?
  • Recall: Did it find all the real errors?
  • Reasoning: Did it explain why it flagged the record?
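The first two axes are mechanical to compute once findings are normalized. A minimal sketch, treating findings as exact-match strings; real-world matching usually needs fuzzy or LLM-judged comparison, and the Reasoning axis still requires the Judge itself.

```python
def score_findings(expected: set[str], reported: set[str]) -> dict:
    """Score one agent run against the golden answer key.

    Precision: fraction of reported findings that are real.
    Recall:    fraction of real findings the agent caught.
    """
    true_positives = expected & reported
    precision = len(true_positives) / len(reported) if reported else 0.0
    recall = len(true_positives) / len(expected) if expected else 1.0
    return {"precision": precision, "recall": recall}
```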

3. Refactoring for Resilience

Before building the Judge, we had to address a common “Senior-level” trap: hardcoding agent logic. Based on architectural reviews, we moved our system prompts from the Python client into a dedicated config/prompts.yaml.

This isn’t just about clean code; it’s about Observability. By decoupling the “Instructions” from the “Execution,” we can now A/B test different prompt versions against the Judge to see which one yields the highest accuracy for specific models.
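For illustration, the file might be structured like this. The keys, version tags, and prompt text here are hypothetical; only the config/prompts.yaml location comes from our refactor.

```yaml
# config/prompts.yaml (illustrative structure, not the actual file)
librarian:
  version: v2
  system: |
    You are the Librarian. Use the bibliography tools to gather
    ground-truth metadata. Cite the source of every fact you report.
analyst:
  version: v1
  system: |
    You are the forensic Analyst. Compare observed data against the
    Librarian's metadata and flag every discrepancy with a severity.
```

Versioning each prompt makes the A/B comparison concrete: every Judge score can be tied back to the exact prompt version that produced it.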

4. The Implementation: The Evaluation Loop

We’ve added evaluator.py to the repo. It doesn’t just run the agents; it monitors their “vital signs.”

  • Error Transparency: We replaced “swallowed” exceptions with structured logging. If a provider fails, the system logs the incident for diagnosis instead of failing silently.
  • The Handshake: The loop runs the Forensic Team, collects their logs, and submits the whole package to a high-reasoning Judge Agent.
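Stripped to its essentials, the loop looks something like this. A sketch only: `run_forensic_team` and `judge` are hypothetical async callables standing in for the real agents and the Judge Agent.

```python
async def evaluate(cases: list[dict], run_forensic_team, judge) -> list[dict]:
    """Run each golden test case through the agents, then let the Judge grade it.

    `run_forensic_team` returns (finding, structured_logs);
    `judge` returns a dict of rubric scores.
    """
    report = []
    for case in cases:
        try:
            finding, logs = await run_forensic_team(case["input"])
        except Exception as exc:
            # Error transparency: record the failure instead of swallowing it
            report.append({"test_id": case["test_id"], "error": repr(exc)})
            continue
        verdict = await judge(
            expected=case["expected_finding"], actual=finding, logs=logs
        )
        report.append({"test_id": case["test_id"], **verdict})
    return report
```

Note that the logs travel with the finding: the Judge grades the reasoning trail, not just the final answer.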

The Evaluator-Optimizer Blueprint

This diagram represents our move from “Does the code run?” to “Does the intelligence meet the quality bar?” This closed-loop system is required before we can begin cost optimization: routing simpler tasks to smaller models.

Architectural diagram of an AI Evaluator-Optimizer loop. It shows a Golden Dataset feeding into an Agent Execution layer, which then passes outputs and logs to a Judge Agent for scoring against a rubric. The final Reliability Report provides a feedback loop for prompt tuning and iterative improvement.
The Evaluator-Optimizer Loop: moving from manual vibe-checks to automated, quantitative reliability scoring.

Director-Level Insight: The “Accuracy vs. Cost” Curve

As a Director, I don’t just care about “cost per token.” I care about Defensibility. If a forensic audit is challenged, I need to show a historical accuracy rating. By implementing this Evaluator, we move from “Vibe-checking” to a Quantitative Reliability Score. This allows us to set a “Minimum Quality Bar” for deployment. If a model update or a prompt change drops our accuracy by 2%, the Judge blocks the deployment.
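That “Minimum Quality Bar” can be enforced as a one-function deployment gate in CI. A sketch; the function name and the 2% default are illustrative, not from the repo.

```python
def quality_gate(current_accuracy: float, baseline_accuracy: float,
                 max_regression: float = 0.02) -> bool:
    """Return True if the change may ship.

    Blocks deployment when accuracy drops more than `max_regression`
    below the recorded baseline (the 2% policy described above).
    """
    return (baseline_accuracy - current_accuracy) <= max_regression
```

Wired into CI, this turns the Judge's report from an advisory dashboard into a hard release check.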

The Production-Grade AI Series

  • Post 1: The Judge Agent — You are here
  • Post 2: The Accountant (Cognitive Budgeting & Model Routing) — Coming Soon
  • Post 3: The Guardian (Human-in-the-Loop Handshakes) — Coming Soon

Looking for the foundation? Check out my previous series: The Zero-Glue AI Mesh with MCP.


The Forensic Team: Architecting Multi-Agent Handoffs with MCP

Why One LLM Isn’t Enough—And How to Build a Specialized Agentic Workforce

In my last post, we explored the “Zero-Glue” architecture of the Model Context Protocol (MCP). We established that standardizing how AI “talks” to data via an MCP Server is the “USB-C moment” for AI infrastructure.

But once you have the pipes, how do you build the engine?

In 2026, the answer is no longer “one giant system prompt.” Instead, it’s Functional Specialization. Today, we’re building a Multi-Agent Forensic Team: a group of specialized Python agents that use our TypeScript MCP Server to perform deep-dive archival audits.

The “Context Fatigue” Problem

Early agent architectures relied on a single LLM handling everything:

  • retrieve data
  • reason about it
  • run tools
  • write the final output

Even with large context windows, this approach quickly hits a reasoning ceiling.

A single agent juggling too many tools often suffers from:

  1. Tool Confusion
    Choosing the wrong function when multiple tools are available.
  2. Logic Drift
    Losing track of the objective during multi-step reasoning.
  3. Latency and Cost
    Sequential reasoning loops increase response time and token usage.

The solution is functional specialization.

Instead of one overloaded agent, we build a team of focused agents coordinated by a supervisor.

Before diving into the multi-agent design, it helps to understand where the agents live in the MCP stack.

Figure 1. The MCP architecture stack: agents reason about tasks while MCP standardizes access to tools, resources, and enterprise data.

Layered architecture diagram of an MCP-based AI system showing applications, agent orchestration, the Model Context Protocol layer, tools and resources, and underlying data systems.

The Architecture: A Polyglot Powerhouse

One of MCP’s strengths is that it decouples tools from orchestration.

This allows each layer of the system to use the language best suited for the job.

In our case:

  • The “Hands” (TypeScript)
    Our MCP server handles data access and tool execution with strong typing.
  • The “Brain” (Python)
    A Python orchestrator manages reasoning and agent coordination using frameworks like LangGraph or PydanticAI.

Because both layers communicate through MCP, the language boundary disappears.

Multi-Agent MCP Architecture

Diagram showing a multi-agent architecture using the Model Context Protocol (MCP) with a Python supervisor agent coordinating Librarian and Analyst agents that access tools through a TypeScript MCP server connected to an archive database.
Multi-agent MCP architecture: a Python supervisor coordinates specialized agents that access tools through a shared MCP server.

Each agent communicates with tools through the MCP server, not directly with the data source.

The Forensic Team Roles:

| Role | Agent Identity | Primary Responsibility | MCP Tools Used |
|------|----------------|------------------------|----------------|
| Supervisor | The Orchestrator | Receives requests, manages state, and handles handoffs | `list_tools`, `list_resources` |
| Librarian | The Researcher | Gathers historical facts and archival metadata | `find_book_in_master_bibliography` |
| Analyst | The Forensic Tech | Compares observed data against metadata to find flaws | `audit_artifact_consistency` |

How It Works: Glue-Free Agent Handoffs

The beauty of MCP is the Transport Layer. Our Python client connects to the TypeScript server via stdio. It doesn’t care that the server is written in Node.js; it only cares about the protocol.

  1. Spawning the Sub-process
    In our orchestrator.py, we define how to “wake up” the TypeScript server. Notice how we point Python directly at the Node.js build:
from mcp import StdioServerParameters

def get_server_params() -> StdioServerParameters:
    # This is the bridge: Python spawning a Node.js process
    return StdioServerParameters(
        command="node",
        args=[str(SERVER_ENTRY)],  # Points to our TS /build/index.js
        cwd=str(PROJECT_ROOT),
    )
  2. The Functional Handoff
    Because MCP tools expose strict schemas, the agents can pass structured results between each other without custom translation layers.

The Supervisor doesn’t manually parse JSON or remap fields.

Instead it simply chains the outputs:

# 1. Librarian: pull book details
librarian_result = await librarian_agent(session, title, author)

# 2. Analyst: audit for discrepancies (using Librarian's data)
analyst_result = await analyst_agent(
    session, book_page_id, book_standard, observed
)
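End to end, the handoff pattern reduces to chained tool calls against one session. A sketch using the `call_tool` method of an MCP client session; in the real repo the agents wrap LLM reasoning around these calls, and the tool argument names here are assumptions.

```python
from typing import Any, Protocol

class ToolSession(Protocol):
    """Anything that can invoke an MCP tool by name (e.g. an mcp ClientSession)."""
    async def call_tool(self, name: str, arguments: dict) -> Any: ...

async def supervisor(session: ToolSession, title: str, author: str,
                     observed: dict) -> Any:
    # 1. Librarian: fetch ground-truth metadata through the MCP server
    book = await session.call_tool(
        "find_book_in_master_bibliography",
        {"title": title, "author": author},
    )
    # 2. Analyst: hand the Librarian's structured output straight to the audit
    #    tool -- no JSON parsing or field remapping in between
    return await session.call_tool(
        "audit_artifact_consistency",
        {"book": book, "observed": observed},
    )
```

Because both tools live behind the same session, swapping the TypeScript server for any other MCP-compliant implementation leaves this supervisor untouched.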

Why This Wins in the Enterprise:

Auditability

You can track exactly what each agent saw and what conclusions it produced.

Security

Agent permissions can be scoped by tool access.
The Librarian may only read archives, while the Analyst writes forensic reports.

Maintainability

Each agent owns a single responsibility.
If the forensic logic changes, only the Analyst agent needs to be updated.

Scaling to the “AI Mesh”

By using MCP as the backbone, you’ve built more than an app; you’ve built a System of Intelligence. Any new tool you add to your TypeScript server is instantly “discoverable” by your Python team. You are no longer writing “Glue Code”; you are orchestrating a digital workforce.

The MCP server becomes the shared capability layer for your entire AI system.

📚 The “Zero-Glue” Series
– Post 1: The End of Glue Code: Why MCP is the USB-C Moment for AI
– Post 2: The Forensic Team: Architecting Multi-Agent Handoffs – You are here
– Post 3: From Cloud to Laptop: Running MCP Agents with SLMs – Coming Soon
– Post 4: Enterprise Governance: Scaling MCP with Oracle 26ai – Coming Soon

Explore the Code:

The full multi-agent orchestrator is now live in the /examples folder of the repo:
👉 MCP Forensic Analyzer – Multi-Agent Example

Up Next in the Series:

Next week, we go small. We’re moving the “Forensic Team” out of the cloud and onto your laptop. We’ll explore Edge AI and how to run this entire stack using Small Language Models (SLMs) like Phi-4—no $10,000 GPU required.
