The Backyard Quarry, Part 4: Searching a Pile of Rocks

By this point, the Backyard Quarry has a schema, a capture process, and a growing collection of records.

Each rock has:

  • metadata
  • images
  • possibly a 3D model

In theory, everything is organized.

In practice, it quickly becomes difficult to find anything.

The First Search Problem

With a handful of rocks, you can rely on memory.

You remember roughly where things are.

You recognize shapes and colors.

But as the dataset grows, that breaks down.

You start asking questions like:

  • Which rocks are under 5 pounds?
  • Which ones are suitable for landscaping?
  • Where did that smooth gray stone go?

At that point, you’re no longer dealing with a pile.

You’re dealing with a dataset.

And datasets need to be searchable.

Filtering by Metadata

The most straightforward approach is to use structured queries.

If we have metadata like weight, color, and classification, we can filter directly.

Conceptually:

SELECT *
FROM rocks
WHERE weight_lb < 5
  AND color = 'gray'
  AND rock_class IN ('Pebble Class', 'Hand Sample');

This works well for clearly defined attributes.

It’s predictable.

It’s efficient.

And it’s the foundation of most data systems.

The Role of Classification

This is where the Quarry Taxonomy starts to pay off.

Instead of requiring precise measurements, we can use categories:

  • Pebble Class
  • Hand Sample
  • Landscaping Rock
  • Wheelbarrow Class
  • Engine Block Class

This allows for simpler queries:

  • “Show me everything below Wheelbarrow Class”
  • “Exclude Engine Block Class entirely”

Classification reduces complexity.

It turns continuous values into discrete groups.

This is a common pattern in real-world systems.
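The bucketing described above can be sketched in a few lines. This is a minimal Python sketch: the class names come from the Quarry Taxonomy, but the weight cutoffs are illustrative assumptions, not real thresholds from the series.

```python
# Illustrative weight cutoffs (in pounds) -- the taxonomy's real
# boundaries are assumptions here, chosen only to show the pattern.
WEIGHT_CUTOFFS_LB = [
    (0.25, "Pebble Class"),
    (5, "Hand Sample"),
    (50, "Landscaping Rock"),
    (300, "Wheelbarrow Class"),
]

def classify(weight_lb: float) -> str:
    """Turn a continuous weight into a discrete class."""
    for cutoff, name in WEIGHT_CUTOFFS_LB:
        if weight_lb < cutoff:
            return name
    return "Engine Block Class"

print(classify(3.2))   # a rock under 5 lb lands in Hand Sample
```

Once every rock carries a class label, "show me everything below Wheelbarrow Class" becomes a simple membership check instead of a numeric range query.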

When Metadata Isn’t Enough

Structured queries work well when you know exactly what you’re looking for.

But sometimes you don’t.

Sometimes the question looks more like:

Find rocks that look like this one.

Or:

Find something similar to the smooth stone I saw earlier.

At that point, metadata alone isn’t enough.

We need another way to compare objects.

Similarity and Representation

Images and 3D models contain information that isn’t captured in simple fields like color or weight.

To use that information, we need to represent it in a comparable way.

One approach is to generate embeddings — numerical representations of images or shapes.

Conceptually:

  • each rock image → vector representation
  • similar images → vectors close together
  • dissimilar images → vectors further apart

This allows for similarity search.

Instead of filtering by attributes, we search by resemblance.
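The vector idea can be made concrete with cosine similarity, one common way to compare embeddings. A toy sketch, using made-up 3-dimensional vectors in place of real image embeddings (which would typically have hundreds of dimensions and come from an image model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented embeddings for three rocks.
smooth_gray_stone = [0.9, 0.1, 0.3]
another_gray_stone = [0.8, 0.2, 0.35]
jagged_red_rock = [0.1, 0.9, 0.7]

# The similar pair scores near 1.0; the dissimilar pair scores much lower.
print(cosine_similarity(smooth_gray_stone, another_gray_stone))
print(cosine_similarity(smooth_gray_stone, jagged_red_rock))
```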

A Different Kind of Query

With similarity search, queries look different.

Instead of:

color = 'gray'
weight < 5

We might have:

find nearest neighbors to this image

This shifts the system from exact matching to approximate matching.

It’s less precise.

But often more useful.
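A "find nearest neighbors" query can be sketched as a brute-force distance sort. The catalog IDs and 2-dimensional vectors below are invented for illustration; a real system would use a vector index rather than scanning every record.

```python
import math

# A tiny made-up catalog of rock embeddings.
catalog = {
    "rock_001": [0.9, 0.1],
    "rock_002": [0.85, 0.15],
    "rock_003": [0.2, 0.8],
}

def nearest(query, k=2):
    """Return the k catalog IDs closest to the query vector."""
    return sorted(catalog, key=lambda rid: math.dist(query, catalog[rid]))[:k]

print(nearest([0.88, 0.12]))  # → ['rock_001', 'rock_002']
```

Note that this always returns the k closest matches, even if none of them resemble the query very much, which is exactly the shift from exact to approximate matching.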

A Familiar Pattern

At this point, the Backyard Quarry starts to resemble systems used in:

  • image search engines
  • product recommendation systems
  • digital asset management platforms
  • AI-powered retrieval systems

The objects are different.

The pattern is the same.

Store data.

Index it.

Provide multiple ways to retrieve it.

Combining Approaches

In practice, the most useful systems combine both methods.

Structured filtering:

  • weight
  • class
  • location

Similarity search:

  • appearance
  • shape
  • texture

Together, they provide flexibility.

You can narrow down the dataset and then explore it.

The Cost of Search

Search doesn’t come for free.

It introduces:

  • indexing overhead
  • additional storage
  • preprocessing steps
  • more complex queries

And like everything else in the Quarry system, these tradeoffs become more significant as the dataset grows.

The Realization

At this point, something interesting becomes clear.

The hard part isn’t collecting rocks.

It isn’t even modeling them.

The hard part is making the data usable.

And usability, in most systems, comes down to one thing:

Search.

What Comes Next

With data captured and searchable, the next step is to zoom out.

What we’ve built so far is more than just a rock catalog.

It’s a small example of a larger idea.

In the next post, we’ll look at that idea more directly:

Digital twins.

Because once you can represent, store, and search objects, you’ve taken the first step toward building systems that mirror the physical world.

And somewhere in the process, it becomes clear that even a pile of rocks benefits from thoughtful indexing.

Which is not something I expected to say when this started.

The Rock Quarry Series


Building Your First MCP Server: TypeScript vs. Python

The 5-Minute “Hello World” Comparison

We’ve spent the last month talking about the End of Glue Code and the Enterprise AI Mesh. But if you’re a developer, you don’t just want to see the blueprint—you want to hold the tools.

Whether you are a TypeScript veteran or a Python enthusiast, building an MCP server is surprisingly simple. Today, we’re going to build the same “Hello World” tool in both languages to show you exactly how the protocol abstracts away the complexity.

1. The TypeScript Approach (Node.js)

TypeScript is the “native” language of the Model Context Protocol, and the @modelcontextprotocol/sdk is exceptionally robust for high-performance enterprise tools.

Prerequisites:

npm install @modelcontextprotocol/sdk zod

The Code:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// McpServer provides the high-level tool() API used below; the
// low-level Server class requires wiring request handlers by hand.
const server = new McpServer({
  name: "hello-world-server",
  version: "1.0.0",
});

// Define a simple greeting tool
server.tool(
  "greet_user",
  { name: z.string().describe("The name of the person to greet") },
  async ({ name }) => {
    return {
      content: [{ type: "text", text: `Hello, ${name}! Welcome to the MCP Mesh.` }]
    };
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);

2. The Python Approach

For data scientists and AI engineers, the Python SDK offers a beautifully minimal, decorator-based approach. It feels more “agent-native” and integrates seamlessly with existing AI libraries.

Prerequisites:

pip install mcp

The Code:

from mcp.server.fastmcp import FastMCP

# Initialize FastMCP - the "Quick Start" wrapper
mcp = FastMCP("HelloWorld")

@mcp.tool()
async def greet_user(name: str) -> str:
    """Greets a user by name."""
    return f"Hello, {name}! Welcome to the MCP Mesh."

if __name__ == "__main__":
    mcp.run(transport='stdio')

Side-by-Side: Which Should You Choose?

| Feature    | TypeScript (Standard SDK)         | Python (FastMCP)                 |
|------------|-----------------------------------|----------------------------------|
| Best For   | High-performance, type-safe tools | Rapid prototyping, AI logic      |
| Validation | Zod (explicit and strict)         | Pydantic / type hints (implicit) |
| Verbosity  | Moderate (structured)             | Minimal (decorator-based)        |
| Transport  | STDIO, SSE, Custom                | STDIO, SSE                       |

How to Test Your Server

Once you’ve saved your code, you don’t need a complex frontend to test it. Use the MCP Inspector:

# For TypeScript
npx @modelcontextprotocol/inspector node build/index.js

# For Python
npx @modelcontextprotocol/inspector python your_script.py

This will launch a local web interface where you can perform the “Protocol Handshake” and trigger your tools manually. It’s the best way to verify your “Zero-Glue” infrastructure before connecting it to an agent.

Conclusion

The “Zero-Glue” architecture isn’t about which language you use—it’s about the Protocol. As you can see, the logic for the “Hello World” tool is nearly identical in both versions. The Model Context Protocol ensures that no matter how you build your tools, your agents can discover and use them in a standardized way.

Ready to build your own?

Check out the reference repo for more complex examples, including Notion and Oracle 26ai integrations.

MCP Forensic Analyzer Repository

The “Zero-Glue” Series

What’s Next?

The Mesh is built.
The agents are ready.
But can you trust them?

In my next series, we explore the ‘Science of Reliability’—building the evaluators that turn AI experiments into production-grade systems.
