I asked Claude to build analytics from scratch. It cost 47,000 tokens. The alternative cost 2 lines.

A real comparison: 47,000 tokens and three hours of back-and-forth versus 700 tokens and four minutes. The MCP server that makes the difference.

I was building a SaaS dashboard and asked Claude to add analytics. Here is what happened.

The Waste

It started innocently. I had a FastAPI backend, a halfway-decent frontend, and I wanted to know which features users were actually clicking. Standard stuff. So I typed something like "add analytics tracking to my app" and hit enter.

Claude got to work.

First prompt: database schema. Tables for events, sessions, page views, user properties. Foreign keys. Indexes. Migration scripts. About 15,000 tokens of perfectly reasonable SQLite DDL that I would need to maintain forever.
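To give a sense of scale: the generated schema looked roughly like the condensed sketch below. The table and column names here are hypothetical stand-ins, not Claude's actual output, but the shape is the same: events, sessions, a JSON blob for custom properties, and indexes to keep queries fast.

```python
import sqlite3

# A condensed sketch of the kind of DDL Claude generated.
# Table and column names are illustrative, not the real output.
DDL = """
CREATE TABLE IF NOT EXISTS sessions (
    id INTEGER PRIMARY KEY,
    user_id TEXT NOT NULL,
    started_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS events (
    id INTEGER PRIMARY KEY,
    session_id INTEGER REFERENCES sessions(id),
    name TEXT NOT NULL,
    properties TEXT,  -- JSON blob for custom event properties
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_events_name ON events(name);
"""

def init_db(path=":memory:"):
    """Create the analytics tables and return a connection."""
    conn = sqlite3.connect(path)
    conn.executescript(DDL)
    return conn
```

Multiply this by page views, user properties, and migration scripts and the 15,000 tokens add up fast.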

Second prompt: the event tracking layer. Middleware to capture requests, a utility function to fire custom events, a queue so I was not hammering the DB on every click. Another 18,000 tokens. Still not wired up to the frontend.
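The queue was the core of that layer. Stripped of the FastAPI wiring, the idea is a buffer that batches events in memory and only hits the database once a threshold is reached. This is a simplified sketch with hypothetical names, not the code Claude produced:

```python
import queue
import threading

class EventBuffer:
    """Batch events in memory so each click does not hit the DB.
    A simplified sketch of the tracking layer; names are illustrative.
    """

    def __init__(self, flush_size=50, sink=None):
        self.q = queue.Queue()
        self.flush_size = flush_size
        # sink receives a list of events, e.g. a bulk INSERT
        self.sink = sink or (lambda batch: None)
        self.lock = threading.Lock()

    def track(self, name, **props):
        """Record one event; auto-flush when the buffer fills up."""
        self.q.put({"name": name, "props": props})
        if self.q.qsize() >= self.flush_size:
            self.flush()

    def flush(self):
        """Drain the queue and hand the batch to the sink."""
        with self.lock:
            batch = []
            while not self.q.empty():
                batch.append(self.q.get_nowait())
            if batch:
                self.sink(batch)
            return batch
```

In the generated version, a request middleware called `track()` on every hit and a shutdown hook called `flush()`. Reasonable code. Also entirely unnecessary, as it turned out.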

Third prompt: the dashboard. Charts. A little bar graph for daily active users. A funnel. Retention cohorts, because I got ambitious. Vanilla JS canvas rendering because I was not about to pull in Chart.js for a side project. This one took two follow-up messages to fix a timezone bug and another to make it actually render on mobile.

Token count at this point: somewhere around 47,000. Three hours of back-and-forth. And I had something that technically worked but that I would never trust in production. No data retention policy. No GDPR story. No way to actually understand the data without staring at raw numbers in a table I had built myself.

I had reinvented a wheel that hundreds of people had already reinvented, and done it worse than all of them.

The Discovery

A few weeks later I was setting up a new project and stumbled on IndieStack. The pitch was simple: it is a directory of indie tools — small SaaS products built by solo founders and tiny teams — plus an MCP server that lets your AI assistant search the directory before it starts building something from scratch.

Two commands:

pip install indiestack
claude mcp add indiestack

That is it. Now when I work in Claude, it has access to a curated index of tools: what they do, what they cost, how they are installed, whether the founder is still actively maintaining them.

The Result

Next project. Same situation: I wanted analytics. This time I just asked Claude normally — "I need to add analytics to this app."

Instead of immediately generating schema files, Claude paused and checked IndieStack first. It came back with a suggestion: Simple Analytics. Privacy-first, no cookies, no GDPR headaches. The integration is a single script tag. If I wanted server-side event tracking, that is one more line.

The token count for that entire interaction: around 500 for the suggestion, maybe 200 more when I asked it to show me the integration snippet. Call it 700 tokens total.

I had it running in about four minutes. The dashboard looked better than anything I had built, the data was accurate from day one, and I did not have to think about it again.

The Math

47,000 tokens versus 700 tokens. Roughly a 67x difference.

That is not a made-up comparison. A full analytics feature — schema, event tracking, dashboard UI, charts, bug fixes — will realistically run you 40,000 to 60,000 tokens with any capable LLM. It might be spread across multiple sessions, but the cost accumulates. On Claude Pro you have a context window and a rate limit. Every thousand tokens you spend regenerating infrastructure that already exists in a better form is a thousand tokens you are not spending on the thing that is actually different about your app.

The 47,000-token version also needs maintenance. The 700-token version does not.

The Broader Point

Analytics is just one example. Auth. Payments. Transactional email. PDF invoices. Uptime monitoring. Background job queues. Every one of these has at least one indie tool — often several — that a single founder spent years building and is actively supporting. The tools are good. They are usually cheaper than the big-brand alternatives. They often have better documentation because the person who wrote the code also wrote the docs.

The problem is that your AI does not know they exist. Without a way to search for them, it defaults to building the thing from scratch. Not because building is better — it is almost never better — but because building is what it knows how to do.

Give it the tool. Let it search first.

pip install indiestack

Thirty seconds to install. The next time you ask your AI to add a feature, it will check what has already been built before it starts writing boilerplate.

I wish I had had it three hours earlier.