Live build tracking

Like a tracking number,
but for your analysis.

Every analysis you request gets a live tracking page. Watch it move through six autonomous build stages — spec, build, render, verify, deploy, publish — in real time. No black box, no waiting in the dark.

mcpanalytics.ai/track?m=user_136__ecommerce__customers__churn_drivers
user_136__ecommerce__customers__churn_drivers
Predict which telco customers will churn and rank the drivers. Logistic regression on the IBM Telco Customer Churn dataset (5,000 rows).
In Progress
Spec · done
Build · done
Render · done
Verify · running
Deploy · pending
Publish · pending
Live
verifier · 39/100 turns · 4m 12s elapsed
Turn 39: Read · screenshots/section-6.jpg
Requested · 12 minutes ago
Started by · MCP Analytics curated
Stages · 3/6

Everything happening, on one page

The tracker is the public window into the build pipeline. Anyone with the module name can watch — no login, no install, no API key.

Six-stage timeline

See exactly where the build is. Done stages get a green checkmark, the active stage pulses orange, future stages stay grey. No spinners, no ambiguity.

Live agent activity

Turn count, model name, last action, elapsed time. When the agent calls Edit on analysis.R, you see it. When it runs Rscript to test, you see it.

Auto-refresh, then stop

Polls every 5 seconds while the build is in flight, then stops automatically the moment the final stage stamps green. No wasted requests.
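
The stop-when-done polling behavior can be sketched as follows. This is a minimal illustration, not the tracker's actual client code; the function names, the `fetch_status` callable, and the set of terminal statuses are assumptions.

```python
import time

# Assumed terminal statuses: polling stops on either outcome.
TERMINAL = {"deployed", "failed"}

def poll_build(fetch_status, interval=5.0, max_polls=720):
    """Call fetch_status every `interval` seconds and stop the
    moment the build reaches a terminal state, so no requests
    are wasted after the build finishes."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    return "timeout"
```

In a real page the 5-second interval would be a timer rather than a blocking loop, but the shape is the same: check, stop on terminal, otherwise wait and check again.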

Stale heartbeat warning

If an agent stops sending heartbeats for more than 60 seconds, the live block turns yellow. You'll know if something's wedged before we do.
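
The staleness rule is simple: compare the last heartbeat timestamp against a 60-second threshold. A sketch, with the function name and return values assumed for illustration:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(seconds=60)  # threshold quoted by the tracker

def heartbeat_state(last_heartbeat, now=None):
    """Return 'live' while heartbeats are recent; 'stale' (the
    yellow live block) once more than 60 seconds pass without one."""
    now = now or datetime.now(timezone.utc)
    return "stale" if now - last_heartbeat > STALE_AFTER else "live"
```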

Shareable links

Every build gets a permalink with the module name in the URL. Drop the link in Slack, paste it in an email, bookmark it. The page loads with the right module already populated.
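
Because the module name rides in a query parameter, the permalink is trivial to construct. A sketch, using the URL format shown at the top of the page (the helper name is an assumption):

```python
from urllib.parse import urlencode

TRACKER_BASE = "https://mcpanalytics.ai/track"  # from the example URL above

def tracking_url(module_name):
    """Build the shareable permalink with the module name in the URL."""
    return f"{TRACKER_BASE}?{urlencode({'m': module_name})}"
```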

Mobile-friendly

Works on phones. The timeline collapses to two rows, the search box stacks vertically, the progress card scales to viewport width. Track your build from anywhere.

The six stages, explained

Every analysis goes through the same pipeline. Four of the six stages are run by a specialized agent (render and deploy are deterministic), every stage stamps its completion timestamp, and every stage is independently retryable on failure.

1. Spec (~3 min)

The spec_drafter agent reads your dataset, identifies the columns, and produces a detailed analysis plan: what cards the report will have, which statistical methods to use, what the success criteria are. Output: spec.json + tool_config.json.

2. Build (~10 min)

The builder agent writes the actual R analysis code, card by card. It runs each function with your data to validate the output before committing it. If a card crashes, the agent fixes it before moving on.

3. Render (~30 sec)

A non-AI step. The renderer takes the build output and produces report.html plus screenshots of every card section using a headless browser. Fast and deterministic: same input, same output.

4. Verify (~5 min)

The verifier agent reads every screenshot and runs a quality checklist: are charts rendered, are labels readable, do numbers make sense, is anything truncated? If anything fails, the fixer kicks off and the build re-runs through render + verify until clean.
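
The render-fix-verify loop can be sketched like this. A minimal illustration only: the function names and the retry cap are assumptions, and the real stages are agents rather than plain callables.

```python
MAX_FIX_ATTEMPTS = 3  # the retry cap here is an assumption

def verify_with_fixes(render, verify, fix, max_attempts=MAX_FIX_ATTEMPTS):
    """Run the verify checklist over rendered output; on failure,
    hand the problems to the fixer and re-run render + verify
    until the checklist comes back clean."""
    artifacts = render()
    for _ in range(max_attempts):
        problems = verify(artifacts)  # e.g. truncated labels, blank charts
        if not problems:
            return "clean"
        fix(problems)                 # fixer patches the build
        artifacts = render()          # render is cheap and deterministic
    return "failed"
```
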

5. Deploy (~2 min)

14 deterministic steps. Validates the module, builds the Docker image, pushes to the production server, registers the tool, uploads the dataset, syncs the report to the marketing site. Each step either succeeds or fails; no half-states.
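
A fail-fast step runner captures the "each step either succeeds or fails" property: a failure names the exact step, and nothing after it runs. A sketch under those assumptions (the step names below are examples, not the actual 14 steps):

```python
def run_deploy(steps):
    """Run deterministic deploy steps in order. Each step either
    completes or raises, so a failure pinpoints the exact step
    and leaves no half-states behind it."""
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            return f"failed at {name}: {exc}"
    return "ok"
```
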

6. Publish (~5 min)

The marketing agent generates the free tool page and the technical article around your analysis, complete with screenshots, descriptions, FAQs, and SEO metadata. Your build becomes a public-facing report and a buyer-intent landing page in one shot.

Three states, one page

The tracker shows the same UI for every stage of every build. Here's what each state looks like.

In progress
An agent is actively working on the build. The current stage pulses orange, the live block shows real-time turn counts and the most recent action.
user_136__retail__sales__price_elasticity
How much does demand change when price moves? Log-log regression on 12 months of POS data.
In Progress
Spec · Build · Render · Verify · Deploy · Publish
Live
builder · 23/50 · 6m 14s
Turn 23: Bash · Rscript test_phase4.R
Deployed
All six stages complete. The footer flips to "Build complete and live in production" with a one-click link to run the report yourself.
user_136__hr__employees__attrition_drivers
Which employee attributes drive attrition? Logistic regression with feature importance on the IBM HR Analytics dataset.
Deployed
Spec · Build · Render · Verify · Deploy · Publish
✓ Build complete and live in production
Failed (rare)
A stage hit an unrecoverable error after the fixer's retry attempts. The page shows exactly which stage broke. The team is automatically notified.
user_521__finance__transactions__fraud_score
Anomaly detection on transaction amounts and timestamps. Isolation forest + DBSCAN.
Failed
Spec · Build · Render · Verify · Deploy · Publish
Failed at the Verify stage. Team notified.

Why we show the work

Most AI analytics tools are black boxes. You hit a button, you wait, and a result appears with no provenance. We built the tracker because the work — and your right to see it — is the product.

The black-box AI tool

  • Submit a question, see a spinner
  • No idea if it's running or stuck
  • Different answer every time you ask
  • Can't share progress with your team
  • If it fails, you get a generic error
  • No way to verify what was actually computed

MCP Analytics with the live tracker

  • Submit, get a tracking link instantly
  • See exactly which stage is running and what tool the agent just called
  • Same input always produces the same output, with the code attached
  • Share the tracking URL with anyone — no login required
  • Failures show you the exact stage and why
  • Every report includes the R source and citations

Track your next build

Submit an analysis through the chat, the API, or your MCP client; the build email includes a tracking link. Or paste any module name into the tracker right now.

Open the tracker →
Start a build