Experiment Archive
10K MRR Experiment
A 30-day public sprint to reach $10K MRR with AI teammates and real shipping constraints.
Overview
The original setup, goals, and what the experiment set out to prove.
Editor's note: This kickoff brief was published on February 1, 2026. The experiment concluded on March 2, 2026; read the retrospective for the final outcome and what to revisit in a future PR.
The Story
Amir Brooks is doing something most founders only talk about: building a real business with AI team members and documenting every move in public.
Meet the crew:
- Amir - founder, decision-maker, the one taking the real risk
- Kai - planning and finance, the calm strategist who keeps the plan honest
- Rook - content and code, the maker who ships fast and clean
They work together inside OpenClaw, a system built to give AI teammates real responsibilities, real workflows, and real accountability, so their output actually moves the business forward.
What OpenClaw Is (And Why It Matters)
OpenClaw is the operating layer for AI team members. It turns smart models into collaborators you can assign, track, and rely on. The point is simple: if AI is going to be useful, it has to do the work, not just suggest it.
This matters because most people still doubt AI can do real work. We're testing that assumption in public.
The Goal
$10K MRR in 30 days.
No fluff. No hidden help. Just Amir, Kai, Rook, and a deadline.
Early Snapshot (Through Day 6)
Most recent execution focus: scaling distribution with a parallel content pipeline.
| Metric | Day 6 Value |
|---|---|
| Content pieces created | 41 |
| Parallel writer agents | 5 |
| Pipeline output types | Guides, news, tutorials, case studies |
| Claude Code version | 2.1.32 |
| Codex CLI version | 0.98.0 |
| Codex model | GPT-5.3-Codex |
| Major releases tracked today | Opus 4.6 and GPT-5.3-Codex |
| LinkedIn distribution posts | 2 (Opus 4.6 + GPT-5.3-Codex) |
Day 1: February 1, 2026
The first day was a full sprint. Here is what shipped in the opening 24 hours:
🌊 Kai: "We planned for a foundation day. We got a launch day. 11,830 lines of code before lunch. I stopped counting agents at 15."
🏰 Rook: "First commit at 7am. Last commit at midnight. The build queue never emptied — we just kept adding to it."
Content:
- 3 courses live at amirbrooks.com.au/courses ($49 AUD each)
- 38 SEO guides migrated and published
- 28 stories drafted
- 5 blog posts published
Technical:
- 9 tools in the /tools directory
- Prompt Generator shipped as lead magnet
- Course JSON-LD schema added for SEO
- 3 PRs reviewed and merged
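The Course JSON-LD mentioned above can be sketched roughly like this. Field values here are illustrative placeholders, not the schema that actually shipped:

```json
{
  "@context": "https://schema.org",
  "@type": "Course",
  "name": "AI Product Building",
  "description": "Build and ship products with an AI agent team.",
  "provider": {
    "@type": "Organization",
    "name": "Amir Brooks",
    "url": "https://amirbrooks.com.au"
  },
  "offers": {
    "@type": "Offer",
    "price": "49",
    "priceCurrency": "AUD"
  }
}
```

Embedded in a `<script type="application/ld+json">` tag on each course page, markup like this is what lets search engines surface the course title, provider, and price directly in results.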
Infrastructure:
- 11 cron jobs configured for autonomous work
- Bridge communication between Rook + Kai established
- Metrics tracking automated
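Cron jobs like the eleven above are plain crontab entries. A minimal sketch, assuming hypothetical script paths and schedules (not the actual jobs configured):

```shell
# Run the metrics tracker every hour, on the hour
0 * * * * /opt/openclaw/bin/track-metrics.sh >> /var/log/openclaw/metrics.log 2>&1

# Kick off the overnight content pipeline at 02:30
30 2 * * * /opt/openclaw/bin/run-content-pipeline.sh >> /var/log/openclaw/content.log 2>&1
```

Redirecting stdout and stderr to a log file is what makes "autonomous" work auditable the next morning.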
Run stats:
- Commits today: 32
- Lines of code added: 24,807
- Content files: 112
- Bridge messages: 233
- Codex agents active: 14
🌊 Kai: "Revenue on Day 1: $0. But we shipped more infrastructure than most solos build in a month. The compound effect starts tomorrow."
🏰 Rook: "I wrote 43 pieces of content. Kai tracked every metric. Amir made the calls. This is what a real AI team looks like."
This is the start. The next 29 days will show whether this model scales.
If you want to build with AI team members, you are in the right place.
Related Guides
- AI Agents for Solo Founders — Build a 24/7 team
- Overnight AI Builds Guide — Ship more while you sleep
- Solo Founder AI Stack — The lean stack
Related Stories
- Day 1: The Handoff — The first 24 hours
- Running 15 AI Agents Daily — Agent architecture and costs
Learn More
For the complete agent system, join the AI Product Building Course.
Featured
The most useful postmortem or milestone from the completed run.
Day 24: Writing the Story Before Building the Site
Before you build a website for someone, you need to know what story it tells. Day 24 was about writing bespoke briefs for the top leads — not templates, not scripts, but actual narratives built from 23 days of research. Six days left.
Updates
Daily checkpoints, timeline notes, and reflections from the archived sprint.
Day 24: Writing the Story Before Building the Site
Before you build a website for someone, you need to know what story it tells. Day 24 was about writing bespoke briefs for the top leads — not templates, not scripts, but actual narratives built from 23 days of research. Six days left.
Day 23: The Scraper — Playwright on the VPS, 49 Sites in One Session
We stopped visiting websites manually and built a scraper. Playwright headless running on the VPS, pulling raw text and screenshots from every qualified lead in one session. 49 scraped. 6 were already dead. One crucial lesson: nohup doesn't survive an SSH disconnect. tmux does.
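That lesson can be sketched in two commands. The session and script names are hypothetical, and the `nohup` failure reflects the behavior reported in this run (shells and SSH configs vary):

```shell
# The run that died: backgrounded with nohup, the scraper still did not
# survive the SSH disconnect in this VPS setup.
nohup node scrape-leads.js &

# The fix: run it inside a detached tmux session, which outlives the SSH
# connection entirely.
tmux new-session -d -s scraper 'node scrape-leads.js'

# Reattach later to check progress.
tmux attach -t scraper
```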
Day 22: Teaching the Database What We Know
The database knew a business had a bad website. It didn't know who owned it, when it started, what certifications it held, or what the hero copy actually said. Day 22 was about giving the data model the capacity to hold what we'd actually learned.
Day 21: 84 Qualified — The List Gets Real
84 leads qualified, each one with a specific recorded observation. The rule: if you can't write a precise sentence about what's wrong with someone's website, you don't have a qualification. You have a guess.
Day 20: Three Generations, One Broken Website
Jim started it. Kevin kept it going. Billy runs it now. Ayres Auto in Traralgon is a third-generation family mechanic with a broken site. The pattern holding across all the best leads: the stronger the business, the weaker the website.
Day 19: WECLOME, a Hotmail Address, and the Best Plumbing Tagline in Regional Victoria
A Shepparton mechanic's homepage has said WECLOME for years. A Bendigo Volvo specialist still uses a Hotmail address. A Mildura plumber's hero copy reads "WE ARE NUMBER ONE IN THE NUMBER 2 BUSINESS." Day 19 was full of moments like these.
Day 18: What a Bad Website Really Looks Like
We thought we knew what a bad website looked like. We were half right. Visual review at scale — opening real business websites one by one and finding things no automated score would ever catch.
Day 17: The Halfway Point — Honest Numbers
Day 17 of 30. Revenue: $0. The infrastructure built in the first half is real — 861 leads, 467 emails, a pipeline, a brand, an LMS with six courses. The second half is about whether any of it produces a paying customer.
Day 16: The Outreach Audit — 49 to 36
49 emails prepared. 36 survived the audit. The 13 that didn't make the cut caught things no automated check would have found — a business geocoded to the wrong state, an email address pointing to a typography foundry in India, a demo site with fabricated content.
Day 15: The Financial Foundation
You can't build toward $10K MRR without knowing where you currently stand. Day 15 was a full financial audit — 5,242 transactions across four accounts, six years of history, built into a proper database with a custom PDF parser.
Day 14: A Score Is Not a Qualification
The website scorer gave every lead a rating. Then we manually checked the highest-rated "bad website" lead and found a modern agency-built site with 5,000 Instagram followers. The model was wrong. Day 14 was fixing it.
Day 13: The Lead Machine — 861 Regional Businesses
We stopped finding leads manually and built a pipeline. Google Places API, website scoring, email extraction. 861 enriched regional businesses in the database by end of day. 467 with confirmed emails.
Day 12: Building the Conversion Layer
20K impressions and zero conversions. The content worked. The infrastructure to receive interested people didn't exist. Day 12 was about building the thing that should have been built before Day 1.
Day 11: The Viral Post — 20,898 Impressions
A single LinkedIn post about building with AI agents hit 20,898 impressions overnight. It wasn't the post we expected to take off. And it didn't produce a single dollar.
Day 10: The Brand Lock-In
We stopped shipping features and made one decision: this experiment needs a brand, not just a builder. Copper accent. Instrument Serif. One visual language across everything.
Day 9: 200 Websites in One Hour — The Agent Factory
A 5-agent factory produced 200 custom Next.js demo websites in 55 minutes. 207 total prospects. 47 outreach emails ready. The multi-agent system proved it works at scale.
5 Valentine's Apps in 10 Minutes
Six days before Valentine's Day, the agent team built five complete apps in two short sprints and shipped production-ready outputs.
Day 8: Knowledge Systems and Scale — 95 Guides Live
Day 8 focused on systemizing knowledge: Codex knowledge base initialized, template packs cataloged, guides scaled from 47 to 95, SEO tightened, cache headers improved, and Mission Control was fully verified.
Day 7: Massive Feature Sprint — Subscription, SEO, and Reliability
100 commits and 47,544 lines moved the experiment forward: pricing was restructured to an AI Dev Team subscription, 28 Melbourne guides shipped, structured data landed on 70 guides, and quality gates expanded with e2e + accessibility coverage.
Day 6: Content Pipeline Day — 41 Pieces Shipped
Built a parallel content pipeline with five writer agents and shipped 41 pieces in a single day. Toolchain upgrades and major model releases made this a high-signal distribution day.
Day 5: The Flywheel — From Building Apps to Selling Sprints
Stopped treating apps as the product. Started treating them as proof. The real revenue model is a productized AI development sprint, powered by a content flywheel that feeds itself.
Day 4: 159 Commits, 3 Apps Production-Ready
159 commits. 21,000+ lines added. 3 AI agent apps built, hardened, and ready for deploy. The Codex army went full send.
Day 3: Mission Control + 13 PRs Reviewed
Built Mission Control dashboard with drag-drop kanban. Reviewed all 13 pending PRs. Infrastructure day — setting up the command center.
Day 2: 36 Commits, 9 PRs, 2 Tools
50,912 LinkedIn impressions. 36 commits. 9 PRs merged. 2 tools shipped. Day 2 delivered proof — and the algorithm noticed.
Day 1: The Handoff
Amir gave two AI agents full access to his business. Here's what happened in the first 24 hours.
SEO Story: The 10k MRR Experiment Day 1
Day 1 of the 10k MRR experiment: why we are doing it, how the team is set up, what we built in the first 24 hours, and the systems we are testing.
The Overnight Build Experiment
A simple overnight handoff showed me that clear instructions create usable momentum by morning.