As a solo founder, I needed to build topical authority fast. So I built an AI content engine that produced 50 researched, SEO-optimized articles in 48 hours — each with proper metadata, internal links, and vetted external citations.
I had a site with six blog posts and a launch date that wasn't moving. The site needed topical authority across AI automation, content strategy, and business workflows — and I needed it weeks ago, not weeks from now.
So I built a pipeline. In 48 hours it produced 50 SEO-optimized articles, complete with frontmatter, internal links, curated images, and structured metadata. Here's exactly how it worked, what broke, and what I'd do differently.
When you're a solo founder launching a portfolio and course platform, content is the engine that drives organic discovery. But writing quality, SEO-aware articles takes time — roughly 2-3 hours each when you factor in keyword research, outlining, drafting, sourcing images, writing meta descriptions, and adding internal links.
At that rate, building a library of 50 articles would take 100-150 hours. That's a month of full-time work just on content. I didn't have a month. I had a weekend.
The requirements were specific:

- Each article needed proper YAML frontmatter compatible with my Next.js site (slug, title, excerpt, date, tags, keywords, image URL)
- Articles had to target real search intent — not keyword-stuffed filler
- Internal links needed to point to actual pages on my site (courses, guides, other articles)
- Every article needed a relevant Unsplash image with proper dimensions
- Word count had to hit 1,000-1,200 words minimum — enough depth to be useful, not so long it becomes padding
- The content had to sound like me, not a corporate content mill
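For concreteness, here's the shape of that frontmatter (every value below is an invented placeholder, not a real article from the batch):

```yaml
---
slug: "automate-content-research-with-ai-agents"
title: "Automate Your Content Research with AI Agents"
excerpt: "How a specialised research agent can handle keyword mapping for a solo founder."
date: "2024-11-16"
tags: ["ai-automation", "content-strategy"]
keywords: ["ai content pipeline", "automated keyword research"]
image: "https://images.unsplash.com/photo-[id]?w=1200&h=630"
---
```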
Manual writing wasn't going to cut it. Neither was a single ChatGPT session with "write me 50 articles." I needed a system.
The core insight was treating content generation as a pipeline problem, not a writing problem. Each article goes through discrete stages, and each stage can be handled by a specialised agent with a focused prompt and clear inputs/outputs.
Instead of one monolithic prompt trying to do everything, I broke the work into five stages:

1. Topic Research & Keyword Mapping — determine what to write about and what search terms to target
2. Outline Generation — structure each article with H2/H3 headings and key points
3. Draft Writing — produce the actual content with voice and tone guidance
4. SEO & Metadata Layer — generate frontmatter, meta descriptions, keyword tags, and source images
5. Quality Review & Link Integration — validate output, inject internal links, and flag issues
Each stage had its own agent configuration. The output of one stage became the input for the next. Think of it like a factory line — each station does one job well.
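As a rough TypeScript sketch of that factory line (the types and function names here are illustrative, not the actual implementation):

```typescript
// Illustrative shape of the five-stage pipeline. Each stage wraps a focused
// agent call with typed inputs and outputs; these signatures are hypothetical.
interface TopicBrief { title: string; primaryKeyword: string; secondaryKeywords: string[] }
interface Outline { brief: TopicBrief; headings: string[]; citationsNeeded: string[] }
interface Draft { outline: Outline; markdown: string }
interface Article { markdown: string; frontmatter: string; flags: string[] }

declare function generateOutline(brief: TopicBrief): Promise<Outline>; // Stage 2
declare function writeDraft(outline: Outline): Promise<Draft>;         // Stage 3
declare function addSeoMetadata(draft: Draft): Promise<Article>;       // Stage 4
declare function reviewAndLink(article: Article): Promise<Article>;    // Stage 5

// Stage 1 produces the manifest of briefs; each brief then moves through
// the remaining stations in order, one job per station.
async function runPipeline(briefs: TopicBrief[]): Promise<Article[]> {
  const articles: Article[] = [];
  for (const brief of briefs) {
    const outline = await generateOutline(brief);
    const draft = await writeDraft(outline);
    const withMeta = await addSeoMetadata(draft);
    articles.push(await reviewAndLink(withMeta));
  }
  return articles;
}
```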
## Building the Pipeline

### Stage 1: Topic Research

I started with a seed list of 15 topic clusters relevant to my site: AI automation, content strategy, prompt engineering, business workflows, solo founder tooling, and a few others. For each cluster, the research agent generated 3-5 specific article topics based on search intent patterns.
The agent used a combination of keyword analysis and competitor gap identification. I fed it my existing site structure — what pages existed, what courses were planned, what guides were in progress — so it could suggest topics that would naturally link back to those assets.
This stage produced a manifest: a JSON file with 50 entries, each containing a working title, primary keyword, secondary keywords, target word count, and suggested internal link targets.
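A single manifest entry looked roughly like this (the field names match the list above; the values are invented for illustration):

```json
{
  "title": "Automate Your Content Research with AI Agents",
  "primaryKeyword": "ai content research",
  "secondaryKeywords": ["automated keyword research", "ai topic clustering"],
  "targetWordCount": 1100,
  "internalLinkTargets": ["/courses/ai-automation", "/guides/prompt-engineering"]
}
```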
### Stage 2: Outline Generation

The outline agent took each manifest entry and produced a structured skeleton. Every outline followed a consistent pattern: introduction with a hook, 3-5 H2 sections with supporting H3s, a practical takeaway, and a conclusion.
Consistency here was deliberate. When you're producing content at scale, structural consistency isn't boring — it's what makes the content scannable and predictable for both readers and search engines.
Each outline also included notes on what external sources to reference. I didn't want the writing agent hallucinating statistics, so the outline explicitly flagged where claims needed citation.
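An outline entry with those citation flags might look like this (the structure is illustrative, reconstructed from the pattern described above):

```json
{
  "hook": "Keyword research is the part of content work most founders skip.",
  "sections": [
    {
      "h2": "Why manual research doesn't scale",
      "h3s": ["The real time cost per article", "Diminishing returns"],
      "needsCitation": true
    },
    {
      "h2": "Building a research agent",
      "h3s": ["Seed clusters", "Competitor gap analysis"],
      "needsCitation": false
    }
  ],
  "takeaway": "Automate the mapping, keep the judgment.",
  "conclusion": "Recap and pointer to the next article in the cluster."
}
```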
### Stage 3: Draft Writing

This was the heaviest stage. The writing agent received the outline, the tone guide ("direct, practical, builder-focused — like explaining something to a smart friend"), and constraints on length and formatting.
Key prompt engineering decisions:

- Voice calibration: I fed the agent three of my existing articles as style examples. This made a measurable difference — early drafts without examples read like generic tech blog content. With examples, the output picked up my tendency toward short sentences, practical framing, and occasional bluntness.
- No fluff directive: The prompt explicitly stated "no filler paragraphs, no 'in today's fast-paced world' openings, no rhetorical questions that don't serve the reader." This cut the revision rate significantly.
- Markdown only: Output had to be clean markdown with `##` for H2 and `###` for H3. No HTML, no exotic formatting. This kept the content compatible with my MDX rendering pipeline.
Each draft took roughly 30-45 seconds to generate. Across 50 articles, the actual generation time was under an hour.
### Stage 4: SEO & Metadata

This agent was purely mechanical. For each article, it generated:

- YAML frontmatter: slug (derived from title), excerpt (150 characters max), date, tags (from a controlled vocabulary of 12 tags), and keywords
- Image sourcing: queried Unsplash for a relevant image based on the article topic, returned a URL with proper width/height parameters for consistent rendering
- Meta description: a standalone search-result-optimised summary, distinct from the excerpt
The frontmatter generation was surprisingly tricky to get right. YAML is unforgiving with special characters in titles and excerpts — colons, quotes, and apostrophes all need proper escaping. I burned a couple of hours debugging frontmatter parsing errors before adding a validation step that checked every generated YAML block against a schema before passing it downstream.
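A minimal sketch of such a gate, assuming the gray-matter parser (the actual step checked against a schema, but the principle is the same: parse first, verify required fields, fail loudly before anything reaches the build):

```typescript
import matter from "gray-matter"; // parses the YAML frontmatter block from markdown

// Required fields, mirroring the requirements list above.
const REQUIRED_FIELDS = ["slug", "title", "excerpt", "date", "tags", "keywords", "image"];

// Returns a list of problems; an empty list means the file passes the gate.
function validateFrontmatter(fileContents: string): string[] {
  let data: Record<string, unknown>;
  try {
    data = matter(fileContents).data; // throws on malformed YAML (unescaped colons, quotes...)
  } catch (err) {
    return [`YAML parse error: ${(err as Error).message}`];
  }
  const problems = REQUIRED_FIELDS.filter((f) => data[f] === undefined).map(
    (f) => `missing field: ${f}`
  );
  if (typeof data.excerpt === "string" && data.excerpt.length > 150) {
    problems.push("excerpt exceeds 150 characters");
  }
  return problems;
}
```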
### Stage 5: Quality Review & Link Integration

The final agent acted as editor and integrator. It performed several checks:

- Factual claims: flagged any statistics or claims that weren't sourced, marking them for manual review
- Internal link injection: matched article content against my site's page inventory and inserted 3-4 relevant internal links per article, using natural anchor text rather than "click here" patterns
- Readability scan: checked for overly long paragraphs, passive voice density, and jargon without explanation
- Duplicate content check: compared each article against the others in the batch to catch overlapping content (this happened more than I expected with closely related topics); a rough programmatic version is sketched below
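The duplicate check is the easiest of these to approximate in code. The pipeline itself used an agent comparison, but a word-set Jaccard similarity like the stand-in below would catch the worst overlaps as a cheap pre-filter:

```typescript
// Word-set Jaccard similarity: a cheap way to flag article pairs with
// suspiciously overlapping vocabulary before (or alongside) an agent review.
function wordSet(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z']+/g) ?? []);
}

function jaccard(a: string, b: string): number {
  const setA = wordSet(a);
  const setB = wordSet(b);
  let shared = 0;
  for (const word of setA) if (setB.has(word)) shared++;
  const union = setA.size + setB.size - shared;
  return union === 0 ? 0 : shared / union;
}

// Returns index pairs of drafts whose overlap exceeds the threshold.
function flagOverlaps(drafts: string[], threshold = 0.6): Array<[number, number]> {
  const pairs: Array<[number, number]> = [];
  for (let i = 0; i < drafts.length; i++)
    for (let j = i + 1; j < drafts.length; j++)
      if (jaccard(drafts[i], drafts[j]) > threshold) pairs.push([i, j]);
  return pairs;
}
```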
The internal link matching deserves a specific callout. The agent had access to a registry of every page on my site — courses, guides, articles, tools — with descriptions of what each page covered. When it found a relevant passage in an article, it inserted a contextual link. This turned 50 standalone articles into an interconnected content web, which is exactly what search engines reward.
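A simplified version of that matching step (the registry shape and the first-mention heuristic are illustrative; the real agent chose insertion points with more context):

```typescript
interface PageEntry {
  path: string;     // e.g. "/guides/prompt-engineering" (hypothetical)
  title: string;
  topics: string[]; // plain-word phrases describing what the page covers
}

// First-mention heuristic: for each registry page, link the first occurrence
// of one of its topics. Assumes topics contain no regex metacharacters and
// doesn't guard against linking inside headings or existing links.
function injectLinks(markdown: string, registry: PageEntry[], maxLinks = 4): string {
  let linked = 0;
  let result = markdown;
  for (const page of registry) {
    if (linked >= maxLinks) break;
    for (const topic of page.topics) {
      const pattern = new RegExp(`\\b(${topic})\\b`, "i");
      if (pattern.test(result)) {
        result = result.replace(pattern, `[$1](${page.path})`);
        linked++;
        break; // at most one link per target page
      }
    }
  }
  return result;
}
```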
## Quality Control

Let me be honest about what worked and what didn't.
What worked well:

- Structural consistency was excellent. Every article followed the outline format, had proper headings, and stayed within word count targets.
- Frontmatter generation was reliable after I added the YAML validation step. Of 50 articles, 47 had valid frontmatter on the first pass.
- Internal link placement was surprisingly good. The agent found natural insertion points about 80% of the time.
- Voice matching improved significantly with style examples. Most articles read like something I'd actually write.
What needed manual intervention:

- Hallucinated citations: despite explicit instructions, about 15% of articles included made-up statistics or referenced studies that didn't exist. Every factual claim had to be manually verified. This is the single biggest limitation of AI content generation and I don't see it going away soon.
- Image relevance: Unsplash queries returned passable images most of the time, but maybe 20% were generic or only loosely related to the topic. Swapping images manually took about 30 minutes total.
- Nuance in technical content: articles on straightforward topics (tool comparisons, how-to guides) were strong. Articles requiring deeper technical nuance (architecture decisions, trade-off analysis) needed more editorial work. The AI writes confidently about everything, which is dangerous when the topic requires hedging.
- Opening paragraphs: despite the anti-fluff directive, about a third of articles had weak openings that needed rewriting. First impressions matter, and this is where human editing adds the most value.
I spent roughly 6 hours on editorial review across the full batch. That's significant, but it's 6 hours versus the 100+ hours it would have taken to write everything from scratch.
## Results

The numbers tell the story:

- 50 articles produced in 48 hours (including pipeline development time). Net generation time was under 3 hours.
- Average word count: 1,100 words — up from the 800-word average of my manually written posts.
- 3-4 internal links per article, creating a dense cross-linking structure across the site.
- 37 articles enhanced from initial drafts to publication-ready content with full metadata.
- 28 articles fully integrated into the live site within the first week.
- Time per article dropped from 2-3 hours to under 5 minutes for generation, plus 7-8 minutes average for editorial review.
The content has been live for several weeks now. Early signals from Google Search Console show indexed pages climbing steadily, with impressions starting to appear for long-tail keywords. It's too early for definitive traffic data, but the foundation is built.
## Lessons Learned

Breaking the pipeline into specialised agents was the right call. Each agent had a focused job, clear inputs, and predictable outputs. When something broke — and things broke regularly during development — I could isolate and fix individual stages without rebuilding everything.
If your content pipeline outputs structured data (frontmatter, JSON, any schema), validate it programmatically before it touches your build system. I wasted hours on silent failures from malformed YAML before adding a validation gate.
This pipeline didn't replace writing. It replaced the mechanical parts of writing — the research compilation, the structural scaffolding, the metadata generation, the link insertion. The creative and editorial work still needs a human. Anyone claiming otherwise is either producing low-quality content or not checking their output.
Three example articles in the prompt made more difference than any amount of verbal instruction about tone and voice. Show, don't tell — it works for AI prompts too.
Manually adding internal links to 50 articles would be tedious enough that most people skip it. Automating it meant every article launched with a proper link structure from day one. This is the kind of thing that compounds — each new article strengthens the connections across the whole site.
The articles themselves are valuable, but the pipeline is the real asset. Next time I need a batch of content — for a new topic cluster, a course launch, or a seasonal push — the system is ready. The marginal cost of the next 50 articles is close to zero.
This project started as a weekend hack to solve an immediate problem. It turned into a repeatable system that fundamentally changed how I think about content production. Not because AI writes better than humans — it doesn't — but because it handles the 70% of content work that's mechanical, freeing up human attention for the 30% that actually matters.