From n8n to Agents: A Practical Migration Guide
A practical guide you can use to move one workflow into an agent without breaking your business.
Introduction
Last week I said n8n is obsolete. A few people reached out with the obvious follow-up: "Okay, but what does a real migration look like?"
This is that answer. Not philosophy. Not a vision piece. A practical guide you can use to move one workflow into an agent without breaking your business.
If you've built workflows in n8n, Zapier, or Make, you already understand the value: fast automation, clear logic, and no-code-ish convenience. The problem isn't that those tools stopped working. The problem is that the complexity they create doesn't scale with how fast APIs and requirements now change.
The goal here is not "replace everything." It's to show you when and how agents actually make your life easier, and where they don't. We'll use a real lead enrichment pattern that broke on a tiny API change, and a real tool (my gog CLI) to show how tool-based agents actually work in practice.
Why Workflows Hit a Ceiling
Workflow builders are incredible when the logic is stable and predictable. They hit a ceiling when the world outside your workflow shifts.
The maintenance trap looks like this:
- Every API change requires manual updates.
- Every edge case needs explicit branches.
- Debugging means tracing through nodes to find the first incorrect assumption.
- Testing means running the whole chain end-to-end.
That last point is subtle. When a workflow breaks, you often have to re-run the entire chain to isolate the issue, because the problem could be upstream. With a visual workflow, the logic is explicit, but the state isn't. That makes real debugging slower than it should be.
Here is a real example from my own setup.
I built a lead enrichment workflow in n8n. It wasn't complicated:
- Webhook trigger
- HTTP request to an enrichment API
- Function node to score the lead
- Three IF branches to route by score
- Slack notifications for each route
- Error handling nodes at critical points
Twelve nodes. Clear. Maintainable.
Then the enrichment API changed one field: company.size moved to company.employee_count.
That is all it took. The workflow didn't crash. It quietly fell back to "enrichment failed" and routed leads to the wrong queue. I noticed two days later when I saw a pattern in the logs.
Fixing it took 45 minutes: find the break, update the mapping, run tests, re-deploy. This happened multiple times across different integrations. Not because I built the workflow wrong, but because the workflow was brittle by design. You have to pre-program every path. When the API changes, the pre-programmed path breaks.
The core problem is not that workflows are bad. It is that workflows force you to encode the HOW. You are programming the implementation path, not the outcome. When the implementation shifts, your logic collapses.
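The brittleness is easy to see in code. Here is a minimal sketch of the failure mode (the function and variable names are mine, but the field rename matches the incident above):

```python
def extract_company_size(payload: dict):
    # Brittle: hard-codes the exact field path the enrichment API used to return.
    # After the API renamed company.size to company.employee_count,
    # this silently returns None instead of failing loudly.
    return payload.get("company", {}).get("size")

old_response = {"company": {"size": 250}}             # before the API change
new_response = {"company": {"employee_count": 250}}   # after the API change

extract_company_size(old_response)  # 250
extract_company_size(new_response)  # None -> quietly routed as "enrichment failed"
```

Nothing throws, nothing crashes. The workflow keeps running with a None where a number used to be, which is exactly why it took two days to notice.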
What Agents Do Differently
Agents flip the model. You describe the outcome and provide tools. The model chooses the path at runtime.
Workflow thinking:
"If field X exists, map to Y, else try Z. If that fails, route to fallback."
Agent thinking:
"Extract the company name, score the lead based on size and role, then route it."
The same lead enrichment use case becomes a single prompt:
"Process this new lead. Enrich with company data. Score based on company size, industry match, and role seniority. Route 80+ to enterprise, 50-79 to mid-market, below 50 to SMB. Notify via Slack with a score breakdown."
The tools do the heavy lifting. The model decides when and how to use them.
A concrete example: I have a CLI tool called gog that gives programmatic access to Google Workspace (Gmail, Calendar, Drive, Sheets). When I ask, "What's on my calendar tomorrow?" the model calls:
gog calendar list --date tomorrow
I didn't write a workflow that says "if the user asks about the calendar, run this command." I wrote a tool definition that says "this tool reads calendar events." The model matches intent to capability and runs the command when it needs it.
That same pattern applies to your business tools. CRM, enrichment APIs, Slack, Stripe. Define the tool once. Let the agent use it when the intent requires it.
This is the key shift: you describe capabilities, not paths. The model does the routing.
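To make "describe capabilities" concrete, here is what a tool definition might look like in the JSON-schema style most tool-use APIs share. The names and wording here are illustrative, not gog's actual schema:

```python
# A hypothetical tool definition. Note there is no routing logic anywhere:
# the description is the only thing that tells the model when to call it.
calendar_tool = {
    "name": "gog_calendar_list",
    "description": (
        "Reads Google Calendar events. Use when the user asks about "
        "their schedule, meetings, or availability on a given day."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "date": {
                "type": "string",
                "description": "Date to list events for, e.g. 'tomorrow' or '2025-06-01'.",
            },
        },
        "required": ["date"],
    },
}
```

A vague description here is the agent-world equivalent of a broken IF node: the model will call the tool at the wrong times or not at all. Most prompt-tuning effort ends up in these descriptions.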
The Trade-Offs Are Real
Agents are not magic. They trade control for adaptability:
- Predictability: Agents are probabilistic. The same input can produce slightly different output.
- Debugging: You don't trace node execution; you inspect model outputs and logs.
- Cost: Token costs scale with usage and complexity.
- Trust: You are delegating decisions to a model. That can be uncomfortable.
If you need deterministic behavior and strict audits, workflows still win. If your problem is constant edge cases and API drift, agents win.
Step-by-Step: Convert One Workflow
Here is the actual migration path I recommend. Do not skip steps.
Step 1: Pick the right candidate
Good candidates look like this:
- High maintenance (breaks often)
- Lots of conditional branches
- Inputs that change format
- Not mission-critical
Bad candidates look like this:
- High volume, low latency tasks
- Strict audit requirements
- Simple transformations (you do not need reasoning)
- Anything that would be costly if the output varies slightly
If you're unsure, start with a low-stakes workflow that has already caused you pain. You want quick feedback without risk.
Step 2: Write what it DOES (not how)
Turn your workflow into plain English. Focus on inputs, outputs, and goals. Ignore implementation details.
Example from the lead enrichment workflow:
"When a new lead comes in, enrich it with company data. Score it based on company size, industry match, and role seniority. Route to the right team and send a Slack summary with the score breakdown."
This becomes the foundation of your agent prompt. If you cannot describe the workflow clearly, you are not ready to migrate it.
Step 3: Define the tools
List every external service the workflow touches. Each one becomes a tool.
For the lead enrichment example:
- Enrichment API
- CRM or database (to store the enriched lead)
- Slack (to notify the team)
A tool definition is just a name, a description, and parameters. The most important part is the description. That is how the model decides when to use the tool.
If you already have CLI tools (like gog for Google Workspace), you can wrap them as tools without rewriting everything. The agent just needs a clear description of what each tool does.
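Wrapping an existing CLI as a tool can be as thin as rendering the tool's parameters into a command line. A minimal sketch, with a hypothetical helper and template (the agent supplies the parameters; the template pins down the command shape):

```python
import shlex

def build_command(template: str, **params) -> list[str]:
    # Render a CLI invocation from a template with {param} placeholders,
    # split into argv form ready for subprocess.run.
    return shlex.split(template.format(**params))

cmd = build_command("gog calendar list --date {date}", date="tomorrow")
# cmd == ["gog", "calendar", "list", "--date", "tomorrow"]
```

The tool definition the model sees describes what the command does; this helper just turns the model's chosen parameters into something you can execute.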
Step 4: Build the agent prompt
Start with the plain English description. Then add constraints and output requirements. Here is a minimal pattern:
[TASK]
Process new lead submissions. Enrich with company data, score by fit, and route to the appropriate team.
[TOOLS AVAILABLE]
- enrichment_lookup: Fetches company data from the enrichment API
- slack_message: Sends a message to Slack
- crm_create_lead: Creates or updates a lead in the CRM
[CONSTRAINTS]
- Score 0-100 based on: company size, industry match, role seniority
- Route: 80+ -> enterprise, 50-79 -> mid-market, <50 -> SMB
- If enrichment fails, still route and flag for manual review
[OUTPUT]
- Confirmation of routing with score breakdown
- Lead record created or updated in CRM
This is where you bake in the rules that would have been conditionals in a workflow. The difference is that you are not prescribing the exact steps. You are describing what "good" looks like.
Step 5: Test with real data
Do not test on toy inputs. Run the agent on real historical data and compare results to your workflow output.
- Look for failures, not stylistic differences.
- Expect some outputs to be better than the workflow (agents can reason through missing data).
- Track any surprising behavior and adjust the prompt.
If you have 10-20 historical inputs, that is enough to surface the biggest issues.
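One cheap way to run the comparison: keep a deterministic reference implementation of the routing rule and check the agent's choices against it. A sketch under assumptions (the data shape and the agent_route callable are mine, not a real API):

```python
def expected_route(score: int) -> str:
    # Deterministic restatement of the routing rule from the prompt:
    # 80+ -> enterprise, 50-79 -> mid-market, below 50 -> SMB.
    if score >= 80:
        return "enterprise"
    if score >= 50:
        return "mid-market"
    return "SMB"

def find_mismatches(historical: list[dict], agent_route) -> list[dict]:
    # historical: records from past workflow runs, each {"lead": ..., "score": int}.
    # agent_route: callable returning the route the agent chose for a lead.
    return [
        {"lead": item["lead"],
         "got": agent_route(item["lead"]),
         "want": expected_route(item["score"])}
        for item in historical
        if agent_route(item["lead"]) != expected_route(item["score"])
    ]
```

Scores may legitimately differ (the agent can reason through missing data), so treat mismatches as items to review, not automatic failures.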
Step 6: Run in parallel
Before you shut off the workflow, run both systems on live inputs for a week. Compare outputs. Build confidence. This gives you a rollback path if the agent does something unexpected.
Step 7: Monitor and iterate
Agents improve over time if you treat them like software, not magic.
- Log tool calls and outputs.
- Review failures weekly.
- Refine tool descriptions when the model makes bad choices.
- Update prompts when your business rules change.
This is a loop, not a one-time migration.
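Logging tool calls doesn't need infrastructure on day one. A minimal sketch, appending one JSON line per call (the file path and record fields are my assumptions):

```python
import json
import time

def log_tool_call(tool: str, args: dict, output: str,
                  path: str = "tool_calls.jsonl") -> None:
    # One JSON object per line: easy to tail, grep, and load for weekly review.
    record = {"ts": time.time(), "tool": tool, "args": args, "output": output}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

When the weekly review shows the model picking the wrong tool, the fix is usually in the tool description, not the model.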
When NOT to Use Agents
Agents are not a universal upgrade. Keep workflows for:
- High-volume, low-latency tasks. Token costs add up.
- Strict audit requirements. You need deterministic paths.
- Simple transformations. If it's just mapping fields, a workflow is fine.
- Security-critical workflows. Payments, access control, or compliance tasks may require explicit control.
The honest take: agents are better for complexity and adaptability. Workflows are better for speed and certainty. Use the tool that matches the constraint, not the trend.
Resources
If you want to go deeper, start here:
- MCP overview: modelcontextprotocol.io
- Example MCP servers: github.com/modelcontextprotocol/servers
- Claude tool use docs: docs.anthropic.com
And if you want a printable version of this guide with templates and checklists, download the cheatsheet at the end of this post.
Conclusion
The shift is simple but deep: implementation -> intent.
Workflow builders are still useful. But when your automations are breaking because the world changes faster than your nodes, agents give you a new way to build.
Start with one workflow. Pick something that already hurts. Move it to an agent, run it in parallel, and see what changes.
If you do, I want to hear about it. Tag me on LinkedIn and share what you migrated and what surprised you.
Download the "Workflow -> Agent Migration Cheatsheet" here: /resources/workflow-migration-cheatsheet