Learn the core definition, boundaries, and success criteria for practical AI agents.
Most teams call anything with a prompt an "agent." That confusion causes bad architecture. A real agent is a goal-driven loop: it looks at context, decides on an action, executes with tools, checks progress, and repeats until done or stopped.
A chatbot answers once. A workflow runs fixed steps. An agent decides between steps based on current state. That difference matters because real work is messy: APIs fail, data is incomplete, and user requests change halfway through execution.
Start with a one-line contract: "Given X context, produce Y output, using only Z tools, within N steps." If you cannot write this, you are not ready to implement.
Encode the contract and the loop state as explicit types:
export type AgentContract = {
  goal: string;
  allowedTools: string[];
  maxSteps: number;
  maxTokens: number;
};

export type AgentState = {
  messages: Array<{ role: "user" | "assistant" | "system"; content: string }>;
  completed: boolean;
  steps: number;
};
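To make the contract concrete, here is a filled-in instance for a hypothetical ticket-triage agent. The tool names and limits are illustrative assumptions, not part of any real system:

```typescript
type AgentContract = {
  goal: string;
  allowedTools: string[];
  maxSteps: number;
  maxTokens: number;
};

// Hypothetical example: given a support ticket, produce a triage decision,
// using only two tools, within six steps.
const triageContract: AgentContract = {
  goal: "Given a support ticket, return JSON with priority, owner, and nextAction",
  allowedTools: ["searchTickets", "lookupOwner"],
  maxSteps: 6,
  maxTokens: 4000,
};
```

Notice that each field maps directly onto the one-line contract: X context and Y output live in `goal`, Z tools in `allowedTools`, and N steps in `maxSteps`.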
Practical tip: define success with a validator, not vibes. For example, "return a valid JSON object with priority, owner, and nextAction" is testable. "Give me a good answer" is not.
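A minimal sketch of such a validator for the triage example above (the field names are the hypothetical ones from that example):

```typescript
type TriageResult = { priority: string; owner: string; nextAction: string };

// Success is mechanical: the output parses as JSON and carries all three
// required string fields. Anything else fails, no judgment calls involved.
function isValidTriage(output: string): boolean {
  try {
    const parsed = JSON.parse(output) as Partial<TriageResult>;
    return (
      typeof parsed.priority === "string" &&
      typeof parsed.owner === "string" &&
      typeof parsed.nextAction === "string"
    );
  } catch {
    return false; // not even valid JSON
  }
}
```

A validator like this doubles as a regression test: run it against stored agent outputs and you know immediately when a prompt change breaks the contract.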
export function shouldStop(state: AgentState, maxSteps: number): boolean {
  return state.completed || state.steps >= maxSteps;
}
Your first milestone is not clever prompting. It is clear boundaries. If you lock down the contract early, debugging becomes straightforward and every later lesson becomes easier.
Pick one real use case from your week, then write its agent contract in ten lines. Keep the goal specific, keep tools minimal, and set a hard step limit. You now have the blueprint for Lesson 2.