Add tool contracts, lightweight memory, and safety checks without over-engineering.
As soon as an agent can act, safety and consistency matter more than raw intelligence. This lesson gives you a practical stack: strict tools, minimal memory, and hard guardrails.
Every tool should have one job, one schema, and one permission level. If a tool can "do many things," the model will misuse it.
import { z } from 'zod';

// Strict input contract for a single-purpose email tool: one job, one shape.
const sendEmailInput = z.object({
  to: z.string().email(),
  subject: z.string().min(3).max(120),
  body: z.string().min(20).max(4000),
});

type SendEmailInput = z.infer<typeof sendEmailInput>;
Validate before execution. Never trust model-generated arguments.
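A minimal sketch of that validation step, assuming a hypothetical sendEmail executor that only ever receives parsed, typed input:

// Hypothetical executor; the name is illustrative.
declare function sendEmail(input: SendEmailInput): Promise<void>;

// Parse model-generated arguments before the tool runs; rawArgs is untrusted.
function executeSendEmail(rawArgs: unknown) {
  const parsed = sendEmailInput.safeParse(rawArgs);
  if (!parsed.success) {
    // Fail closed with a structured error instead of guessing at intent.
    throw new Error(`ToolInputInvalid: ${parsed.error.issues[0]?.message}`);
  }
  return sendEmail(parsed.data); // typed as SendEmailInput from here on
}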
You usually need two memory types:
- Short-term working memory: the context of the current run, discarded when the task ends.
- Long-term memory: a small set of durable facts that persists across sessions.
Keep long-term memory tiny and explicit. Store facts, not full transcripts.
// Durable facts only; nothing here should require storing a transcript.
type UserMemory = {
  preferredTone?: 'formal' | 'friendly';
  timezone?: string;
  approvedDomains: string[]; // consumed by the domain guardrail below
};
Retrieve memory at the start of the loop, then write only validated updates at the end.
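A sketch of that read-then-write shape, assuming hypothetical loadMemory, saveMemory, and agentLoop helpers (swap in your actual store and loop):

// Hypothetical helpers; names are illustrative, not a real API.
declare function loadMemory(userId: string): Promise<UserMemory>;
declare function saveMemory(userId: string, memory: UserMemory): Promise<void>;
declare function agentLoop(
  task: string,
  memory: UserMemory,
): Promise<{ output: string; proposedTone?: string }>;

async function runAgentTurn(userId: string, task: string) {
  const memory = await loadMemory(userId); // read once, at the start of the loop
  const result = await agentLoop(task, memory);
  // Write only updates you can validate; never persist raw model output.
  if (result.proposedTone === 'formal' || result.proposedTone === 'friendly') {
    memory.preferredTone = result.proposedTone;
  }
  await saveMemory(userId, memory); // validated updates, at the end
  return result.output;
}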
Guardrails should be enforced in code, not just prompt text.
// Hard guardrail: refuse to send to any domain outside the allow-list.
function assertDomainAllowed(email: string, allowedDomains: string[]) {
  const domain = email.split('@')[1]?.toLowerCase();
  if (!domain || !allowedDomains.includes(domain)) {
    throw new Error('GuardrailBlocked: domain_not_allowed');
  }
}
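In the email tool, this check runs right after schema validation, e.g. assertDomainAllowed(parsed.data.to, memory.approvedDomains), so a blocked domain never reaches the send call even if the prompt is jailbroken.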
Start with read-only tools. Once logs show stable behavior, enable write actions behind approval flags. This staged rollout prevents expensive mistakes and builds confidence you can verify in the logs. Document each guardrail decision in your runbook for faster incident response.
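One way to sketch that staging, with a hypothetical per-tool config (names are illustrative):

// Hypothetical tool registry entry; flip requiresApproval off only after
// logs show stable behavior for that tool.
type ToolConfig = {
  name: string;
  mode: 'read' | 'write';
  requiresApproval: boolean;
};

function canExecute(tool: ToolConfig, humanApproved: boolean): boolean {
  if (tool.mode === 'read') return true;              // read-only tools ship first
  return tool.requiresApproval ? humanApproved : true; // writes gated by approval
}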
Take your Lesson 3 agent and add one guarded write action, one memory lookup, and one schema-validated tool input. Then run five "bad" test cases (invalid email, blocked domain, missing memory, malformed tool input, replayed request). Your agent should fail safely every time.