Building Your First AI Agent in TypeScript
Build a fully functional AI agent in TypeScript using Claude, Next.js, Convex, and OpenClaw — from project setup to tool execution and a working chat UI.
Building your first AI agent can feel overwhelming — there are LLM APIs, tool frameworks, persistence layers, and deployment concerns all competing for your attention. This tutorial cuts through the noise. You'll build a real, working agent in TypeScript that talks to Claude, persists conversations, and executes tools.
For a broader look at what AI agents are and the patterns behind them, check out our complete guide to building AI agents in 2026.
Prerequisites
- Node.js 20+ installed
- Basic TypeScript knowledge
- A Claude API key
- OpenClaw installed locally
- Familiarity with Next.js 16 routing (App Router)
What you'll build
You'll build a minimal but fully working AI agent in TypeScript that:
- Accepts user input via a Next.js API route
- Uses Claude API for reasoning and responses
- Persists conversations in Convex
- Executes safe local actions via OpenClaw tools
- Streams responses to the UI
By the end, you'll have an agent you can run locally and extend into a production-grade system.
1) Create the project scaffold
Initialize a Next.js 16 app with TypeScript:
npx create-next-app@latest ai-agent-demo --ts --app
cd ai-agent-demo
Install dependencies:
pnpm add @anthropic-ai/sdk convex zod
pnpm add -D @types/node
If you don't have pnpm installed, run npm install -g pnpm first.
Initialize Convex:
npx convex dev
This creates a convex/ folder with schema and functions.
2) Add environment variables
Create .env.local:
ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
CONVEX_URL=YOUR_CONVEX_URL
You'll also need to add Convex keys when you deploy, but local dev is enough for now. When you're ready for production, see our guide on deploying AI apps to Vercel and Convex.
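Optionally, since zod is already a dependency, you can validate these variables once at startup rather than discovering a missing key at request time. A minimal sketch, assuming a new lib/env.ts (the file name is just a suggestion):

// lib/env.ts: fail fast at boot if a required variable is missing or malformed.
import { z } from "zod";

const envSchema = z.object({
  ANTHROPIC_API_KEY: z.string().min(1),
  CONVEX_URL: z.string().url(),
});

// Throws a descriptive ZodError when a variable is absent or not a valid URL.
export const env = envSchema.parse({
  ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
  CONVEX_URL: process.env.CONVEX_URL,
});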
3) Define your Convex schema
Create convex/schema.ts:
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";
export default defineSchema({
conversations: defineTable({
title: v.string(),
createdAt: v.number(),
}),
messages: defineTable({
conversationId: v.id("conversations"),
role: v.union(v.literal("user"), v.literal("assistant"), v.literal("tool")),
content: v.string(),
createdAt: v.number(),
}).index("by_conversation", ["conversationId"]),
});
Create convex/messages.ts:
import { mutation, query } from "./_generated/server";
import { v } from "convex/values";
export const createConversation = mutation({
args: { title: v.string() },
handler: async (ctx, args) => {
const id = await ctx.db.insert("conversations", {
title: args.title,
createdAt: Date.now(),
});
return id;
},
});
export const addMessage = mutation({
args: {
conversationId: v.id("conversations"),
role: v.union(v.literal("user"), v.literal("assistant"), v.literal("tool")),
content: v.string(),
},
handler: async (ctx, args) => {
await ctx.db.insert("messages", {
conversationId: args.conversationId,
role: args.role,
content: args.content,
createdAt: Date.now(),
});
},
});
export const listMessages = query({
args: { conversationId: v.id("conversations") },
handler: async (ctx, args) => {
return await ctx.db
.query("messages")
.withIndex("by_conversation", (q) => q.eq("conversationId", args.conversationId))
.collect();
},
});
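If you want to confirm these functions behave before wiring up the API route, a throwaway script is enough. This is only a sketch: the file path is arbitrary, it assumes CONVEX_URL is set to the URL that npx convex dev prints, and it assumes a TypeScript runner such as npx tsx.

// scripts/convex-smoke-test.ts (hypothetical): create a conversation,
// add a message, and read it back through the generated Convex API.
import { ConvexHttpClient } from "convex/browser";
import { api } from "../convex/_generated/api";

const convex = new ConvexHttpClient(process.env.CONVEX_URL!);

async function main() {
  const conversationId = await convex.mutation(api.messages.createConversation, {
    title: "Smoke test",
  });
  await convex.mutation(api.messages.addMessage, {
    conversationId,
    role: "user",
    content: "Hello from the smoke test",
  });
  console.log(await convex.query(api.messages.listMessages, { conversationId }));
}

main();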
4) Create a Claude client
Add lib/claude.ts:
import Anthropic from "@anthropic-ai/sdk";
export const claude = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY!,
});
This uses the official Anthropic SDK. Make sure your API key is valid and billing is enabled on your Anthropic account.
5) Add an agent prompt and tool schema
Create lib/agent.ts:
import { z } from "zod";
export const agentSystemPrompt = `
You are a helpful AI agent for Amir Brooks.
You can call tools to perform safe local actions.
Always explain your steps briefly.
`;
export const toolSchema = z.object({
name: z.string(),
input: z.record(z.any()),
});
The system prompt is where your agent's personality lives. For advanced techniques on crafting agent identity, see the SOUL.md pattern for AI agent personality.
We use Zod for runtime validation of tool calls — this prevents the agent from hallucinating invalid tool structures.
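As a sketch of how that validation could look in practice (the validateToolCall helper is an illustration, not part of the tutorial files), you can run any candidate call through safeParse before executing it:

// lib/validate.ts (hypothetical): runtime-check a candidate tool call.
import { toolSchema } from "./agent";

export function validateToolCall(rawCall: unknown) {
  const parsed = toolSchema.safeParse(rawCall);
  if (!parsed.success) {
    // Malformed or hallucinated structure: log it and refuse to execute it.
    console.warn("Rejected malformed tool call:", parsed.error.flatten());
    return null;
  }
  return parsed.data; // { name: string; input: Record<string, any> }
}

In step 7, each extracted call could pass through a helper like this before reaching the tool runner.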
6) Build the agent API route
Create app/api/agent/route.ts:
import { NextRequest } from "next/server";
import { claude } from "@/lib/claude";
import { agentSystemPrompt } from "@/lib/agent";
import { ConvexHttpClient } from "convex/browser";
import { api } from "@/convex/_generated/api";
const convex = new ConvexHttpClient(process.env.CONVEX_URL!);
export async function POST(req: NextRequest) {
  const { conversationId, message } = await req.json();

  // Create a conversation on the first message if the client doesn't have one yet.
  const convId =
    conversationId ??
    (await convex.mutation(api.messages.createConversation, {
      title: message.slice(0, 60),
    }));

  // Save user message
  await convex.mutation(api.messages.addMessage, {
    conversationId: convId,
    role: "user",
    content: message,
  });
// Call Claude
const completion = await claude.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 800,
system: agentSystemPrompt,
messages: [{ role: "user", content: message }],
});
const assistantText = completion.content
.filter((c) => c.type === "text")
.map((c) => c.text)
.join("\n");
  await convex.mutation(api.messages.addMessage, {
    conversationId: convId,
    role: "assistant",
    content: assistantText,
  });

  return Response.json({ text: assistantText, conversationId: convId });
}
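You can already exercise the route, for example from the browser console on http://localhost:3000 once the dev servers are running. A quick check (the prompt text is arbitrary; omitting conversationId lets the route create a conversation for you):

const res = await fetch("/api/agent", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Introduce yourself in one sentence." }),
});
console.log(await res.json()); // { text: "...", conversationId: "..." }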
This is a minimal agent — no tools yet. Let's add OpenClaw tool calls next.
7) Add OpenClaw tool integration
In a real setup, OpenClaw exposes tool calls via its runtime. Here's a small mock that you can replace with OpenClaw's actual invocation when you wire it up.
Create lib/tools.ts:
export type ToolCall = {
name: "read" | "write" | "web_fetch";
input: Record<string, any>;
};
export async function runTool(call: ToolCall) {
switch (call.name) {
case "read":
return { result: "[mocked] file contents" };
case "write":
return { result: "[mocked] wrote file" };
case "web_fetch":
return { result: "[mocked] fetched page" };
default:
throw new Error(`Unknown tool: ${call.name}`);
}
}
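For reference, calling the mock from any async context returns the canned result (the input fields here are placeholders):

import { runTool } from "@/lib/tools";

const output = await runTool({ name: "read", input: { path: "README.md" } });
console.log(output); // { result: "[mocked] file contents" }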
Now update app/api/agent/route.ts to detect tool requests. We'll do this with a lightweight convention: if the assistant outputs a JSON block containing tool_calls, we execute them.
import { runTool } from "@/lib/tools";
function extractToolCalls(text: string) {
try {
const jsonStart = text.indexOf("{");
if (jsonStart === -1) return [];
const parsed = JSON.parse(text.slice(jsonStart));
return parsed.tool_calls ?? [];
} catch {
return [];
}
}
export async function POST(req: NextRequest) {
  const { conversationId, message } = await req.json();

  // Create a conversation on the first message if the client doesn't have one yet.
  const convId =
    conversationId ??
    (await convex.mutation(api.messages.createConversation, {
      title: message.slice(0, 60),
    }));

  await convex.mutation(api.messages.addMessage, {
    conversationId: convId,
    role: "user",
    content: message,
  });
const completion = await claude.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 800,
system: agentSystemPrompt,
messages: [{ role: "user", content: message }],
});
const assistantText = completion.content
.filter((c) => c.type === "text")
.map((c) => c.text)
.join("\n");
const toolCalls = extractToolCalls(assistantText);
const toolResults = [] as string[];
for (const call of toolCalls) {
const result = await runTool(call);
toolResults.push(JSON.stringify(result));
}
const finalText = toolResults.length
? `${assistantText}\n\nTools executed:\n${toolResults.join("\n")}`
: assistantText;
  await convex.mutation(api.messages.addMessage, {
    conversationId: convId,
    role: "assistant",
    content: finalText,
  });

  return Response.json({ text: finalText, conversationId: convId });
}
In production, replace runTool with OpenClaw's runtime API so the agent can actually execute safe actions.
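One gap worth closing: the system prompt from step 5 never tells Claude about this tool_calls convention, so the parser will rarely have anything to find. One way to fix that is to extend agentSystemPrompt with the exact format extractToolCalls looks for; the wording below is only a suggestion:

// lib/agent.ts: a possible extension of the system prompt so the model
// emits tool requests in the shape extractToolCalls() expects.
export const agentSystemPrompt = `
You are a helpful AI agent for Amir Brooks.
You can call tools to perform safe local actions.
Always explain your steps briefly.

When you need a tool, end your reply with a single JSON object of the form:
{"tool_calls": [{"name": "read" | "write" | "web_fetch", "input": { ... }}]}
If no tool is needed, do not output any JSON.
`;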
8) Build a simple UI
Create app/page.tsx:
"use client";
import { useState } from "react";
export default function Home() {
  const [input, setInput] = useState("");
  const [messages, setMessages] = useState<string[]>([]);
  // Created by the API route on the first message and returned in the response.
  const [conversationId, setConversationId] = useState<string | null>(null);
async function send() {
if (!input.trim()) return;
const res = await fetch("/api/agent", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
conversationId: "demo",
message: input,
}),
});
    const data = await res.json();
    setConversationId(data.conversationId);
    setMessages((prev) => [...prev, `You: ${input}`, `Agent: ${data.text}`]);
setInput("");
}
return (
<main className="p-6 max-w-2xl mx-auto space-y-4">
<h1 className="text-2xl font-bold">AI Agent Demo</h1>
<div className="space-y-2">
{messages.map((m, i) => (
<div key={i} className="p-2 bg-gray-100 rounded">
{m}
</div>
))}
</div>
<div className="flex gap-2">
<input
value={input}
onChange={(e) => setInput(e.target.value)}
className="border p-2 flex-1"
        placeholder="Ask the agent..."
/>
<button onClick={send} className="bg-black text-white px-4">
Send
</button>
</div>
</main>
);
}
9) Run the agent
In one terminal, keep Convex running:
npx convex dev
In another terminal, start the Next.js dev server:
pnpm dev
Open http://localhost:3000 and chat with your agent.
10) Troubleshooting tips
- Missing Convex URL: Make sure .env.local contains CONVEX_URL.
- Claude API errors: Confirm ANTHROPIC_API_KEY is valid and billing is enabled on your Anthropic account.
- No messages stored: Check the Convex dashboard logs and schema deployment.
- Tool calls ignored: Ensure your assistant outputs the JSON format your parser expects.
Next steps
- Replace the mock runTool with real OpenClaw tool execution
- Add streaming responses using ReadableStream (see the sketch after this list)
- Add agent memory summaries per conversation
- Add authentication to protect your agent routes — see Convex Auth for AI Apps
- Introduce a proper tool selection layer with schemas
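For the streaming item above, here is a minimal sketch of a streaming variant of the route, using the Anthropic SDK's messages.stream helper piped through a ReadableStream. It skips persistence and tools to keep the shape clear, and the route path is only a suggestion:

// app/api/agent/stream/route.ts (suggested location): stream Claude's text
// deltas to the client as plain text.
import { NextRequest } from "next/server";
import { claude } from "@/lib/claude";
import { agentSystemPrompt } from "@/lib/agent";

export async function POST(req: NextRequest) {
  const { message } = await req.json();
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      const claudeStream = claude.messages.stream({
        model: "claude-3-5-sonnet-20241022",
        max_tokens: 800,
        system: agentSystemPrompt,
        messages: [{ role: "user", content: message }],
      });

      // Forward each text delta to the client as soon as it arrives.
      claudeStream.on("text", (delta) => controller.enqueue(encoder.encode(delta)));

      await claudeStream.finalMessage();
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}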
You now have a working TypeScript AI agent that you can extend into a production system. If you want to take this further — auth, memory, multi-agent orchestration, and safety — the AI Agent Masterclass picks up right where this tutorial leaves off.