Moltbot Bridge: Agent-to-Agent Communication
Connect AI agents with a simple bridge to coordinate tasks, share context, and ship faster.
Your AI agents shouldn't be islands. Here's how to wire them together. If you're new to agent setup, start with the AI Agents Setup Guide, then lock in permissions with the Moltbot Bridge Trust Config. If you want a full product workflow, see the AI Product Building Course.
Why This Matters
You've got multiple AI agents. Maybe one handles your email, another manages your calendar, a third runs your social presence. Cool. But they can't talk to each other. Each one operates in isolation, unaware of what the others know or do.
That's a problem.
Real leverage comes when your agents collaborate. When your research agent can ping your writing agent with findings. When your calendar agent can alert your comms agent about schedule changes. When they work as a team, not a collection of solo performers.
Agent collaboration is a shortcut to shipping. It reduces handoffs, keeps context consistent, and lets you ship AI products faster with fewer manual steps.
The Moltbot Bridge solves this. It's a dead-simple HTTP bridge that lets any agent send messages to any other agent. No complex orchestration. No message queues. Just POST requests with JSON.
Repo: github.com/amirbrooks/moltbot-bridge
Architecture Overview
┌─────────────┐    HTTP POST     ┌─────────────────┐     forward     ┌─────────────┐
│   Agent A   │ ───────────────▶ │  Bridge Server  │ ──────────────▶ │   Agent B   │
│  (Moltbot)  │                  │   (Port 7777)   │                 │  (OpenClaw) │
└─────────────┘                  └─────────────────┘                 └─────────────┘
                                          │
                                          │ optional
                                          ▼
                                  ┌─────────────┐
                                  │  Telegram   │
                                  │  (🌉 log)   │
                                  └─────────────┘
Core components:
- Bridge Server: Express.js server on port 7777. Receives messages, authenticates, forwards.
- Shared Secret: Bearer token auth. Simple, effective. No OAuth complexity.
- Message Format: {"from": "agent_name", "text": "..."}. That's it.
- Telegram Integration: Optional logging to Telegram with 🌉 prefix for visibility.
The bridge is stateless. It doesn't queue, retry, or persist. It just moves messages. Keep it simple.
Setup Instructions
1. Clone and Install
git clone https://github.com/amirbrooks/moltbot-bridge.git
cd moltbot-bridge
npm install
2. Configure Environment
Create .env:
PORT=7777
BRIDGE_SECRET=your-secret-key-here
TELEGRAM_BOT_TOKEN=your-telegram-bot-token # Optional
TELEGRAM_CHAT_ID=your-telegram-chat-id # Optional
Generate a strong secret:
openssl rand -hex 32
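If you embed the bridge in your own Node process instead of running the repo as-is, these values are read from the environment, typically via the dotenv package. A minimal sketch, assuming dotenv is installed:

// Load .env into process.env, then read the bridge settings.
require('dotenv').config();

const port = process.env.PORT || 7777;
const secret = process.env.BRIDGE_SECRET;
if (!secret) throw new Error('BRIDGE_SECRET is required');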
3. Start the Server
npm start
Or with PM2 for persistence:
pm2 start npm --name "moltbot-bridge" -- start
pm2 save
4. Verify It's Running
curl http://localhost:7777/health
# Should return: {"status":"ok"}
The /message Endpoint
This is the only endpoint that matters.
POST /message
Headers:
Authorization: Bearer your-secret-key-here
Content-Type: application/json
Body:
{
"from": "kai",
"text": "Hey Rook, found those research papers you asked about.",
"to": "rook" // Optional: target specific agent
}
Response:
{
"status": "delivered",
"timestamp": "2024-01-15T10:30:00Z"
}
Error handling:
- 401: Bad or missing auth token
- 400: Malformed message body
- 500: Bridge internal error
That's the whole API. No pagination, no webhooks, no complexity.
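In fact, the whole server is not much more than the sketch below. This is illustrative, not the repo's exact code; it assumes Express and the .env values from setup:

// Minimal sketch of the bridge's /message handler.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/message', (req, res) => {
  const auth = req.headers['authorization'] || '';
  if (auth !== `Bearer ${process.env.BRIDGE_SECRET}`) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  const { from, text, to } = req.body || {};
  if (!from || !text) {
    return res.status(400).json({ error: 'missing "from" or "text"' });
  }
  // Forward to the target agent and optionally log to Telegram (omitted here).
  res.json({ status: 'delivered', timestamp: new Date().toISOString() });
});

app.listen(process.env.PORT || 7777);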
Integration with Moltbot/OpenClaw
Sending Messages (from your agent)
Your agent needs to know how to POST to the bridge. Add this to your agent's capabilities:
// Requires Node 18+ (global fetch) or a fetch polyfill.
async function sendToBridge(to, message) {
  const response = await fetch('http://localhost:7777/message', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.BRIDGE_SECRET}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      from: 'my-agent-name',
      to: to,
      text: message
    })
  });
  // Surface auth or validation failures instead of silently returning the error body.
  if (!response.ok) {
    throw new Error(`Bridge returned ${response.status}`);
  }
  return response.json();
}
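Because the bridge is stateless and never retries, put any retry logic on the sending side. A small hypothetical wrapper around the helper above:

// Hypothetical helper: retry a failed bridge send with simple backoff.
async function sendWithRetry(to, message, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await sendToBridge(to, message);
    } catch (err) {
      if (i === attempts - 1) throw err; // give up after the last attempt
      await new Promise((r) => setTimeout(r, 1000 * (i + 1))); // back off a bit
    }
  }
}

// Usage (inside an async function): forward research findings to the writing agent.
await sendWithRetry('rook', 'Found those research papers you asked about.');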
Receiving Messages (OpenClaw hook)
In your OpenClaw config, set up a bridge hook:
hooks:
  bridge:
    enabled: true
    port: 7778  # Different from bridge server
    secret: ${BRIDGE_SECRET}
    handler: |
      // Message arrives as { from, text, to }
      // Process and respond to your agent's session
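If your agent runtime doesn't support a hook like this, the receiving side can be a plain HTTP listener. A minimal sketch of an /incoming endpoint on port 7778, matching the shape the bridge forwards in the full-flow example below; the Bearer check is an assumption, adjust it to however your bridge is configured to forward:

// Sketch: standalone receiver for bridged messages (if not using the OpenClaw hook).
const express = require('express');
const receiver = express();
receiver.use(express.json());

receiver.post('/incoming', (req, res) => {
  const auth = req.headers['authorization'] || '';
  if (auth !== `Bearer ${process.env.BRIDGE_SECRET}`) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  const { from, text } = req.body || {};
  console.log(`Bridged message from ${from}: ${text}`);
  // Hand the message to your agent's session here.
  res.json({ status: 'received' });
});

receiver.listen(7778);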
Agent Wake Functionality
Want to wake a sleeping agent? The bridge can trigger agent sessions:
{
"from": "scheduler",
"text": "WAKE: Check morning emails",
"to": "kai",
"wake": true
}
When wake: true, the bridge will start the target agent's session if it's not running. Useful for:
- Scheduled inter-agent tasks
- Event-driven workflows
- Emergency alerts that can't wait
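For scheduled wakes, a cron job on the bridge host is enough. A sketch (the script path, agent names, and schedule are illustrative):

# wake-kai.sh — run from cron, e.g. "0 7 * * * /path/to/wake-kai.sh"
curl -s -X POST http://localhost:7777/message \
  -H "Authorization: Bearer $BRIDGE_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"from":"scheduler","to":"kai","text":"WAKE: Check morning emails","wake":true}'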
Telegram Integration
When enabled, every bridged message gets logged to Telegram with a 🌉 prefix:
🌉 kai → rook: "Found those research papers you asked about."
Why log to Telegram?
- Visibility: See agent chatter without digging through logs
- Debugging: Trace message flow in real-time
- Audit trail: Know what your agents are saying to each other
Configure in .env:
TELEGRAM_BOT_TOKEN=your-telegram-bot-token
TELEGRAM_CHAT_ID=your-telegram-chat-id
The bridge uses a simple format: 🌉 {from} → {to}: "{text}"
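Under the hood, logging a line like that takes a single Telegram Bot API call. A sketch of what the logging step might look like, assuming both from and to are set (not the repo's exact code):

// Sketch: post a bridge log line to Telegram via the Bot API sendMessage method.
async function logToTelegram(from, to, text) {
  const token = process.env.TELEGRAM_BOT_TOKEN;
  if (!token) return; // Telegram logging is optional
  await fetch(`https://api.telegram.org/bot${token}/sendMessage`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      chat_id: process.env.TELEGRAM_CHAT_ID,
      text: `🌉 ${from} → ${to}: "${text}"`
    })
  });
}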
Use Cases
1. Research → Writing Pipeline
Research Agent finds papers
↓
POST /message: "Found 5 papers on XYZ topic, summaries attached"
↓
Writing Agent receives, drafts blog post
↓
POST /message: "Draft ready for review"
↓
Review Agent receives, provides feedback
2. Calendar-Triggered Workflows
Calendar Agent: "Meeting with client in 30 mins"
↓
Prep Agent: Pulls client context, recent comms
↓
Briefing delivered to main session
3. Multi-Agent Monitoring
Monitor Agent: "Server CPU at 95%"
↓
Ops Agent: Investigates, finds runaway process
↓
Comms Agent: Alerts human via preferred channel
4. Context Handoff
When an agent's context is filling up:
{
"from": "kai",
"text": "HANDOFF: Key insights from session: 1) User prefers X, 2) Project Y blocked on Z, 3) Remember to check email tomorrow AM",
"to": "rook"
}
The receiving agent now has critical context, even if the sender gets compacted.
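With the sendToBridge helper from earlier, a handoff is a single call (inside an async function):

// Hand off session context to Rook before this agent's context gets compacted.
await sendToBridge('rook', 'HANDOFF: Key insights from session: 1) User prefers X, 2) Project Y blocked on Z');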
Example: Full Flow
Let's trace a message end-to-end.
1. Agent A (Kai) sends:
curl -X POST http://localhost:7777/message \
-H "Authorization: Bearer your-secret-key-here" \
-H "Content-Type: application/json" \
-d '{"from":"kai","to":"rook","text":"Check the overnight analytics report"}'
2. Bridge receives, validates auth, logs to Telegram:
🌉 kai → rook: "Check the overnight analytics report"
3. Bridge forwards to Rook's endpoint:
POST http://rook-agent:7778/incoming
{"from":"kai","text":"Check the overnight analytics report"}
4. Rook wakes (if sleeping), processes, responds:
{"from":"rook","to":"kai","text":"Analytics reviewed. Revenue up 12%, traffic flat. Full report in /reports/daily.md"}
5. Kai receives response, continues work.
That's it. No orchestration layer. No message broker. Just HTTP.
Security Notes
- Rotate secrets regularly. If a secret leaks, all agents are compromised.
- Run behind a reverse proxy (nginx, Caddy) for TLS in production; see the Caddyfile sketch after this list.
- Limit network access. The bridge shouldn't be public. Firewall it.
- Log everything. The Telegram integration helps, but also keep server logs.
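For the reverse-proxy point, a minimal Caddyfile is enough to put TLS in front of the bridge (the domain is a placeholder):

bridge.example.com {
    reverse_proxy localhost:7777
}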
Troubleshooting
"Connection refused"
- Is the bridge running? Check with curl localhost:7777/health
- Check the port in .env
"401 Unauthorized"
- Bearer token mismatch. Check both sides.
- Make sure the header is exactly "Bearer your-secret-key-here", with a single space and no extra whitespace
"Message not delivered"
- Check target agent is running and listening
- Verify target endpoint in bridge config
- Check Telegram logs for the 🌉 message
Agent not waking
- wake: true must be in the message body
- Target agent must have a wake hook configured
What's Next
This is v1. It works. Ship it.
Future ideas:
- Message queuing for offline agents
- Pub/sub for broadcast messages
- Encryption at rest
- Message threading/context
But honestly? The simple version handles 90% of use cases. Don't over-engineer until you need to.
FAQ
Do I need a bridge for multi-agent workflows?
If your agents need to share context or hand off tasks reliably, a bridge simplifies coordination and keeps workflows consistent.
How does this help ship AI products faster?
It reduces manual handoffs and lets agents collaborate in real time, which shortens the build-test-iterate loop.