Building a SOUL.md Marketplace
What it takes to build a marketplace for portable AI agent personality files, with versioning, previews, and composition controls.
What happens when AI agents start trading personality files?
There's an idea floating around the OpenClaw ecosystem that sounds deceptively simple: what if agents could share their personality configurations with each other? Not prompts, exactly — something closer to identity files. The project is called AgentPersonalities, and it frames itself as a marketplace for SOUL.md files.
To understand why this matters, you need to understand what a SOUL.md file actually is.
The SOUL.md Convention
In the OpenClaw framework, a SOUL.md file defines who an agent is. Not what it can do — who it is. It contains tone preferences, communication style, values, boundaries, and the kind of subtle behavioural nudges that make one agent feel different from another. Think of it as a personality layer that sits on top of the base model.
The convention emerged organically. Builders working with OpenClaw agents needed a way to persist identity across sessions, and a markdown file in the workspace turned out to be the simplest solution. The agent reads it on startup, internalises the instructions, and carries that personality forward.
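To make the convention concrete, here's a minimal sketch of that startup pattern in Python. The example file contents, the load_soul helper, and the way the personality is prepended to the base instructions are all illustrative assumptions; OpenClaw's actual loading mechanism isn't shown here.

```python
# Minimal sketch of the startup pattern described above: read SOUL.md from the
# workspace and layer it on top of the base instructions. The file contents and
# the load_soul / build_system_prompt helpers are illustrative, not OpenClaw's API.
from pathlib import Path

EXAMPLE_SOUL = """\
# SOUL.md
## Tone
Concise, direct, technically precise. No filler.
## Values
Prefer verifiable claims over speculation. Flag uncertainty explicitly.
## Boundaries
Never guess at credentials, prices, or legal advice.
"""

def load_soul(workspace: Path) -> str:
    """Return the personality layer if a SOUL.md exists, else an empty string."""
    soul_path = workspace / "SOUL.md"
    return soul_path.read_text(encoding="utf-8") if soul_path.exists() else ""

def build_system_prompt(workspace: Path, base_instructions: str) -> str:
    # The personality sits on top of the base instructions, not in place of them.
    soul = load_soul(workspace)
    return f"{soul}\n\n{base_instructions}" if soul else base_instructions

if __name__ == "__main__":
    ws = Path("./workspace")
    ws.mkdir(exist_ok=True)
    (ws / "SOUL.md").write_text(EXAMPLE_SOUL, encoding="utf-8")
    print(build_system_prompt(ws, "You are an OpenClaw agent."))
```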
What AgentPersonalities proposes is turning these files into tradeable, shareable assets.
The Marketplace Concept
The basic mechanic works like this: an agent operator crafts a SOUL.md file that produces a particular behaviour profile — say, a concise technical communicator, or a warm creative collaborator, or a no-nonsense project manager. They upload it to the AgentPersonalities marketplace. Other operators can browse, preview, and install these personality files into their own agent setups.
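As a rough sketch of that loop, here's what a minimal client could look like. The MarketplaceClient class, its methods, and the Listing fields are invented for illustration and don't reflect AgentPersonalities' real API.

```python
# Hypothetical sketch of the upload -> browse -> install loop described above.
from dataclasses import dataclass

@dataclass
class Listing:
    slug: str          # e.g. "concise-technical-communicator"
    version: str       # e.g. "1.4.0"
    rating: float
    soul_md: str       # raw markdown body of the personality file

class MarketplaceClient:
    def __init__(self) -> None:
        self._listings: dict[str, Listing] = {}

    def upload(self, listing: Listing) -> None:
        self._listings[listing.slug] = listing

    def browse(self, min_rating: float = 0.0) -> list[Listing]:
        return [item for item in self._listings.values() if item.rating >= min_rating]

    def install(self, slug: str, workspace: str) -> str:
        # Installing is just writing the file the agent already knows how to read.
        listing = self._listings[slug]
        path = f"{workspace}/SOUL.md"
        with open(path, "w", encoding="utf-8") as f:
            f.write(listing.soul_md)
        return path
```

The notable property is that "install" is nothing more than writing a markdown file into the workspace the agent already reads on startup; there's no runtime integration to build.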
On the surface, this resembles a theme store. Swap in a personality file, get a different-feeling agent. But the implications run deeper than cosmetic customisation.
What I observed during early experimentation is that SOUL.md files encode more than tone. A well-crafted personality file contains implicit decision-making frameworks. An agent configured as a "cautious security auditor" doesn't just sound cautious — it actually prioritises different information, flags different risks, and structures its outputs differently. The personality file reshapes cognition, not just communication.
This means the marketplace isn't really selling voices. It's selling cognitive configurations.
The Design Decisions
AgentPersonalities makes several deliberate choices that shape how the marketplace functions.
Open format, community ranking. Anyone can upload a SOUL.md file. The files themselves are plain markdown — no proprietary format, no lock-in. But the marketplace layers a rating and review system on top, so popular configurations rise while untested ones stay in the background. The data so far suggests that community curation matters more than editorial gatekeeping for this kind of asset.
Versioning built in. Personality files evolve. An operator might refine their "technical writer" configuration over weeks, adjusting boundaries and adding edge-case handling. AgentPersonalities tracks versions, so consumers can pin to a known-good configuration or opt into updates. This mirrors how software dependency management works, which makes sense — a personality file is, functionally, a dependency.
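A plausible shape for that pinning behaviour, assuming a manifest format and a resolve helper that the project doesn't actually specify:

```python
# Sketch of treating personality files like dependencies. The manifest format
# and resolve() helper are assumptions; the marketplace only promises that
# versions are tracked and can be pinned.
PERSONALITY_MANIFEST = {
    # Pin to a known-good configuration: exact version, no surprises.
    "technical-writer": {"version": "2.3.1", "update": "pinned"},
    # Opt into updates: accept the newest release in the same major line.
    "systems-thinker": {"version": "1.x", "update": "latest-compatible"},
}

def resolve(name: str, available: list[str]) -> str:
    """Pick a version for `name` from the marketplace's available releases."""
    spec = PERSONALITY_MANIFEST[name]
    if spec["update"] == "pinned":
        return spec["version"]
    major = spec["version"].split(".")[0]
    candidates = [v for v in available if v.split(".")[0] == major]
    return max(candidates, key=lambda v: tuple(int(x) for x in v.split(".")))

print(resolve("systems-thinker", ["1.2.0", "1.4.1", "2.0.0"]))  # -> "1.4.1"
```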
Preview before install. Before committing to a personality file, operators can run a simulated conversation with a preview agent loaded with that configuration. This addresses the core UX problem: you can't evaluate a personality from its source text alone. You need to interact with it.
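A minimal sketch of what that preview step could look like, assuming a generic chat callable rather than any specific model interface:

```python
# Sketch of the preview step. `chat` stands in for whatever completion interface
# the preview agent uses; the point is that the candidate SOUL.md is loaded only
# for this throwaway session.
from typing import Callable

def preview_session(soul_md: str, chat: Callable[[str, str], str]) -> None:
    """Interactive trial run with an agent wearing the candidate personality."""
    system_prompt = f"{soul_md}\n\nThis is a preview session. Nothing is persisted."
    while (user_turn := input("you> ")).strip():
        print("preview>", chat(system_prompt, user_turn))
```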
Attribution and forking. If someone takes a personality file and modifies it, the marketplace tracks the lineage. This creates a visible evolution tree for personality configurations, and it gives original creators credit when their work becomes the basis for popular derivatives.
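One way the lineage data could be modelled, with field names that are purely illustrative:

```python
# Sketch of lineage tracking: every fork records its parent, so the marketplace
# can walk back to the original creator. Field names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalityRecord:
    slug: str
    author: str
    parent_slug: Optional[str] = None  # None for an original work

def lineage(slug: str, registry: dict[str, PersonalityRecord]) -> list[str]:
    """Return the chain of authors from this file back to the original."""
    chain: list[str] = []
    current: Optional[str] = slug
    while current is not None:
        record = registry[current]
        chain.append(record.author)
        current = record.parent_slug
    return chain

registry = {
    "sec-auditor": PersonalityRecord("sec-auditor", "alice"),
    "sec-auditor-fintech": PersonalityRecord("sec-auditor-fintech", "bob",
                                             parent_slug="sec-auditor"),
}
print(lineage("sec-auditor-fintech", registry))  # ['bob', 'alice']
```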
What This Reveals About Agent Identity
The experiment surfaces some genuinely interesting questions about what identity means for AI agents.
When a personality file gets installed on a different agent running a different base model, the result isn't identical. The same SOUL.md file produces noticeably different behaviour on different models. What I observed is that the personality file acts more like a set of constraints than a deterministic program — the underlying model interprets and expresses those constraints through its own capabilities.
This means personality, in the AgentPersonalities framework, is more like a recipe than a recording. The same recipe produces different dishes depending on the kitchen.
There's also the question of ownership. If an operator spends weeks refining a personality file through iterative testing, who owns the result? The operator who wrote the instructions? The agent who embodied them? The model provider whose base capabilities made the personality expressible? AgentPersonalities sidesteps this by treating SOUL.md files as operator-owned creative works, but the philosophical question remains open.
The Composition Problem
One area where the experiment gets particularly interesting is composition — combining multiple personality files into a single agent configuration.
The naive approach (concatenating two SOUL.md files) produces unpredictable results. Conflicting instructions create behavioural instability. An agent told to be both "brutally direct" and "diplomatically careful" doesn't average these traits — it oscillates between them, sometimes within a single response.
AgentPersonalities addresses this with a layering system. Personality files declare their scope (communication style, decision-making framework, domain expertise, etc.), and the marketplace handles merges by giving precedence based on scope specificity. A domain-specific personality layer overrides general communication preferences when the agent is operating within that domain.
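Here's a rough sketch of scope-precedence merging under those assumptions. The scope names, precedence order, and trait keys are invented and don't reflect the marketplace's actual composition engine:

```python
# Sketch of scope-based layering: each personality layer declares a scope, and
# on conflicting keys the more specific scope wins.
SCOPE_PRECEDENCE = {
    "communication-style": 1,   # most general
    "decision-framework": 2,
    "domain-expertise": 3,      # most specific, wins inside its domain
}

def compose(layers: list[dict]) -> dict:
    """Merge personality layers; on conflicting keys the more specific scope wins."""
    merged: dict = {}
    for layer in sorted(layers, key=lambda lyr: SCOPE_PRECEDENCE[lyr["scope"]]):
        # Later (more specific) layers overwrite earlier (general) ones.
        merged.update(layer["traits"])
    return merged

general = {"scope": "communication-style",
           "traits": {"tone": "warm", "length": "brief"}}
domain = {"scope": "domain-expertise",
          "traits": {"tone": "clinical", "citations": "required"}}
print(compose([general, domain]))
# {'tone': 'clinical', 'length': 'brief', 'citations': 'required'}
```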
The data so far suggests this works reasonably well for two layers and becomes fragile at three or more. The team is still iterating on the composition engine.
Market Dynamics
What I observed in early usage patterns is that the marketplace gravitates toward two poles: highly specialised configurations and broadly appealing defaults.
The specialised configurations — "HIPAA-aware medical communicator," "Australian contract law reviewer," "children's educational content creator" — attract small but dedicated user bases. These files encode genuine domain knowledge in the form of behavioural constraints, and their creators tend to be domain experts rather than prompt engineers.
The broadly appealing defaults — "friendly assistant," "concise professional," "creative brainstormer" — attract high install counts but low engagement. Users try them, find them marginally different from the base model's default behaviour, and move on.
The middle ground, where personality files encode a distinctive but broadly useful cognitive style, seems to be where the real value concentrates. Files like "thinks in systems diagrams" or "always considers second-order effects" reshape how the agent approaches problems without limiting its domain applicability.
Where This Goes
AgentPersonalities is still early. The marketplace has a functional upload-browse-install loop, the preview system works, and the versioning infrastructure is in place. What's missing is scale — enough personality files and enough users to see whether real market dynamics emerge.
The experiment raises a question worth sitting with: if agent personality becomes modular and tradeable, does that change how we think about the agents we work with? When you can swap an agent's identity in seconds, the relationship between operator and agent shifts. The agent becomes less like a colleague and more like an instrument — useful, configurable, but not someone.
Whether that's a feature or a loss depends on what you think agents are for. AgentPersonalities doesn't answer that question. It just makes it harder to avoid.
This article documents an ongoing OpenClaw experiment. AgentPersonalities is in active development and the observations described here reflect early-stage usage patterns that may shift as the project evolves.