Configuration

All behavior in Agent One is driven by a single `config.yaml` file with environment variable substitution (`${ENV_VAR}`). No code changes are needed to add models, channels, or tools.

```yaml
agent:
  name: "Agent One"
  persona: "You are a helpful personal assistant. Be direct and concise."
  memory_db: "./data/memory.db"

llm:
  base_url: "http://localhost:4000"
  models:
    cheap: "deepseek-chat"
    smart: "claude-sonnet-4-6"

channels:
  default: cli
  cli:
    enabled: true
```

This is enough to run Agent One with the CLI channel and two model tiers.

```yaml
agent:
  name: "Agent One"
  persona: "You are Esteban's personal assistant. Respond in Spanish."
  memory_db: "./data/memory.db"
```

| Field | Description | Default |
| --- | --- | --- |
| `name` | Display name for the agent | `"Agent One"` |
| `persona` | System prompt (keep under 300 tokens) | Required |
| `memory_db` | Path to SQLite database for memory | `"./data/memory.db"` |
```yaml
llm:
  base_url: "http://localhost:4000"
  api_key: "${LITELLM_API_KEY}"
  models:
    cheap: "deepseek-chat"
    medium: "claude-haiku-4-5"
    smart: "claude-sonnet-4-6"
  fallback_chains:
    cheap: ["deepseek-chat", "ollama/llama3"]
    smart: ["claude-sonnet-4-6", "deepseek-chat"]
```

The agent's complexity router picks a model tier (`cheap`, `medium`, or `smart`) based on message complexity. LiteLLM resolves which provider serves that model.
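The routing heuristic itself isn't specified here, so the following is only an illustrative sketch of how a complexity router might map a message to one of the configured tiers (the function and its thresholds are hypothetical):

```go
package main

import "fmt"

// routeTier is a hypothetical complexity router: Agent One's real
// heuristic is not documented here. It returns one of the tier keys
// defined under llm.models.
func routeTier(message string, expectedToolCalls int) string {
	switch {
	case expectedToolCalls > 1:
		return "medium" // summaries, multi-tool orchestration
	case len(message) > 400:
		return "smart" // long, analytical, or ambiguous requests
	default:
		return "cheap" // simple lookups, reminders, short replies
	}
}

func main() {
	fmt.Println(routeTier("remind me to call mom at 5pm", 0)) // cheap
}
```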

| Complexity | When used | Typical cost |
| --- | --- | --- |
| `cheap` | Simple lookups, reminders, short replies | ~$0.001/req |
| `medium` | Summaries, multi-tool orchestration | ~$0.005/req |
| `smart` | Analysis, reports, ambiguous requests | ~$0.01/req |
```yaml
budget:
  daily_limit_usd: 1.00
  monthly_limit_usd: 20.00
  alert_at_percent: 80
  fallback_on_limit: cheap
```

When the budget threshold is hit, the agent alerts you. When the hard cap is reached, it degrades to the cheap model instead of stopping entirely.
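That degrade-instead-of-stop behavior can be sketched as follows (a simplified illustration using the daily budget fields above; `Budget` and `pickModel` are not Agent One's actual types):

```go
package main

import "fmt"

// Budget mirrors the budget section of config.yaml (daily fields only,
// for brevity).
type Budget struct {
	DailyLimitUSD   float64
	AlertAtPercent  float64
	FallbackOnLimit string
}

// pickModel honors the requested tier until the daily cap is reached,
// then degrades to the configured fallback tier instead of stopping.
// alert reports whether the alert threshold has been crossed.
func pickModel(b Budget, spentTodayUSD float64, requested string) (tier string, alert bool) {
	alert = spentTodayUSD >= b.DailyLimitUSD*b.AlertAtPercent/100
	if spentTodayUSD >= b.DailyLimitUSD {
		return b.FallbackOnLimit, alert
	}
	return requested, alert
}

func main() {
	b := Budget{DailyLimitUSD: 1.00, AlertAtPercent: 80, FallbackOnLimit: "cheap"}
	fmt.Println(pickModel(b, 1.05, "smart")) // cheap true
}
```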

```yaml
channels:
  default: cli
  cli:
    enabled: true
  whatsapp:
    enabled: false
    device_db: "./data/whatsapp.db"
    allowed_numbers: []
  telegram:
    enabled: false
    bot_token: "${TELEGRAM_BOT_TOKEN}"
  clickup:
    enabled: false
    api_key: "${CLICKUP_API_KEY}"
    listen_spaces: ["my-workspace"]
```

`channels.default` determines where proactive signals (crons, heartbeats, webhooks) send their output. Message signals always respond on the channel they came from.
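The routing rule is simple enough to state in a few lines (an illustrative sketch, not Agent One's code):

```go
package main

import "fmt"

// outputChannel applies the rule above: message signals reply on the
// channel they arrived from; proactive signals (crons, heartbeats,
// webhooks) go to channels.default.
func outputChannel(signalKind, sourceChannel, defaultChannel string) string {
	if signalKind == "message" {
		return sourceChannel
	}
	return defaultChannel
}

func main() {
	fmt.Println(outputChannel("message", "telegram", "cli")) // telegram
	fmt.Println(outputChannel("cron", "", "cli"))            // cli
}
```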

```yaml
signals:
  heartbeat:
    enabled: true
    interval: "30m"
    prompt: "Check for anything urgent and notify me if needed"
  crons:
    - schedule: "0 8 * * 1" # Monday 8am
      prompt: "Weekly summary: open PRs, sprint tasks, purchase total"
    - schedule: "0 17 * * 5" # Friday 5pm
      prompt: "Weekly wrap-up: what did we accomplish?"
  webhooks:
    port: 9090
    endpoints:
      - path: "/github"
        secret: "${GITHUB_WEBHOOK_SECRET}"
        prompt: "Process this GitHub event and notify me if important"
```

Cron schedules use standard five-field cron syntax. Heartbeat intervals use Go duration format (`30m`, `1h`, `45s`).
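Because intervals use Go duration syntax, they can be validated with the standard library's `time.ParseDuration`:

```go
package main

import (
	"fmt"
	"time"
)

// heartbeatInterval parses a heartbeat interval string using the same
// duration syntax the config accepts (30m, 1h, 45s, ...).
func heartbeatInterval(s string) (time.Duration, error) {
	return time.ParseDuration(s)
}

func main() {
	d, err := heartbeatInterval("30m")
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Minutes()) // 30
}
```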

```yaml
tools:
  mcp_servers:
    - name: clickup
      command: "npx"
      args: ["-y", "@anthropic/clickup-mcp"]
      env:
        CLICKUP_API_KEY: "${CLICKUP_API_KEY}"
    - name: github
      command: "npx"
      args: ["-y", "@anthropic/github-mcp"]
      env:
        GITHUB_TOKEN: "${GITHUB_TOKEN}"
    - name: engram
      command: "engram"
      args: []
  builtin:
    purchases:
      enabled: true
    reminders:
      enabled: true
```

Adding an MCP server takes four lines of configuration. The MCP client discovers each server's available tools automatically via the JSON-RPC protocol.
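Discovery works by sending a standard MCP `tools/list` request over the server's stdio transport; the request shape looks like this (a sketch of the wire format only, not Agent One's client code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolsListRequest builds the JSON-RPC 2.0 message an MCP client sends
// to enumerate a server's tools.
func toolsListRequest(id int) ([]byte, error) {
	return json.Marshal(map[string]any{
		"jsonrpc": "2.0",
		"id":      id,
		"method":  "tools/list",
	})
}

func main() {
	req, _ := toolsListRequest(1)
	fmt.Println(string(req)) // {"id":1,"jsonrpc":"2.0","method":"tools/list"}
}
```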

```yaml
hands:
  - name: "linkedin-prospector"
    schedule: "0 9 * * 1-5"
    prompt: "Find LinkedIn prospects matching the target ICP"
    tools: ["browser_navigate", "browser_extract"]
    budget_cap_usd: 0.50
    output_channel: "default"
    approval_required: true
    approval_target: "Esteban"
```

See the Autonomous Hands guide for details.

Any value in `config.yaml` can reference an environment variable with `${VAR_NAME}`:

```yaml
llm:
  api_key: "${LITELLM_API_KEY}"
```

The agent can modify its own config when you ask:

| You say | Agent does |
| --- | --- |
| "Change the Monday report to Tuesday 9am" | Updates `crons[0].schedule` |
| "Add a GitHub MCP server" | Adds an entry to `tools.mcp_servers` |
| "Disable the heartbeat" | Sets `signals.heartbeat.enabled = false` |
| "Set daily budget to $2" | Updates `budget.daily_limit_usd` |

Changes are validated before writing and take effect immediately via hot-reload.
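The validate-then-write step can be sketched as follows (illustrative types and validation rules, not Agent One's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// Config is a tiny stand-in for the parsed config.yaml.
type Config struct {
	DailyLimitUSD     float64
	HeartbeatInterval string
}

// applyUpdate validates a candidate config before committing it, so an
// invalid edit never replaces the live config.
func applyUpdate(live *Config, candidate Config) error {
	if candidate.DailyLimitUSD <= 0 {
		return errors.New("budget.daily_limit_usd must be positive")
	}
	if candidate.HeartbeatInterval == "" {
		return errors.New("signals.heartbeat.interval must be set")
	}
	*live = candidate // commit only after every check passes
	return nil
}

func main() {
	live := Config{DailyLimitUSD: 1.00, HeartbeatInterval: "30m"}
	if err := applyUpdate(&live, Config{DailyLimitUSD: 2.00, HeartbeatInterval: "30m"}); err != nil {
		panic(err)
	}
	fmt.Println(live.DailyLimitUSD) // 2
}
```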