Installation

Agent One requires two things to run:

  1. Go 1.22+ — the agent compiles to a single static binary
  2. LiteLLM — a self-hosted OpenAI-compatible proxy that routes to any LLM provider

Download Go from go.dev/dl or install it with your package manager:

```sh
# macOS
brew install go

# Ubuntu/Debian
sudo apt install golang-go

# Verify
go version   # should print go1.22 or higher
```

LiteLLM is a Python proxy that exposes a single OpenAI-compatible endpoint and routes to 100+ LLM providers. Agent One talks only to LiteLLM — never directly to a provider.

```sh
pip install litellm
```

Create a litellm_config.yaml:

```yaml
model_list:
  - model_name: cheap
    litellm_params:
      model: deepseek/deepseek-chat
      api_key: os.environ/DEEPSEEK_API_KEY
  - model_name: medium
    litellm_params:
      model: anthropic/claude-haiku-4-5
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: smart
    litellm_params:
      model: anthropic/claude-sonnet-4-6
      api_key: os.environ/ANTHROPIC_API_KEY
```

Start the proxy:

```sh
litellm --config litellm_config.yaml
# Runs on http://localhost:4000
```
With the proxy running, clone and build Agent One:

```sh
git clone https://github.com/emutis/agent-one.git
cd agent-one
go build -o agent-one ./cmd/agent
```

This produces a single static binary with no external dependencies.

```sh
# Run with the CLI channel (interactive terminal)
./agent-one

# Or run with the Bubbletea TUI
./agent-one --tui

# Or specify a custom config path
./agent-one --config /path/to/config.yaml
```

On first run, Agent One creates a config.yaml in the current directory (if one doesn’t exist) and initializes a SQLite database for memory at ./data/memory.db.

During development, you can skip the build step:

```sh
go run ./cmd/agent        # CLI mode
go run ./cmd/agent --tui  # TUI mode
```

Once running, type a message in the CLI:

> Hello, are you working?

If LiteLLM is running and configured, the agent responds. If not, you’ll see an error in the logs pointing to the connection issue.