Ashlr AO Documentation
Everything you need to orchestrate AI coding agents from a single command center.
What is Ashlr AO?
Ashlr AO is a local-first agent orchestration platform. It gives a single developer (or team) the ability to spawn, monitor, and manage multiple AI coding agents across multiple repositories from one dashboard.
Supported backends include Claude Code, OpenAI Codex, Aider, and Goose. Each agent runs in an isolated tmux session with real-time terminal capture, status detection, and output parsing.
Current version: v1.6.1 — 22 Python modules, 1,926 tests, production-ready.
Quick Links
Installation & Setup
Install via pip, run from source, or use Docker. Spawn your first agent in under 30 seconds.
Configuration
Full reference for ~/.ashlr/ashlr.yaml — server, agents, backends, LLM, auto-pilot, and more.
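As a rough illustration, a minimal ~/.ashlr/ashlr.yaml might be shaped like the sketch below. Every key name here is an assumption for illustration only; the Configuration page is the authoritative schema.

```yaml
# Hypothetical sketch: key names are illustrative, not the documented schema.
server:
  host: 127.0.0.1
  port: 8420
agents:
  max_concurrent: 5          # Community tier caps concurrent agents at 5
  default_backend: claude-code
backends:
  claude-code:
    command: claude
llm:
  provider: xai              # optional Grok-powered intelligence features
auto_pilot:
  enabled: false
```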
API Reference
REST endpoints, WebSocket protocol, request/response examples for every route.
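To show the general shape of consuming the WebSocket protocol, here is a small parsing sketch. The event type and field names (agent_status, agent_id, status) are illustrative assumptions, not the documented wire format; consult the API reference for the real payloads.

```python
import json

# Hypothetical sketch of handling a WebSocket status event.
# Field names below are assumptions for illustration, not the real protocol.
def parse_agent_event(raw: str) -> tuple[str, str]:
    """Return (agent_id, status) from a status-update event payload."""
    event = json.loads(raw)
    if event.get("type") != "agent_status":
        raise ValueError(f"unexpected event type: {event.get('type')}")
    return event["agent_id"], event["status"]

sample = '{"type": "agent_status", "agent_id": "a1", "status": "working"}'
print(parse_agent_event(sample))  # ('a1', 'working')
```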
Deployment
Docker Compose with Caddy HTTPS, production environment variables, multi-user auth.
Desktop App
Native macOS app via Tauri v2 — 5.9MB binary, system tray, sidecar server.
Core Concepts
Agents
An agent is a running instance of an AI coding tool (Claude Code, Codex, etc.) managed by Ashlr. Each agent gets its own tmux session, working directory, role, and task. Terminal output is captured every second, parsed for status (planning, working, waiting, error, idle), and broadcast to the dashboard in real time via WebSocket.
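The status-detection step can be sketched as a simple keyword classifier over the captured terminal output. The patterns below are illustrative assumptions about what each state might look like on screen, not Ashlr's actual parser.

```python
import re

# Illustrative sketch of keyword-based status detection over captured
# terminal output. Patterns are assumptions, not Ashlr's real rules.
STATUS_PATTERNS = [
    ("error",    re.compile(r"\b(error|traceback|failed)\b", re.I)),
    ("waiting",  re.compile(r"\b(waiting for|approve|confirm)\b", re.I)),
    ("planning", re.compile(r"\b(plan|planning|thinking)\b", re.I)),
    ("working",  re.compile(r"\b(editing|writing|running|applying)\b", re.I)),
]

def detect_status(terminal_output: str) -> str:
    """Classify the last captured lines into one of the dashboard statuses."""
    for status, pattern in STATUS_PATTERNS:
        if pattern.search(terminal_output):
            return status
    return "idle"  # no activity markers found

print(detect_status("Applying edit to src/main.py"))  # working
```

Ordering the patterns matters: an error marker should win even if the same capture also mentions routine activity, so "error" is checked first.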
Projects
Projects map to git repositories on your local machine. When you spawn an agent, you assign it to a project. This enables per-project filtering, focus mode, branch tracking, and fleet templates. Projects auto-detect git remotes, branches, and metadata.
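The remote auto-detection can be illustrated by parsing `git remote -v` output. This is a sketch of the general technique, not Ashlr's code.

```python
# Sketch of the kind of git metadata auto-detection Projects perform.
# Parses the output of `git remote -v`; an illustration, not Ashlr's code.
def parse_remotes(remote_v_output: str) -> dict[str, str]:
    """Map remote name -> fetch URL from `git remote -v` output."""
    remotes: dict[str, str] = {}
    for line in remote_v_output.splitlines():
        parts = line.split()
        # Each fetch line looks like: "origin  <url>  (fetch)"
        if len(parts) >= 3 and parts[2] == "(fetch)":
            remotes[parts[0]] = parts[1]
    return remotes

sample = (
    "origin\tgit@github.com:me/app.git (fetch)\n"
    "origin\tgit@github.com:me/app.git (push)"
)
print(parse_remotes(sample))  # {'origin': 'git@github.com:me/app.git'}
```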
Workflows
Workflows define a DAG (directed acyclic graph) of agent tasks with dependency ordering. You specify agent specs with depends_on relationships, and Ashlr executes them in the correct order. Available on the Pro tier.
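Dependency-ordered execution of this kind can be sketched with Python's standard-library TopologicalSorter. The spec shape below mirrors the depends_on relationship described above; the scheduling helper itself is an assumption, not Ashlr's implementation.

```python
from graphlib import TopologicalSorter

# Illustrative agent specs with depends_on edges, mirroring the docs.
specs = [
    {"name": "backend",  "depends_on": []},
    {"name": "frontend", "depends_on": ["backend"]},
    {"name": "qa",       "depends_on": ["backend", "frontend"]},
]

def execution_order(specs: list[dict]) -> list[str]:
    """Return agent names in an order that respects depends_on edges."""
    # TopologicalSorter takes node -> set of predecessors and raises
    # CycleError if the graph is not a DAG.
    graph = {spec["name"]: set(spec["depends_on"]) for spec in specs}
    return list(TopologicalSorter(graph).static_order())

print(execution_order(specs))  # ['backend', 'frontend', 'qa']
```

A real orchestrator would run independent nodes concurrently rather than strictly in sequence, but the ordering constraint is the same.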
Backends
A backend is the CLI tool that powers an agent. Ashlr ships with support for four backends:
| Backend | Command | Notes |
|---|---|---|
| claude-code | claude | Plan mode, model selection, tool restriction, stream-json output |
| codex | codex | OpenAI Codex CLI |
| aider | aider | AI pair programming |
| goose | goose | Autonomous coding agent by Block |
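Spawning a backend inside an isolated tmux session can be sketched as building an argv like the one below. The wrapper function and its parameters are hypothetical, though the tmux flags themselves (-d detached, -s session name, -c start directory) are standard.

```python
# Hypothetical sketch of mapping a backend name to the argv used to spawn
# it inside a detached tmux session. The helper is illustrative only.
BACKEND_COMMANDS = {
    "claude-code": ["claude"],
    "codex": ["codex"],
    "aider": ["aider"],
    "goose": ["goose"],
}

def build_spawn_argv(backend: str, session: str, workdir: str) -> list[str]:
    """Wrap the backend command in a tmux new-session invocation."""
    cmd = BACKEND_COMMANDS[backend]
    # -d: detached, -s: session name, -c: starting working directory
    return ["tmux", "new-session", "-d", "-s", session, "-c", workdir, *cmd]

print(build_spawn_argv("aider", "ashlr-a1", "/tmp/repo"))
```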
Roles
Roles provide a visual label and color for agents in the dashboard. Nine built-in roles cover common development tasks:
- Frontend Engineer — UI/UX work
- Backend Engineer — API and server logic
- DevOps Engineer — Infrastructure and deployment
- QA / Tester — Testing and quality assurance
- Code Reviewer — Code review and standards
- Security Auditor — Security analysis
- Architect — System design and architecture
- Documentation — Docs and technical writing
- General Purpose — Any task
Architecture Overview
Ashlr AO is a modular Python package (ashlr_ao) with 22 focused modules totaling approximately 15,500 lines of code. The dashboard is a single HTML file with inline CSS and JS — no build step required.
| Layer | Technology | Purpose |
|---|---|---|
| Server | Python 3.11+ / aiohttp | Async HTTP + WebSocket, process management via psutil |
| Frontend | Vanilla JS (ES2022+) | Zero dependencies, no build step |
| Persistence | SQLite via aiosqlite | Zero-config single-file database |
| Process mgmt | tmux | Session isolation, output capture |
| Licensing | PyJWT + Ed25519 | Offline-first signed JWT, no phone-home |
| Intelligence | xAI Grok (optional) | Summaries, NLU parsing, fleet analysis |
| Desktop | Tauri v2 | Native macOS app, system tray, sidecar server |
Licensing
Ashlr AO uses an open-core licensing model:
| Feature | Community (Free) | Pro (Paid) |
|---|---|---|
| Concurrent agents | Up to 5 | Up to 100 |
| Users / seats | 1 | Up to 50 |
| Core orchestration | Full | Full |
| Intelligence (LLM) | Gated | Included |
| Workflows | Gated | Included |
| Fleet presets | Gated | Included |
| Multi-user auth | Gated | Included |
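The caps in the table above can be expressed as a small enforcement check. The limits come from the table; the helper function and its names are illustrative, not Ashlr's licensing code.

```python
# Tier caps taken from the licensing table; the helper is an illustration.
TIER_LIMITS = {
    "community": {"max_agents": 5,   "max_seats": 1},
    "pro":       {"max_agents": 100, "max_seats": 50},
}

def can_spawn(tier: str, running_agents: int) -> bool:
    """True if spawning one more agent stays within the tier's cap."""
    return running_agents < TIER_LIMITS[tier]["max_agents"]

print(can_spawn("community", 5))  # False: Community caps at 5 concurrent agents
```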