
Introducing Ashlr AO: One Command Center for All Your AI Agents

If you're a developer using AI coding agents in 2026, you've probably experienced the same problem we did: chaos. You have Claude Code running in one terminal tab refactoring your auth module, Codex in another writing tests for the API layer, maybe Aider doing a code review in a third. Each agent lives in its own silo. You're constantly switching tabs, losing context, missing when an agent asks a question, and hoping two agents don't try to edit the same file at the same time.

Today we're open-sourcing Ashlr AO — a local-first agent orchestration platform that gives you a single command center to manage all of your AI coding agents, across all of your repositories, from one dashboard.

The Problem: AI Agents Don't Play Well Together

AI coding agents have become remarkably capable. Claude Code can plan and execute multi-file refactors. Codex can generate entire test suites from a single prompt. Aider and Goose bring their own strengths to pair programming and autonomous coding. But the moment you try to run more than one at a time, the experience falls apart.

The core issues are always the same:

- Fragmented visibility: every agent lives in its own terminal tab, so you're constantly switching contexts just to see what's happening.
- Missed questions: agents block on confirmation prompts, and nothing tells you one is waiting for your input.
- No coordination: two agents can edit the same file at the same time without either of them knowing.

We built Ashlr AO to solve all of these problems at once.

The Solution: One Command Center

Ashlr AO is a Python server that runs locally on your machine. It provides a real-time web dashboard where you can spawn, monitor, pause, resume, restart, and kill AI coding agents. Every agent runs in an isolated tmux session, and Ashlr captures their terminal output every second, parses it for status changes, and streams updates to your browser over WebSocket.

The result is a glassmorphic mission control for your AI fleet. Each agent gets a card showing its role, status, current task summary, and git branch. Cards pulse with the agent's role color when working, turn orange when an agent needs your attention, and flash red on errors. Click any card to dive into the full terminal output with ANSI color rendering, an activity feed of file edits and commands, and a scratchpad for notes.

The entire experience is keyboard-driven. Cmd+N to spawn a new agent. Cmd+K for the command palette. Cmd+Shift+A to see every agent waiting for input. Number keys to jump between agents. You can orchestrate a fleet of ten agents without touching your mouse.

Key Features

Multi-Agent Orchestration

Spawn up to 100 concurrent agents (5 on the free Community tier, up to 100 on Pro). Each agent runs in a tmux session with full process isolation. Ashlr supports four backends out of the box: Claude Code, Codex, Aider, and Goose. Claude Code gets first-class support with plan mode toggling, model selection, tool restriction, and stream-json output mode for non-interactive tasks.
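
A sketch of how a spawn request might be validated against the tier cap and turned into a tmux invocation. The command shape uses standard tmux flags; the function names are ours, not Ashlr's actual API:

```python
# Concurrent-agent caps from the tiers described above.
TIER_LIMITS = {"community": 5, "pro": 100}

def can_spawn(active: int, tier: str) -> bool:
    """True if the tier's concurrent-agent cap allows one more agent."""
    return active < TIER_LIMITS.get(tier, TIER_LIMITS["community"])

def build_spawn_cmd(session: str, workdir: str, agent_cmd: str) -> list[str]:
    """Build a detached tmux session running the agent in its project dir."""
    return ["tmux", "new-session", "-d", "-s", session, "-c", workdir, agent_cmd]
```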

Agents are organized by project and git branch. You can filter your fleet by project, branch, status, or free-text search. Focus mode (Cmd+Shift+F) narrows the dashboard to a single project when you need to concentrate.

Real-Time Dashboard

The dashboard is a single HTML file with inline CSS and JavaScript. No React. No build step. No node_modules. It loads instantly and uses WebSocket for real-time updates. The design language is glassmorphism — translucent cards with backdrop blur, inner light highlights, and deep shadows. It looks good and stays out of your way.

The deep view gives you the full terminal output with ANSI color rendering, line numbers, and search. An activity tab parses agent output into structured events: file reads, file edits, bash commands, test results, errors, and git operations. You can see at a glance what an agent has done without scrolling through raw terminal output.
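
The activity tab's event extraction can be approximated with a small rule table; the patterns below are illustrative, not Ashlr's actual ones:

```python
import re

# Illustrative patterns mapping raw output lines to structured events.
EVENT_PATTERNS = {
    "file_edit": re.compile(r"(?:Wrote|Edited|Updated)\s+(\S+)"),
    "bash":      re.compile(r"^\$\s+(.+)"),
    "test":      re.compile(r"(\d+ passed\b.*)"),
}

def parse_events(output: str) -> list[dict]:
    """Turn raw terminal output into a feed of structured events."""
    events = []
    for line in output.splitlines():
        for kind, pattern in EVENT_PATTERNS.items():
            match = pattern.search(line)
            if match:
                events.append({"kind": kind, "detail": match.group(1)})
                break  # at most one event per line
    return events
```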

Auto-Pilot

For workflows where you trust the agents to proceed without constant supervision, Ashlr includes auto-pilot capabilities. Auto-restart detects when an agent has stalled (no output for a configurable period) and restarts it with the same task. Auto-approve matches agent questions against safe patterns and automatically responds, with a hardcoded never-approve list that blocks dangerous operations like rm -rf, force pushes, and DROP TABLE commands. Rate limiting prevents runaway auto-approvals.
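
In spirit, the auto-approve gate checks the never-approve list before any safe-pattern match. The patterns here are examples for illustration, not the shipped lists:

```python
import re

# Example never-approve patterns; Ashlr's real list is hardcoded and longer.
NEVER_APPROVE = [re.compile(p, re.I) for p in (
    r"rm\s+-rf", r"--force\b|force[- ]push", r"drop\s+table",
)]
# Example safe patterns an operator might opt into.
SAFE_PATTERNS = [re.compile(p, re.I) for p in (
    r"run the tests", r"create (a )?new file",
)]

def should_auto_approve(question: str) -> bool:
    """Deny anything dangerous first; only then consider safe patterns."""
    if any(p.search(question) for p in NEVER_APPROVE):
        return False
    return any(p.search(question) for p in SAFE_PATTERNS)
```

Checking the deny list first means a question that matches both lists is always refused.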

Health-based auto-pause monitors system resources and pauses agents when memory pressure gets critical. Browser notifications alert you to specific events — errors, waiting agents, completions — so you can step away from the dashboard and still stay in the loop.
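
One way to implement health-based auto-pause without flapping is hysteresis: pause above one threshold, resume only below a lower one. The thresholds here are made up for the sketch; the real monitor reads memory pressure via psutil:

```python
def next_state(paused: bool, mem_percent: float,
               pause_at: float = 90.0, resume_at: float = 75.0) -> bool:
    """Return the new paused state given current memory use.

    The gap between pause_at and resume_at prevents rapid pause/resume
    cycling when memory hovers near a single threshold.
    """
    if not paused and mem_percent >= pause_at:
        return True   # pressure critical: pause agents
    if paused and mem_percent <= resume_at:
        return False  # pressure relieved: resume
    return paused     # otherwise hold the current state
```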

Intelligence Layer

When you set an XAI_API_KEY environment variable, Ashlr activates its intelligence layer powered by xAI's Grok. This adds LLM-generated one-line summaries for each agent (replacing the regex-based fallback), natural language command parsing in the command bar ("spawn 3 backend agents on the auth-service repo"), and fleet-level analysis every 30 seconds that detects stuck agents, potential conflicts, and handoff opportunities.

The intelligence layer is entirely optional. Without it, Ashlr falls back to regex-based status detection and summary extraction, which works well for the core experience. The LLM layer is a quality-of-life upgrade, not a requirement.
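
A minimal sketch of what regex-based status detection looks like; the rules below are illustrative, while the real detector covers each backend's actual prompts:

```python
import re

# Illustrative status rules, checked in priority order.
STATUS_RULES = [
    ("error",   re.compile(r"Traceback|error:", re.I)),
    ("waiting", re.compile(r"\?\s*$|\(y/n\)", re.I | re.M)),
    ("working", re.compile(r"\S")),
]

def detect_status(pane_text: str) -> str:
    """Map raw captured output to a coarse agent status."""
    for status, pattern in STATUS_RULES:
        if pattern.search(pane_text):
            return status
    return "idle"
```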

Cross-Agent Coordination

Ashlr detects when two agents are editing the same file and alerts you in real time. Per-agent scratchpads let you leave notes that persist across sessions. Auto-handoff lets you configure an agent to automatically spawn a successor when it completes — for example, a backend agent that hands off to a tester agent with context about what was built.
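
Same-file detection reduces to grouping recent edits by path; a sketch whose data shapes are ours, not Ashlr's internal model:

```python
from collections import defaultdict

def find_conflicts(recent_edits: dict[str, set[str]]) -> dict[str, list[str]]:
    """Map each file touched by two or more agents to those agents.

    recent_edits: agent id -> files that agent edited in the recent window.
    """
    by_file: defaultdict[str, list[str]] = defaultdict(list)
    for agent, files in recent_edits.items():
        for path in sorted(files):
            by_file[path].append(agent)
    return {path: agents for path, agents in by_file.items() if len(agents) > 1}
```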

Fleet templates let you define reusable multi-agent configurations ("Full Stack" might spawn a frontend agent, a backend agent, and a tester) and deploy them to any project with one click. Task templates support variables like {branch}, {project_name}, and {date} that are interpolated at deploy time.
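
Deploy-time interpolation can be as simple as string substitution over a context dict; a sketch, with `render_task` being our name rather than Ashlr's:

```python
def render_task(template: str, context: dict[str, str]) -> str:
    """Replace {variable} placeholders; unknown placeholders pass through."""
    rendered = template
    for key, value in context.items():
        rendered = rendered.replace("{" + key + "}", value)
    return rendered
```

At deploy time the context would carry the live values for {branch}, {project_name}, {date}, and friends.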

Native Desktop App

For developers who prefer a native experience, Ashlr ships as a macOS desktop app built with Tauri v2. The app weighs 5.9MB, ships as a 3.1MB DMG, and includes system tray integration with agent count and fleet status. It manages the Python server as a sidecar process — starting, stopping, and monitoring it automatically. No Electron. No 200MB download.

Technical Decisions

Every tool reflects the tradeoffs its creators were willing to make. Here are ours.

Why Python and aiohttp

Ashlr needs async HTTP, WebSocket support, and good process management. Python's asyncio gives us all three. aiohttp is mature, fast enough for a local orchestration server, and doesn't bring a framework's worth of opinions. We use psutil for process and system metrics, aiosqlite for non-blocking database access, and asyncio.create_subprocess_exec for stream-json mode. The entire server is 22 modules totaling around 15,000 lines of Python.
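
The async subprocess pattern mentioned above, in miniature; `run_capture` is our name for the sketch, not an Ashlr function:

```python
import asyncio

async def run_capture(*cmd: str) -> str:
    """Run a command without blocking the event loop; return its stdout."""
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.DEVNULL,
    )
    out, _ = await proc.communicate()
    return out.decode()
```

Because the wait happens inside `communicate()`, other coroutines — WebSocket broadcasts, the poll loop — keep running while the subprocess works.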

Why tmux

Each AI coding agent is a CLI process that reads from stdin and writes to stdout. tmux gives us session isolation, the ability to capture pane output programmatically, and the ability to send keystrokes to a session — which is exactly what we need to respond to agent questions. It's battle-tested, available on every macOS and Linux system, and adds zero overhead. We also support a stream-json mode for Claude Code that bypasses tmux entirely and reads structured ndjson output from a subprocess, which is useful for non-interactive batch tasks.
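
Reading stream-json output reduces to ndjson parsing — one JSON object per line. The event shapes in the test below are illustrative, not the exact Claude Code schema:

```python
import json

def parse_stream_json(raw: str) -> list[dict]:
    """Parse newline-delimited JSON, tolerating blank or partial lines."""
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a line still being written mid-stream
    return events
```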

Why SQLite

Ashlr is local-first. A single SQLite file at ~/.ashlr/ashlr.db stores agent history, projects, workflows, fleet templates, and user accounts. No Postgres to configure. No Docker container to run. The database is created automatically on first launch. We use aiosqlite for async access and schema migrations are handled via a versioned migration system in the Database class.
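
The versioned-migration idea can be sketched with stdlib sqlite3 and SQLite's user_version pragma. The real server goes through aiosqlite, and this schema is invented for the example:

```python
import sqlite3

# Invented example migrations; the real schema lives in the Database class.
MIGRATIONS = [
    "CREATE TABLE agents (id TEXT PRIMARY KEY, role TEXT, status TEXT)",
    "ALTER TABLE agents ADD COLUMN branch TEXT",
]

def migrate(conn: sqlite3.Connection) -> int:
    """Apply any pending migrations; track progress in PRAGMA user_version."""
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    for step, sql in enumerate(MIGRATIONS[version:], start=version + 1):
        conn.execute(sql)
        conn.execute(f"PRAGMA user_version = {step}")
    conn.commit()
    return conn.execute("PRAGMA user_version").fetchone()[0]
```

Storing the version in the database itself makes migration idempotent: a second run sees the version is current and applies nothing.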

Why No Build Step

The dashboard is vanilla JavaScript. ES2022+ features, no transpilation, no bundling. The CSS uses custom properties and backdrop-filter for glassmorphism. Everything is served as static files by the aiohttp server. This means contributors can edit the dashboard with any text editor and refresh the browser to see changes. No npm install. No webpack config. No waiting for hot module replacement. Just HTML, CSS, and JavaScript.

By the Numbers

We believe in shipping with confidence, and that means testing extensively:

What's Next

Ashlr AO is production-ready today — we use it daily to orchestrate our own development. But we have plans for where to take it next:

Get Started

Ashlr AO is open source and free to use. Install it in one command:

pip install ashlr-ao
ashlr

That's it. The server starts, creates its config at ~/.ashlr/ashlr.yaml, and opens the dashboard at http://127.0.0.1:5111. You'll need Python 3.11+, tmux, and at least one AI coding agent backend installed (Claude Code, Codex, Aider, or Goose).

For the native macOS app, grab the DMG from the GitHub releases page.

We'd love to hear what you think. File issues, submit PRs, or just tell us how you're using it. The repo is at github.com/ashlrai/ashlr-ao.

Your AI agents deserve a command center. Stop juggling terminal tabs.