Deployment
Amurg is designed for self-hosted deployment with zero external dependencies. This guide covers local development, Docker Compose, and production deployment with reverse proxy configuration.
Prerequisites
| Component | Requirement |
|---|---|
| Go | 1.25+ (for hub and runtime) |
| Node.js | 20+ (for UI build only) |
| Docker | 20+ with Compose v2 (for containerized deployment) |
| OS | Linux, macOS, or Windows (Go compiles to static binaries) |
Local Development
Build from source
```shell
# Clone the repo
git clone https://github.com/amurg-ai/amurg.git
cd amurg

# Build all binaries
make build

# Or build individually
go build -o bin/amurg-hub ./hub/cmd/amurg-hub
go build -o bin/amurg-runtime ./runtime/cmd/amurg-runtime
```
Run the hub
```shell
# Start the hub (default port 8090 for local dev)
./bin/amurg-hub -config deploy/hub-config.local.json
```
The default local config uses in-memory SQLite, creates an admin/admin user, and listens on port 8090.
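For orientation, a local hub config along those lines might look like the sketch below. The key names here are illustrative assumptions, not the authoritative schema; consult deploy/hub-config.local.json in the repository for the real field names.

```json
{
  "server":   { "listen_addr": ":8090" },
  "database": { "path": ":memory:" },
  "auth":     { "initial_admin": { "username": "admin", "password": "admin" } }
}
```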
Run the runtime
```shell
# Start a runtime with a local config
./bin/amurg-runtime -config deploy/runtime-config.local.json
```
Run the UI (dev mode)
```shell
cd ui
npm install
npm run dev
# Vite dev server on http://localhost:3000
# Proxies /api and /ws to hub on port 8090
```
Docker Compose
Docker Compose is the quickest way to run a complete Amurg stack. The compose file defines hub and runtime services with health checks and resource limits.
```yaml
services:
  hub:
    build:
      context: .
      dockerfile: hub/deploy/Dockerfile
    ports:
      - "8080:8080"
    volumes:
      - ./hub/deploy/config.example.json:/etc/amurg/config.json:ro
      - hub-data:/var/lib/amurg/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 5s
      start_period: 10s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
    networks:
      - frontend
      - backend

  runtime:
    build:
      context: .
      dockerfile: runtime/deploy/Dockerfile
    volumes:
      - ./runtime/deploy/config.example.json:/etc/amurg/config.json:ro
    depends_on:
      hub:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "2.0"
    networks:
      - backend

networks:
  frontend:
  backend:

volumes:
  hub-data:
```

```shell
# Start the stack
docker compose up -d

# View logs
docker compose logs -f hub
docker compose logs -f runtime

# Stop
docker compose down
```
Network Separation
The hub is on both frontend and backend networks. The runtime is on backend only, meaning it is not directly accessible from outside the Docker network.
Production Checklist
| Item | Details |
|---|---|
| TLS Termination | Use a reverse proxy (nginx, Caddy) with valid certificates. Never expose the hub on plain HTTP in production. |
| JWT Secret | Set auth.jwt_secret to a strong random string (32+ characters). Do not use the default. |
| Runtime Token | Generate a unique runtime_token for each runtime. Rotate periodically. |
| Admin Password | Change the initial_admin password or remove the bootstrap config after first login. |
| Database | Use a persistent SQLite path (not :memory:). Back up the .db file regularly. |
| CORS Origins | Set server.allowed_origins to your actual domain(s). Do not use ["*"] in production. |
| Rate Limiting | Tune rate_limit.requests_per_second and rate_limit.burst for your expected load. |
| File Storage | Set server.file_storage_path to a persistent volume. Default is ./amurg-files. |
| Logging | Set logging.format to "json" for structured log aggregation. Set level to "warn" or "info". |
| Idle Timeout | Configure session.idle_timeout to prevent abandoned sessions from consuming resources (default: 30m). |
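For the JWT secret and runtime tokens, any sufficiently long cryptographically random string works. One way to generate one on Linux or macOS (assuming /dev/urandom and base64 are available; this is a generic recipe, not an Amurg-specific tool):

```shell
# Read 48 random bytes and base64-encode them (64 characters of output)
head -c 48 /dev/urandom | base64 | tr -d '\n'; echo
```

Paste the resulting string into auth.jwt_secret, and repeat per runtime for each runtime_token.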
Network Architecture
```
                 Internet
                    |
               [TLS Proxy]
              (nginx/Caddy)
                    |
               port 443/80
                    |
+-----------------------------------+
|             Amurg Hub             |
|          (:8080 internal)         |
|                                   |
|  /ws/client   <-- UI clients      |
|  /ws/runtime  <-- runtimes        |
|  /api/*       <-- REST API        |
|  /*           <-- UI static files |
+--------+---------+---------+------+
         |         |         |
     +---+--+   +--+--+  +---+----+
     |SQLite|   |Files|  | Audit  |
     | .db  |   |Store|  |  Log   |
     +------+   +-----+  +--------+

          *** backend network ***

+------------------+  +-------------------+
|    Runtime A     |  |     Runtime B     |
|  (outbound WS)   |  |   (outbound WS)   |
|                  |  |                   |
| +------+ +-----+ |  | +-------+ +-----+ |
| |claude| | job | |  | |copilot| |codex| |
| | code | |     | |  | |  CLI  | | CLI | |
| +------+ +-----+ |  | +-------+ +-----+ |
+------------------+  +-------------------+
```
Outbound Only
Runtimes connect outbound to the hub. No inbound ports need to be opened on the runtime side, making deployment behind NAT/firewalls straightforward.
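A runtime config for this topology therefore needs only the hub's public WebSocket URL and the runtime's token. The key names below are illustrative assumptions; see runtime/deploy/config.example.json in the repository for the real schema.

```json
{
  "hub_url": "wss://amurg.example.com/ws/runtime",
  "runtime_token": "paste-the-token-generated-for-this-runtime"
}
```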
Reverse Proxy
A reverse proxy handles TLS termination, WebSocket upgrades, and optional rate limiting. Below are configurations for the two most common options.
nginx
```nginx
server {
    listen 443 ssl http2;
    server_name amurg.example.com;

    ssl_certificate     /etc/letsencrypt/live/amurg.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/amurg.example.com/privkey.pem;

    # WebSocket support
    location /ws/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    # API and static files
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
WebSocket Timeouts
Set proxy_read_timeout high (3600s+) for WebSocket connections. The default 60s will disconnect idle runtime connections.
Caddy
```
amurg.example.com {
    reverse_proxy localhost:8080
}
```
Caddy automatically provisions TLS certificates via Let's Encrypt and handles WebSocket upgrades without additional configuration.
Monitoring
Health checks
Use the built-in health endpoints for liveness and readiness probes:
| Endpoint | Purpose | Use For |
|---|---|---|
| /healthz | Reports uptime; always 200 if the process is running | Kubernetes liveness probe, Docker HEALTHCHECK |
| /readyz | Pings the database; returns 503 if not ready | Kubernetes readiness probe, load balancer health checks |
Structured logging
The hub and runtime use Go's slog for structured JSON logging. Set logging.format: "json" for integration with log aggregation tools (ELK, Loki, CloudWatch).
```json
{
  "time": "2024-01-15T10:30:00Z",
  "level": "INFO",
  "msg": "session created",
  "component": "router",
  "session_id": "session-uuid",
  "endpoint_id": "ep-uuid",
  "user_id": "user-uuid"
}
```
Audit log
The hub persists audit events to the database. Query them via the admin API endpoint GET /api/admin/audit. See the API Reference for the full list of event types and filtering options.
Kubernetes deployment
For Kubernetes, the hub runs as a Deployment with a PersistentVolumeClaim for the SQLite database and file storage. Runtimes can run as sidecar containers, DaemonSets, or standalone Deployments depending on your topology. Key considerations:
- Use /healthz for the livenessProbe and /readyz for the readinessProbe
- SQLite requires a ReadWriteOnce volume; the hub cannot scale horizontally with SQLite
- Runtimes are stateless and can scale freely
- Set resource requests/limits similar to the Docker Compose example (hub: 512MB/1CPU, runtime: 1GB/2CPU)
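Putting those considerations together, a minimal hub Deployment might look like the sketch below. The image name, mount path, and PVC name are assumptions to adapt to your cluster; the probe endpoints and resource figures come from this guide.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amurg-hub
spec:
  replicas: 1                       # SQLite: never scale the hub beyond 1
  selector:
    matchLabels:
      app: amurg-hub
  template:
    metadata:
      labels:
        app: amurg-hub
    spec:
      containers:
        - name: hub
          image: amurg-hub:latest   # assumed image name
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
          resources:
            limits:
              memory: 512Mi
              cpu: "1"
          volumeMounts:
            - name: data
              mountPath: /var/lib/amurg/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: amurg-hub-data   # assumed PVC name (ReadWriteOnce)
```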