The Compose deploy is the right path for air-gapped, single-VM, and POC / evaluation deployments — anywhere standing up a Kubernetes cluster is overkill or impossible. The bundle is identical regardless of which path you choose: same application binaries, same database schema, same release-contract attestations. Migrating from Compose to EKS later does not require a data re-import.
## Prereqs
- Linux host with `docker` (24+) and `docker compose` (v2.30+, ideally v2.40+ for healthcheck and profile semantics)
- ~50 GB free disk (graph-engine WAL + compactor scratch + Postgres + RabbitMQ)
- 16 GB RAM minimum, 32 GB recommended for production
- At least one LLM provider key (OpenAI, Anthropic, or Azure OpenAI). For air-gapped deploys with no cloud LLM, use the `compose.vllm.yaml` overlay (below).
`enterprise/bootstrap.sh` auto-detects which in-stack services to skip based on the `*_HOST` values in your `.env.enterprise`.
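For example, a minimal `.env.enterprise` fragment that externalizes the application database might look like the following sketch. The exact variable name here is illustrative; only the `*_HOST` convention is documented above:

```shell
# env/.env.enterprise fragment (sketch). Pointing a *_HOST variable at an
# off-stack endpoint makes bootstrap.sh skip the corresponding in-stack
# service. The variable name below is a hypothetical example of the pattern.
NEBULA_POSTGRES_HOST=nebula-prod.abc123.us-east-1.rds.amazonaws.com
```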
## Install
### What runs in the stack
| Service | Role | Externalizable? |
|---|---|---|
| `nebula` | API + ingest pipeline | — |
| `nebula-worker` | Hatchet worker pool | — |
| `graph-engine` | Rust graph + vector store | — |
| `postgres` | Nebula application database | Yes → RDS |
| `hatchet-postgres` | Hatchet workflow database | Yes → RDS (same instance, separate DB) |
| `minio` + `minio-init` | S3-compatible object storage | Yes → S3 |
| `hatchet-engine` + `hatchet-dashboard` | Hatchet control plane | No |
| `hatchet-rabbitmq` | Hatchet task queue | No |
## Air-gapped: local LLM with vLLM
For deployments with no internet egress, stack the `compose.vllm.yaml` overlay on top of `compose.enterprise.yaml` by passing `--overlay` to `bootstrap.sh`.
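A sketch of the invocation, assuming `--overlay` takes the overlay filename as its argument (the exact argument form may differ in your bundle):

```shell
# Hypothetical invocation sketch: the --overlay flag is described above, but
# verify its argument syntax against your bundle's bootstrap.sh --help.
./enterprise/bootstrap.sh --overlay compose.vllm.yaml
```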
Going through `bootstrap.sh` (rather than calling `docker compose -f compose.enterprise.yaml -f compose.vllm.yaml up -d` directly) is required so the in-stack `postgres`, `hatchet-postgres`, and `minio` services activate via `COMPOSE_PROFILES` — those services are profile-gated and won't start under a raw `docker compose up`.
The overlay runs an in-stack OpenAI-compatible vLLM server and flips every Nebula service to `NEBULA_CONFIG_NAME=onprem_local`, which routes all completion and embedding calls to `http://vllm:8000/v1`. No `OPENAI_API_KEY` required.
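Once the stack is up, you can sanity-check the endpoint from inside the stack network; OpenAI-compatible servers such as vLLM expose a `/v1/models` listing:

```shell
# Quick sanity check of the in-stack vLLM server. Run from a container on the
# stack network (e.g. via `docker compose exec nebula sh`); the hostname
# `vllm` only resolves there.
curl -s http://vllm:8000/v1/models
```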
GPU prereqs: NVIDIA Container Toolkit installed on the host; a GPU with at least 24 GB VRAM for the default Qwen3-VL model.
## Upgrade

## Stopping the stack

## Troubleshooting
### bootstrap.sh fails on 'NEBULA_POSTGRES_PASSWORD must be set'
`enterprise/generate-secrets.sh` didn't run, or `env/.env.enterprise` is missing the secrets section. Re-run `./enterprise/generate-secrets.sh ./env/.env.enterprise` and try again. The script refuses to overwrite an existing file, so delete it first if you intend to regenerate (warning: this rotates every secret).
### catalog-bootstrap fails with 'pgvector extension not available'
The bundled `pgvector/pgvector:0.8.0-pg16` image always carries the extension, so this only happens if you've pointed at an external Postgres that doesn't have pgvector enabled. On RDS, add `vector` to `rds.extensions` in the parameter group; on a vanilla Postgres, install the extension package on the server.
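After the package is installed on the server, the extension still has to be created in the Nebula database. A sketch, with a placeholder connection string (not needed for the bundled image):

```shell
# Enable pgvector in the target database. $PG_CONN is a hypothetical
# placeholder for your external Postgres connection string; the
# CREATE EXTENSION statement itself is standard pgvector setup.
psql "$PG_CONN" -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```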
### graph-engine fails to start, logs show 'S3 access denied'
For in-stack MinIO: check that `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `.env.enterprise` match `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD`. `generate-secrets.sh` populates all four to the same value; if you edited any of them manually, restore the match.
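The pairwise check can be scripted. A minimal POSIX-shell sketch (the function name is ours; the four variable names are the ones listed above):

```shell
# check_minio_creds: report whether the AWS_* values in an env file still
# match the MINIO_ROOT_* values (generate-secrets.sh writes all four
# identically). The function name is illustrative, not part of the bundle.
check_minio_creds() {
  f="$1"
  a=$(grep -m1 '^AWS_ACCESS_KEY_ID=' "$f" | cut -d= -f2-)
  s=$(grep -m1 '^AWS_SECRET_ACCESS_KEY=' "$f" | cut -d= -f2-)
  u=$(grep -m1 '^MINIO_ROOT_USER=' "$f" | cut -d= -f2-)
  p=$(grep -m1 '^MINIO_ROOT_PASSWORD=' "$f" | cut -d= -f2-)
  if [ "$a" = "$u" ] && [ "$s" = "$p" ]; then
    echo "match"
  else
    echo "mismatch"
  fi
}
# usage: check_minio_creds env/.env.enterprise
```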
### Hatchet dashboard loads but shows 'connection refused'
The Hatchet engine takes 30-60s to fully start after migrations. Wait for `docker compose -f compose.enterprise.yaml logs -f hatchet-engine` to show `successfully booted`, then refresh.
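If you'd rather script the wait than watch the logs, a simple polling sketch using the log text quoted above:

```shell
# Poll until hatchet-engine has logged its ready message, then notify.
# Assumes the 'successfully booted' log line described in this guide.
until docker compose -f compose.enterprise.yaml logs hatchet-engine 2>/dev/null \
    | grep -q 'successfully booted'; do
  sleep 5
done
echo "hatchet-engine is up; refresh the dashboard"
```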