By default the Enterprise stack runs Postgres (Nebula app + Hatchet) and MinIO inside the deploy boundary. For any production AWS deployment we recommend backing these with RDS Postgres and real S3 instead: better durability, point-in-time recovery, IAM-driven credentials, and operational ownership by AWS. This page covers the AWS-side setup and the config knobs. The same managed-resource path works on both the Compose and EKS deploys.
RDS Postgres
Instance setup
- Engine: PostgreSQL 16 (Aurora PostgreSQL 16 also works)
- Databases: pre-create the two databases (`nebula` and `hatchet`) as the master user before bootstrap. The bundled `create-hatchet-db.sh` runs `createdb` as `HATCHET_POSTGRES_USER` against managed Postgres only as an idempotent no-op; it does NOT assume `CREATEDB` on the application role.
- Parameter group: `rds.extensions` must include `vector`. The database initialization path must run `CREATE EXTENSION IF NOT EXISTS vector` before API traffic reaches the cluster.
- Network: private subnet in the same VPC as the compose host or EKS cluster. The instance's security group must allow inbound on `:5432` from the application (host / cluster) security group.
- Storage: gp3, 100 GB minimum to start. Enable storage autoscaling up to ~1 TB; Nebula's working set scales linearly with the number of collections plus total ingested document volume.
- Backups: RDS automatic backups, 7-day retention minimum. PITR is enabled by default.
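The pgvector requirement above can be satisfied (and checked) with a short psql session against the pre-created `nebula` database before first boot; a minimal sketch:

```sql
-- Run as the master user (or any role permitted to create extensions)
-- before the app first connects. IF NOT EXISTS makes it safe to re-run.
CREATE EXTENSION IF NOT EXISTS vector;

-- Confirm the extension is present and note its version.
SELECT extname, extversion FROM pg_extension WHERE extname = 'vector';
```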
Database / user provisioning
Two logical databases on the same RDS instance, with separate Postgres users for blast-radius isolation. The `hatchet-create-db` one-shot expects to be able to create its database on first boot; if you've already created it above, the one-shot is a no-op (it checks for existence).
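A provisioning sketch under those constraints; the role names and passwords are illustrative (substitute whatever your env/Helm config references), and the `REVOKE`/`GRANT` pattern shown is one common way to keep each user scoped to its own database:

```sql
-- Illustrative role names; run as the RDS master user.
CREATE ROLE nebula_app  WITH LOGIN PASSWORD '<strong-password>';
CREATE ROLE hatchet_app WITH LOGIN PASSWORD '<strong-password>';

CREATE DATABASE nebula  OWNER nebula_app;
CREATE DATABASE hatchet OWNER hatchet_app;

-- Blast-radius isolation: each user can only connect to its own database.
REVOKE CONNECT ON DATABASE nebula  FROM PUBLIC;
REVOKE CONNECT ON DATABASE hatchet FROM PUBLIC;
GRANT  CONNECT ON DATABASE nebula  TO nebula_app;
GRANT  CONNECT ON DATABASE hatchet TO hatchet_app;
```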
Compose: wire up via .env.enterprise
Append to env/.env.enterprise:
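The override block itself is not reproduced here; the sketch below shows what it typically contains. `HATCHET_POSTGRES_USER` appears elsewhere on this page, but the other variable names are illustrative (`NEBULA_USE_EXTERNAL_POSTGRES` is assumed by analogy with `NEBULA_USE_EXTERNAL_S3`), so verify the exact keys against the bundled env template:

```bash
# Illustrative keys -- check the bundled env template for the exact names.
NEBULA_USE_EXTERNAL_POSTGRES=1
NEBULA_POSTGRES_HOST=<rds-endpoint>.rds.amazonaws.com
NEBULA_POSTGRES_PORT=5432
NEBULA_POSTGRES_DBNAME=nebula
NEBULA_POSTGRES_USER=nebula_app
NEBULA_POSTGRES_PASSWORD=<password>

HATCHET_POSTGRES_HOST=<rds-endpoint>.rds.amazonaws.com
HATCHET_POSTGRES_USER=hatchet_app
HATCHET_POSTGRES_PASSWORD=<password>
```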
bootstrap.sh auto-detects the override and skips the in-stack postgres + hatchet-postgres containers via compose profiles. No other changes required.
EKS: wire up via Helm values
In your `your-values.yaml` (copied from `helm/examples/eks/values.yaml`):
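A sketch of the shape such an override usually takes; the key layout here is illustrative (copy the authoritative structure from `helm/examples/eks/values.yaml`), with `credentialsSecret` pointing at a Kubernetes Secret holding the database credentials:

```yaml
# Illustrative layout -- mirror the real keys from helm/examples/eks/values.yaml.
postgres:
  external: true
  host: <rds-endpoint>.rds.amazonaws.com
  port: 5432
  database: nebula
  credentialsSecret: nebula-postgres-credentials   # keys: username, password
hatchet:
  postgres:
    host: <rds-endpoint>.rds.amazonaws.com
    database: hatchet
    credentialsSecret: hatchet-postgres-credentials
```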
`credentialsSecret` references a Kubernetes Secret with `username` and `password` keys (those exact lowercase key names — the chart reads them via `secretKeyRef.key: username` / `.key: password`). Create them via ESO (sync from AWS Secrets Manager) or `kubectl create secret generic <name> --from-literal=username=<u> --from-literal=password=<p>`.
S3 (object storage)
Bucket setup
- Region: same as the compose host / EKS cluster (cross-region adds latency to every snapshot read)
- Versioning: enabled (recommended; protects against accidental deletes)
- Encryption: SSE-S3 or SSE-KMS
- Public access: blocked at the account level
- Lifecycle: optional. Nebula doesn't expire its own objects, but you can set a policy on the `incomplete-multipart-uploads` prefix to clean up failed ingests after 7 days
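As a sketch, that optional lifecycle rule could be expressed like this (JSON for `aws s3api put-bucket-lifecycle-configuration`); the second rule additionally aborts stale multipart uploads bucket-wide, which is S3's standard mechanism for cleaning up failed uploads:

```json
{
  "Rules": [
    {
      "ID": "clean-failed-ingests",
      "Status": "Enabled",
      "Filter": { "Prefix": "incomplete-multipart-uploads/" },
      "Expiration": { "Days": 7 }
    },
    {
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```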
IAM policy
The principal accessing S3 (instance profile / ECS task role / EKS IRSA role / IAM user) needs read/write access to the bucket and, if the bucket uses SSE-KMS, `kms:Encrypt`, `kms:Decrypt`, and `kms:GenerateDataKey` on the KMS key ARN.
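The exact S3 action list is not spelled out above; a minimal policy sketch covering object read/write/delete, bucket listing, and (for SSE-KMS buckets) the KMS actions named above could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NebulaBucketObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:AbortMultipartUpload"],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    },
    {
      "Sid": "NebulaBucketList",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket>"
    },
    {
      "Sid": "KmsForSseKms",
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
    }
  ]
}
```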
Compose: wire up via .env.enterprise
Append to env/.env.enterprise:
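`NEBULA_USE_EXTERNAL_S3` and the empty `NEBULA_S3_ENDPOINT_URL` are the variables this page describes; the bucket/region key names in this sketch are illustrative, so verify them against the bundled env template:

```bash
NEBULA_USE_EXTERNAL_S3=1
# Deliberately empty: resolve the regional AWS endpoint, not in-stack MinIO.
NEBULA_S3_ENDPOINT_URL=
# Illustrative key names -- check the bundled env template.
NEBULA_S3_BUCKET=<your-bucket>
AWS_REGION=<region>
# Leave static AWS keys unset so boto3 falls back to the default
# credential chain (instance profile / ECS task role).
```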
An assignment of `=` with no value (e.g. `NEBULA_S3_ENDPOINT_URL=`) is intentional — it tells the AWS SDK to resolve the regional endpoint instead of falling back to the in-stack MinIO URL, and tells boto3 to use the default credential chain (instance profile, ECS task role) instead of static keys.
bootstrap.sh auto-detects NEBULA_USE_EXTERNAL_S3=1 and skips the in-stack minio + minio-init containers.
EKS: wire up via Helm values
The example values file at `helm/examples/eks/values.yaml` has this pre-wired:
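For orientation, that pre-wired section has roughly the following shape; treat the key names as illustrative (the authoritative version is the example file itself) and the IRSA annotation as the standard EKS mechanism for keyless S3 access:

```yaml
# Illustrative layout -- the authoritative version lives in
# helm/examples/eks/values.yaml.
s3:
  external: true
  bucket: <your-bucket>
  region: <region>
serviceAccount:
  annotations:
    # IRSA: pods assume this role instead of using static access keys.
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<nebula-s3-role>
```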
Sanity checks
After re-bootstrapping with managed resources, verify:

| Check | Command | Expected |
|---|---|---|
| App reaches RDS | `curl -fsS http://localhost:7272/v1/health` | Health endpoint returns OK |
| Migrations applied | Alembic head matches bundle | Tables `collections`, `memories`, `entities` exist in the `nebula` DB |
| pgvector loaded | `\dx vector` in psql | Shows `vector \| 0.x.x` |
| S3 writes succeed | Ingest a small doc | New objects appear under `<your-bucket>/nebula-graphs/` |
| Hatchet workflows run | http://localhost:7274 | Dashboard shows enqueued tasks |
Migrating an existing in-stack deploy to managed resources
If you've been running with in-stack Postgres + MinIO and want to move to RDS + S3 without losing data:

- Dump in-stack Postgres: `docker exec -t <postgres-container> pg_dumpall -U postgres > nebula-dump.sql`
- Restore into RDS: `psql -h <rds-endpoint> -U postgres < nebula-dump.sql`
- Copy MinIO contents to S3: `aws s3 sync s3://nebula-files/ s3://<your-bucket>/ --source-region us-east-1 --region us-east-1` (with appropriate MinIO/S3 credentials)
- Update `.env.enterprise` with the managed-resource overrides above
- Restart: `./enterprise/bootstrap.sh`; the script will pick up the overrides and skip the in-stack services