Docker Deployment
Deploy Stevora with Docker on your infrastructure
Stevora ships with a production-ready Docker setup that runs two containers -- an API server (Fastify) and a background worker (BullMQ) -- backed by PostgreSQL and Redis. This guide covers building the image, configuring docker-compose, deploying to a VPS or EC2 instance, and verifying health.
Architecture
┌──────────────────────┐
│ Load Balancer │
│ (Nginx / ALB / CF) │
└──────────┬───────────┘
│
┌────────────────┼────────────────┐
│ │ │
┌────────▼──────┐ ┌─────▼──────┐ ┌──────▼──────┐
│ stevora-api │ │ stevora- │ │ stevora- │
│ (port 3000) │ │ worker │ │ worker-2 │
│ Fastify REST │ │ BullMQ │ │ (optional) │
└───────┬────────┘ └─────┬──────┘ └──────┬──────┘
│ │ │
┌───────▼─────────────────▼─────────────────▼──────┐
│ PostgreSQL │
│ (workflow state, definitions) │
└──────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────┐
│ Redis │
│ (BullMQ job queues) │
└──────────────────────────────────────────────────┘

The API server handles REST requests -- creating workflow definitions, starting runs, listing approvals, and serving health checks. The worker processes BullMQ jobs -- executing steps, calling LLMs, managing retries, and delivering webhooks. Both containers share the same Docker image but run different entrypoints.
Prerequisites
- Docker 20.10+ and Docker Compose v2
- A PostgreSQL 15+ instance (self-hosted, RDS, Neon, or Supabase)
- A Redis 7+ instance (self-hosted, ElastiCache, or Upstash)
- At least 1 GB of RAM for the API + worker containers
Building the Docker Image
Stevora uses a multi-stage Dockerfile based on node:20-alpine. The build compiles TypeScript with tsup and generates the Prisma client.
FROM node:20-alpine AS base
WORKDIR /app
RUN apk add --no-cache wget
FROM base AS deps
COPY package.json ./
RUN npm install --ignore-scripts
COPY prisma ./prisma
RUN npx prisma generate
FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npx tsup src/server.ts src/worker.ts --format esm
FROM base AS runner
ENV NODE_ENV=production
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/prisma ./prisma
COPY package.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]

Build the image:
docker build -t stevora:latest -f docker/Dockerfile .

The final image is around 200 MB and includes only the compiled JavaScript, Prisma client, and production dependencies.
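If you build images in CI, also tagging each build with the commit SHA makes rollbacks straightforward. This is a convention, not something the Dockerfile requires:

```shell
# Tag the image with both "latest" and the current commit for easy rollback
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t stevora:latest -t "stevora:${GIT_SHA}" -f docker/Dockerfile .
```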
Docker Compose Setup
The production compose file runs the API server and worker as separate services from the same image. Each uses a different entrypoint command.
services:
api:
build:
context: ..
dockerfile: docker/Dockerfile
container_name: stevora-api
command: ["node", "dist/server.js"]
restart: always
ports:
- "3000:3000"
env_file:
- ../.env.production
environment:
NODE_ENV: production
PORT: 3000
HOST: 0.0.0.0
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
worker:
build:
context: ..
dockerfile: docker/Dockerfile
container_name: stevora-worker
command: ["node", "dist/worker.js"]
restart: always
env_file:
- ../.env.production
environment:
NODE_ENV: production
depends_on:
api:
        condition: service_healthy

The worker depends on api with condition: service_healthy. This ensures the API server is up and passing health checks before the worker starts processing jobs.
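One related setting worth considering: when the worker container stops, BullMQ jobs may still be in flight. Compose's stop_grace_period extends the window between SIGTERM and SIGKILL so those jobs can finish (the 30s value below is an illustrative choice, not a Stevora default):

```yaml
  worker:
    # give in-flight jobs time to complete before the container is killed
    stop_grace_period: 30s
```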
Adding PostgreSQL and Redis
If you want to run the full stack locally or on a single VPS (without managed services), add PostgreSQL and Redis to the compose file:
services:
postgres:
image: postgres:16-alpine
container_name: stevora-postgres
restart: always
ports:
- "5432:5432"
environment:
POSTGRES_USER: stevora
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-stevora_secure_pw}
POSTGRES_DB: stevora
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U stevora"]
interval: 10s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
container_name: stevora-redis
restart: always
ports:
- "6379:6379"
command: ["redis-server", "--requirepass", "${REDIS_PASSWORD:-stevora_redis_pw}", "--maxmemory", "256mb", "--maxmemory-policy", "noeviction"]
volumes:
- redisdata:/data
healthcheck:
test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD:-stevora_redis_pw}", "ping"]
interval: 10s
timeout: 5s
retries: 5
api:
build:
context: ..
dockerfile: docker/Dockerfile
container_name: stevora-api
command: ["node", "dist/server.js"]
restart: always
ports:
- "3000:3000"
environment:
NODE_ENV: production
PORT: 3000
HOST: 0.0.0.0
DATABASE_URL: postgresql://stevora:${POSTGRES_PASSWORD:-stevora_secure_pw}@postgres:5432/stevora?schema=public
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_PASSWORD: ${REDIS_PASSWORD:-stevora_redis_pw}
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
worker:
build:
context: ..
dockerfile: docker/Dockerfile
container_name: stevora-worker
command: ["node", "dist/worker.js"]
restart: always
environment:
NODE_ENV: production
DATABASE_URL: postgresql://stevora:${POSTGRES_PASSWORD:-stevora_secure_pw}@postgres:5432/stevora?schema=public
REDIS_HOST: redis
REDIS_PORT: 6379
REDIS_PASSWORD: ${REDIS_PASSWORD:-stevora_redis_pw}
depends_on:
api:
condition: service_healthy
volumes:
pgdata:
  redisdata:

The noeviction policy on Redis is important. BullMQ stores job data in Redis, and evicting keys would cause jobs to silently disappear. Always use noeviction for queue workloads.
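You can confirm the policy on the running container (using the container name from the compose file above):

```shell
docker exec stevora-redis redis-cli -a "$REDIS_PASSWORD" config get maxmemory-policy
```

The reply should list maxmemory-policy followed by noeviction; anything else (such as allkeys-lru) puts queued jobs at risk.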
Deploying to a VPS or EC2
Provision a server
Launch an EC2 instance (or any VPS) with at least 2 vCPUs and 2 GB RAM. Ubuntu 22.04 or Amazon Linux 2023 are recommended. Open ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) in your security group.
# SSH into your server
ssh -i your-key.pem ubuntu@your-server-ip

Install Docker
# Ubuntu
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in, then verify
docker --version
docker compose version

Clone and configure
git clone https://github.com/your-org/stevora.git
cd stevora
# Create your production environment file
cp .env.production.example .env.production

Edit .env.production with your actual values. See the Environment Variables reference for every option.
NODE_ENV=production
PORT=3000
HOST=0.0.0.0
LOG_LEVEL=info
# PostgreSQL -- use your managed database URL or the docker-internal address
DATABASE_URL=postgresql://stevora:your_password@postgres:5432/stevora?schema=public
# Redis -- use your managed Redis or the docker-internal address
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Security
ADMIN_TOKEN=your_admin_token_here

Run database migrations
Before starting the services, apply the Prisma schema to your database:
# If using the full-stack compose (with built-in PostgreSQL), start just the database first
docker compose -f docker/docker-compose.full.yml up -d postgres
sleep 5
# Run migrations
docker compose -f docker/docker-compose.full.yml run --rm api npx prisma migrate deploy
# Seed the database (creates a demo workspace and API key)
docker compose -f docker/docker-compose.full.yml run --rm api npx prisma db seed

Start all services
# Full stack (PostgreSQL + Redis + API + Worker)
docker compose -f docker/docker-compose.full.yml up -d --build
# Or, if using managed PostgreSQL and Redis
docker compose -f docker/docker-compose.prod.yml up -d --build

Verify everything is running:
docker compose -f docker/docker-compose.full.yml ps

Expected output:
NAME STATUS PORTS
stevora-api Up 30 seconds (healthy) 0.0.0.0:3000->3000/tcp
stevora-worker Up 25 seconds
stevora-postgres Up 35 seconds (healthy) 0.0.0.0:5432->5432/tcp
stevora-redis      Up 35 seconds (healthy)    0.0.0.0:6379->6379/tcp

Verify the deployment
# Health check
curl http://localhost:3000/health
# Expected: {"status":"ok"}

Test with a workflow run:
# Replace with your API key from the seed output
curl -X POST http://localhost:3000/v1/workflow-runs \
-H "x-api-key: stv_k1_your_key" \
-H "Content-Type: application/json" \
-d '{
"definitionId": "YOUR_DEF_ID",
"input": {"prospectName": "Test", "company": "Acme", "email": "test@acme.com"}
  }'

Health Checks
The API server exposes a GET /health endpoint that returns {"status":"ok"} with a 200 status code when the server is ready to accept requests.
The Docker health check is configured to call this endpoint every 30 seconds:
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 5s
retries: 3
  start_period: 10s

| Parameter | Value | Description |
|---|---|---|
| interval | 30s | Time between health checks |
| timeout | 5s | Max time to wait for a response |
| retries | 3 | Failures before marking unhealthy |
| start_period | 10s | Grace period after container starts |
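Docker records the outcome of these checks on the container itself; you can query the current state directly:

```shell
docker inspect --format '{{.State.Health.Status}}' stevora-api
```

The status is starting during the start_period, then healthy or unhealthy.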
If you are running behind a load balancer (ALB, Nginx, Caddy), point its health check at the same /health endpoint.
Putting Nginx in Front
For production, place a reverse proxy in front of the API to handle TLS termination, rate limiting, and request buffering.
server {
listen 80;
server_name stevora.yourdomain.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Support long-polling and streaming
proxy_read_timeout 300s;
proxy_buffering off;
}
}

Add TLS with Let's Encrypt:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d stevora.yourdomain.com

Scaling Workers
The worker is stateless -- you can run multiple instances to increase throughput. Each worker picks jobs from the same BullMQ queue in Redis, so work is automatically distributed.
# Add more worker replicas in docker-compose
worker:
build:
context: ..
dockerfile: docker/Dockerfile
command: ["node", "dist/worker.js"]
restart: always
deploy:
replicas: 3
env_file:
- ../.env.production
environment:
    NODE_ENV: production

Or run workers on separate machines, all pointing to the same PostgreSQL and Redis instances.
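Alternatively, scale a running deployment from the command line. Note that --scale conflicts with a fixed container_name, so drop that field from the worker service before using it:

```shell
docker compose -f docker/docker-compose.prod.yml up -d --scale worker=3
```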
Viewing Logs
# All services
docker compose -f docker/docker-compose.full.yml logs -f
# API server only
docker compose -f docker/docker-compose.full.yml logs -f api
# Worker only
docker compose -f docker/docker-compose.full.yml logs -f worker
# Last 100 lines
docker compose -f docker/docker-compose.full.yml logs --tail=100 worker

Stevora uses structured JSON logging via Pino. Set LOG_LEVEL in your environment to control verbosity (fatal, error, warn, info, debug, trace).
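Because Pino writes one JSON object per line, the output pipes cleanly into jq for filtering (assuming jq is installed; Pino's numeric levels put warn at 40 and error at 50). The sample line here is illustrative:

```shell
# Filter a sample Pino log line down to warn-and-above messages
echo '{"level":50,"time":1700000000000,"msg":"step failed"}' \
  | jq -r 'select(.level >= 40) | .msg'
```

The same filter works on live output from docker compose logs --no-log-prefix api.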
Updating
To deploy a new version:
cd stevora
git pull origin main
# Rebuild and recreate changed containers (expect a brief API restart;
# for true zero downtime, run multiple replicas behind a load balancer)
docker compose -f docker/docker-compose.full.yml up -d --build
# Run any new migrations
docker compose -f docker/docker-compose.full.yml run --rm api npx prisma migrate deploy

The restart: always policy ensures containers come back up after a reboot. The worker's depends_on health check ensures it waits for the API to be ready before starting.
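Repeated --build runs leave dangling image layers behind; reclaim disk space occasionally with:

```shell
docker image prune -f
```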
Next Steps
- Environment Variables -- Full reference for all configuration options
- API Reference -- REST API documentation
- Examples -- Run the AI SDR workflow on your deployment