Docker Compose¶
Deploy Semaphore Chat with Docker Compose — from first launch to production.
Prerequisites¶
- Docker (v20+) and Docker Compose (v2+)
- A domain name pointing to your server (e.g. `semaphore.example.com`)
Choose your setup¶
Three options depending on your existing infrastructure:
- Includes Caddy as a reverse proxy with automatic HTTPS via Let's Encrypt, plus a bundled LiveKit server. Everything you need in one Compose file.
- Bundles a LiveKit server for voice and video, but no reverse proxy — use this if you already run one (nginx, NPM, Traefik, etc.).
- No reverse proxy, no LiveKit — use this if you already run both. Voice/video are disabled until you add your LiveKit credentials.
Mixing and matching
These are starting points. The core services (backend, frontend, postgres, redis) are the same across all three. The differences are whether Caddy and/or LiveKit are included. You can add Caddy from the first tab to either of the other setups, or remove LiveKit from the first tab if you bring your own.
Install¶
1. Create the Compose file¶
Download the IP watcher script (used by the "With Caddy" and "Batteries included" setups to restart LiveKit when your public IP changes):
mkdir -p scripts
curl -fsSL https://raw.githubusercontent.com/semaphore-chat/semaphore-chat/main/scripts/livekit-ip-watcher.sh -o scripts/livekit-ip-watcher.sh
Copy the Compose file for your chosen setup:
services:
caddy:
image: caddy:latest
restart: unless-stopped
ports:
- "443:443"
- "80:80"
environment:
HOST: ${HOST:?Set HOST in .env}
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
depends_on:
- frontend
- backend
- livekit
backend:
image: ghcr.io/semaphore-chat/semaphore-backend:latest
restart: unless-stopped
environment:
DATABASE_URL: postgresql://semaphore:semaphore@postgres:5432/semaphore
REDIS_HOST: redis
JWT_SECRET: ${JWT_SECRET:?Set JWT_SECRET in .env}
JWT_REFRESH_SECRET: ${JWT_REFRESH_SECRET:?Set JWT_REFRESH_SECRET in .env}
LIVEKIT_URL: wss://lk.${HOST:?Set HOST in .env}
LIVEKIT_INTERNAL_URL: http://livekit:7880
LIVEKIT_API_KEY: ${LIVEKIT_API_KEY:?Set LIVEKIT_API_KEY in .env}
LIVEKIT_API_SECRET: ${LIVEKIT_API_SECRET:?Set LIVEKIT_API_SECRET in .env}
REPLAY_SEGMENTS_PATH: /app/storage/replay-segments
REPLAY_EGRESS_OUTPUT_PATH: /out
TRUST_PROXY: 1
volumes:
- uploads:/app/backend/uploads
- egress-data:/app/storage/replay-segments # shared with livekit-egress
depends_on:
volume-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
redis:
condition: service_healthy
livekit:
condition: service_started
frontend:
image: ghcr.io/semaphore-chat/semaphore-frontend:latest
restart: unless-stopped
environment:
BACKEND_URL: http://backend:3000
depends_on:
- backend
livekit:
image: livekit/livekit-server:latest
restart: unless-stopped
environment:
LIVEKIT_CONFIG: |
port: 7880
rtc:
tcp_port: 7881
udp_port: 7882
use_external_ip: true
redis:
address: redis:6379
keys:
${LIVEKIT_API_KEY}: ${LIVEKIT_API_SECRET}
webhook:
api_key: ${LIVEKIT_API_KEY}
urls:
- http://backend:3000/api/livekit/webhook
ports:
- "7881:7881"
- "7882:7882/udp"
volume-init:
image: busybox
volumes:
- uploads:/uploads
- egress-data:/out
command: sh -c 'chown -R 1001:0 /uploads /out'
restart: "no"
livekit-egress:
image: livekit/egress:latest
restart: unless-stopped
cap_add:
- SYS_ADMIN
environment:
EGRESS_CONFIG_BODY: |
api_key: ${LIVEKIT_API_KEY}
api_secret: ${LIVEKIT_API_SECRET}
ws_url: ws://livekit:7880
redis:
address: redis:6379
volumes:
- egress-data:/out
depends_on:
volume-init:
condition: service_completed_successfully
livekit:
condition: service_started
redis:
condition: service_healthy
livekit-ip-watcher:
image: alpine:latest
restart: unless-stopped
command: sh /scripts/livekit-ip-watcher.sh
environment:
CHECK_INTERVAL: 300
LIVEKIT_CONTAINER: livekit
volumes:
- ./scripts/livekit-ip-watcher.sh:/scripts/livekit-ip-watcher.sh:ro
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
livekit:
condition: service_started
postgres:
image: postgres:17-alpine
restart: unless-stopped
environment:
POSTGRES_USER: semaphore
POSTGRES_PASSWORD: semaphore
POSTGRES_DB: semaphore
healthcheck:
test: ["CMD-SHELL", "pg_isready -U semaphore"]
interval: 5s
timeout: 5s
retries: 10
volumes:
- pgdata:/var/lib/postgresql/data
redis:
image: redis:latest
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 10s
retries: 10
volumes:
- redisdata:/data
volumes:
pgdata:
redisdata:
uploads:
egress-data:
caddy_data:
caddy_config:
You also need a Caddyfile next to your docker-compose.yml:
{$HOST} {
handle /api/* {
reverse_proxy backend:3000
}
handle /socket.io/* {
reverse_proxy backend:3000
}
handle {
reverse_proxy frontend:5173
}
}
lk.{$HOST} {
reverse_proxy livekit:7880
}
What's in this setup:
- Caddy handles TLS automatically via Let's Encrypt — routes `/api/*` and `/socket.io/*` to the backend, everything else to the frontend, and the `lk.` subdomain to LiveKit signaling
- LiveKit uses `udp_port` to multiplex all WebRTC UDP traffic through a single port (no user cap, no port range to forward)
- `use_external_ip: true` — LiveKit discovers its public IP via STUN
- IP watcher — monitors your public IP and restarts LiveKit if it changes (important for dynamic IPs)
- `LIVEKIT_INTERNAL_URL` — the backend uses this Docker-internal address for server-to-server API calls, while `LIVEKIT_URL` is the browser-facing address returned to clients
- The LiveKit API key/secret are shared between the backend and LiveKit — the backend uses them to generate tokens and LiveKit uses them to sign webhook payloads
services:
backend:
image: ghcr.io/semaphore-chat/semaphore-backend:latest
restart: unless-stopped
ports:
- "3000:3000"
environment:
DATABASE_URL: postgresql://semaphore:semaphore@postgres:5432/semaphore
REDIS_HOST: redis
JWT_SECRET: ${JWT_SECRET:?Set JWT_SECRET in .env}
JWT_REFRESH_SECRET: ${JWT_REFRESH_SECRET:?Set JWT_REFRESH_SECRET in .env}
LIVEKIT_URL: wss://lk.${HOST:?Set HOST in .env}
LIVEKIT_INTERNAL_URL: http://livekit:7880
LIVEKIT_API_KEY: ${LIVEKIT_API_KEY:?Set LIVEKIT_API_KEY in .env}
LIVEKIT_API_SECRET: ${LIVEKIT_API_SECRET:?Set LIVEKIT_API_SECRET in .env}
REPLAY_SEGMENTS_PATH: /app/storage/replay-segments
REPLAY_EGRESS_OUTPUT_PATH: /out
volumes:
- uploads:/app/backend/uploads
- egress-data:/app/storage/replay-segments # shared with livekit-egress
depends_on:
volume-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
redis:
condition: service_healthy
livekit:
condition: service_started
frontend:
image: ghcr.io/semaphore-chat/semaphore-frontend:latest
restart: unless-stopped
ports:
- "5173:5173"
environment:
BACKEND_URL: http://backend:3000
depends_on:
- backend
livekit:
image: livekit/livekit-server:latest
restart: unless-stopped
environment:
LIVEKIT_CONFIG: |
port: 7880
rtc:
tcp_port: 7881
udp_port: 7882
use_external_ip: true
redis:
address: redis:6379
keys:
${LIVEKIT_API_KEY}: ${LIVEKIT_API_SECRET}
webhook:
api_key: ${LIVEKIT_API_KEY}
urls:
- http://backend:3000/api/livekit/webhook
ports:
- "7880:7880"
- "7881:7881"
- "7882:7882/udp"
volume-init:
image: busybox
volumes:
- uploads:/uploads
- egress-data:/out
command: sh -c 'chown -R 1001:0 /uploads /out'
restart: "no"
livekit-egress:
image: livekit/egress:latest
restart: unless-stopped
cap_add:
- SYS_ADMIN
environment:
EGRESS_CONFIG_BODY: |
api_key: ${LIVEKIT_API_KEY}
api_secret: ${LIVEKIT_API_SECRET}
ws_url: ws://livekit:7880
redis:
address: redis:6379
volumes:
- egress-data:/out
depends_on:
volume-init:
condition: service_completed_successfully
livekit:
condition: service_started
redis:
condition: service_healthy
livekit-ip-watcher:
image: alpine:latest
restart: unless-stopped
command: sh /scripts/livekit-ip-watcher.sh
environment:
CHECK_INTERVAL: 300
LIVEKIT_CONTAINER: livekit
volumes:
- ./scripts/livekit-ip-watcher.sh:/scripts/livekit-ip-watcher.sh:ro
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
livekit:
condition: service_started
postgres:
image: postgres:17-alpine
restart: unless-stopped
environment:
POSTGRES_USER: semaphore
POSTGRES_PASSWORD: semaphore
POSTGRES_DB: semaphore
healthcheck:
test: ["CMD-SHELL", "pg_isready -U semaphore"]
interval: 5s
timeout: 5s
retries: 10
volumes:
- pgdata:/var/lib/postgresql/data
redis:
image: redis:latest
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 10s
retries: 10
volumes:
- redisdata:/data
volumes:
pgdata:
redisdata:
uploads:
egress-data:
What's in the LiveKit config:
- `udp_port: 7882` — multiplexes all WebRTC UDP traffic through a single port (no user cap, no port range to forward)
- `use_external_ip: true` — LiveKit discovers its public IP via STUN
- `keys` — the API key/secret pair must match what the backend uses, and the secret must be at least 32 characters or LiveKit will refuse to start
- `webhook` — pre-configured to send voice presence events back to the backend (requires `api_key` to sign payloads)
- `LIVEKIT_INTERNAL_URL` — the backend uses this Docker-internal address for server-to-server API calls, while `LIVEKIT_URL` is the browser-facing address returned to clients
- IP watcher — monitors your public IP and restarts LiveKit if it changes (important for dynamic IPs)
You need to configure your reverse proxy to route traffic to these services. See Reverse proxy and HTTPS below.
services:
backend:
image: ghcr.io/semaphore-chat/semaphore-backend:latest
restart: unless-stopped
ports:
- "3000:3000"
environment:
DATABASE_URL: postgresql://semaphore:semaphore@postgres:5432/semaphore
REDIS_HOST: redis
JWT_SECRET: ${JWT_SECRET:?Set JWT_SECRET in .env}
JWT_REFRESH_SECRET: ${JWT_REFRESH_SECRET:?Set JWT_REFRESH_SECRET in .env}
# Uncomment and fill in to enable voice/video:
# LIVEKIT_URL: ${LIVEKIT_URL:-}
# LIVEKIT_API_KEY: ${LIVEKIT_API_KEY:-}
# LIVEKIT_API_SECRET: ${LIVEKIT_API_SECRET:-}
volumes:
- uploads:/app/backend/uploads
depends_on:
volume-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
redis:
condition: service_healthy
volume-init:
image: busybox
volumes:
- uploads:/uploads
command: chown -R 1001:0 /uploads
restart: "no"
frontend:
image: ghcr.io/semaphore-chat/semaphore-frontend:latest
restart: unless-stopped
ports:
- "5173:5173"
environment:
BACKEND_URL: http://backend:3000
depends_on:
- backend
postgres:
image: postgres:17-alpine
restart: unless-stopped
environment:
POSTGRES_USER: semaphore
POSTGRES_PASSWORD: semaphore
POSTGRES_DB: semaphore
healthcheck:
test: ["CMD-SHELL", "pg_isready -U semaphore"]
interval: 5s
timeout: 5s
retries: 10
volumes:
- pgdata:/var/lib/postgresql/data
redis:
image: redis:latest
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 10s
retries: 10
volumes:
- redisdata:/data
volumes:
pgdata:
redisdata:
uploads:
To enable voice/video later, uncomment the LIVEKIT_* lines and add your credentials to .env. See Connecting your LiveKit server below.
2. Configure environment¶
Create a .env file next to your docker-compose.yml:
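A minimal sketch, assuming the setup with Caddy (the exact variable set depends on which Compose file you chose — drop the `HOST` and `LIVEKIT_*` entries your setup doesn't reference):

```shell
# .env — placeholder values only; generate your own secrets
HOST=semaphore.example.com
JWT_SECRET=change-me-long-random-string
JWT_REFRESH_SECRET=change-me-different-long-random-string
LIVEKIT_API_KEY=semaphore
LIVEKIT_API_SECRET=change-me-must-be-at-least-32-characters
```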
Your domain needs two DNS records pointing to your server — HOST and lk.HOST (e.g. semaphore.example.com and lk.semaphore.example.com), or a wildcard *.semaphore.example.com.
Generate strong secrets with:
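For example, with `openssl` (any source of 32+ random bytes works):

```shell
# Each invocation prints a fresh 64-character hex string
openssl rand -hex 32
```

Run it once per secret and paste each result into `.env`.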
Security
Never use the default secrets in production. Generate unique values for each secret.
See the Configuration page for the full environment variable reference.
3. Start all services¶
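From the directory containing your `docker-compose.yml` and `.env`:

```shell
docker compose up -d
```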
Caddy will automatically obtain TLS certificates from Let's Encrypt for your domain.
| Service | Description | URL |
|---|---|---|
| Caddy | Reverse proxy with automatic HTTPS | https://your-domain.com |
| Frontend | React app (behind Caddy) | — |
| Backend | NestJS API (behind Caddy) | — |
| LiveKit | Voice/video media server | wss://lk.your-domain.com |
| PostgreSQL | Database | internal only |
| Redis | Cache and pub/sub | internal only |
Port forwarding — forward these on your router:
| Port | Protocol | Service |
|---|---|---|
| 443 | TCP | HTTPS (frontend, backend, LiveKit signaling) |
| 80 | TCP | HTTP → HTTPS redirect |
| 7881 | TCP | LiveKit WebRTC (TCP) |
| 7882 | UDP | LiveKit WebRTC (UDP) |
| Service | Description | URL |
|---|---|---|
| Frontend | Nginx serving the React app | http://localhost:5173 |
| Backend | NestJS API | http://localhost:3000 |
| LiveKit | Voice/video media server | wss://lk.your-domain.com |
| PostgreSQL | Database | internal only |
| Redis | Cache and pub/sub | internal only |
Port forwarding — forward these on your router:
| Port | Protocol | Service |
|---|---|---|
| 443 | TCP | Your reverse proxy |
| 5173 | TCP | Frontend (or proxy to it) |
| 3000 | TCP | Backend API (or proxy to it) |
| 7880 | TCP | LiveKit signaling (or proxy to it) |
| 7881 | TCP | LiveKit WebRTC (TCP) |
| 7882 | UDP | LiveKit WebRTC (UDP) |
The frontend's built-in nginx already proxies /api and /socket.io to the backend internally. Your reverse proxy only needs to route by domain:
| Domain | Destination | Notes |
|---|---|---|
| `your-domain.com` | `localhost:5173` | Frontend (handles `/api` and `/socket.io` internally) |
| `lk.your-domain.com` | `localhost:7880` | LiveKit signaling — ensure WebSocket upgrade headers are forwarded |
| Service | Description | URL |
|---|---|---|
| Frontend | Nginx serving the React app | http://localhost:5173 |
| Backend | NestJS API | http://localhost:3000 |
| PostgreSQL | Database | internal only |
| Redis | Cache and pub/sub | internal only |
4. Open Semaphore Chat¶
Visit your domain (or http://localhost:5173 without Caddy) in your browser. You're ready to create your first account.
Stopping and restarting¶
# Stop all services
docker compose down
# Start again (data is persisted in Docker volumes)
docker compose up -d
# Full reset (removes all data)
docker compose down -v
Connecting your LiveKit server¶
If you chose the "Bring your own LiveKit" setup, follow these steps to enable voice and video.
LiveKit Cloud¶
1. Sign up at LiveKit Cloud and create a project
2. Add credentials to your `.env`
3. Uncomment the `LIVEKIT_*` lines in `docker-compose.yml`
4. Configure webhooks in the LiveKit Cloud dashboard — set the URL to `https://your-domain.com/api/livekit/webhook`
5. Restart: `docker compose down && docker compose up -d`
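The credentials in `.env` might look like this (placeholder values — copy the real ones from your LiveKit Cloud project settings):

```shell
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
```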
Replay capture not yet supported with LiveKit Cloud
The replay/clip capture feature requires LiveKit egress and Semaphore Chat to share a filesystem for HLS segment access. LiveKit Cloud writes egress output to cloud storage (S3/GCS/Azure Blob), which Semaphore Chat can't read from yet. Voice and video calls work normally — only replay capture is affected. See #227 for progress on cloud storage support.
Self-hosted LiveKit¶
1. Add credentials to your `.env`
2. Uncomment the `LIVEKIT_*` lines in `docker-compose.yml`
3. Configure webhooks on your LiveKit server to send events to `https://your-domain.com/api/livekit/webhook`
4. Restart: `docker compose down && docker compose up -d`
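For example (placeholder values — `LIVEKIT_URL` is whatever address browsers use to reach your LiveKit signaling endpoint):

```shell
LIVEKIT_URL=wss://lk.your-domain.com
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret-at-least-32-characters
```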
When browser and backend URLs differ
If the backend can't reach LiveKit at the same URL the browser uses (e.g., different networks), set LIVEKIT_INTERNAL_URL to the backend-reachable address. The backend uses this for server-to-server API calls while LIVEKIT_URL is returned to browsers. See the Configuration page for details.
Going to production¶
Architecture overview¶
graph LR
Client[Browser] --> Proxy[Reverse Proxy<br/>nginx / Caddy]
Proxy -->|/| Frontend[Frontend<br/>React + Nginx<br/>:5173]
Proxy -->|/api, /socket.io| Backend[Backend<br/>NestJS<br/>:3000]
Backend --> PostgreSQL[(PostgreSQL<br/>:5432)]
Backend --> Redis[(Redis<br/>:6379)]
Backend --> LiveKit[LiveKit Server]

Change all default secrets

Generate unique random values for every secret. Never commit .env files to version control.
Reverse proxy and HTTPS¶
If you chose the "With Caddy" setup, this is already handled. For the other setups, place a reverse proxy in front of Semaphore Chat to handle TLS termination:
- Proxy `your-domain.com` to the frontend (port 5173) — the frontend's nginx handles `/api` and `/socket.io` routing to the backend internally
- Proxy `lk.your-domain.com` to LiveKit signaling (port 7880)
- Ensure WebSocket upgrade headers are forwarded for LiveKit
Data persistence¶
Docker Compose uses named volumes for PostgreSQL and Redis data. These persist across container restarts.
- Back up PostgreSQL regularly: `docker compose exec postgres pg_dump -U semaphore semaphore > backup.sql`
- Monitor disk usage — PostgreSQL and uploads can grow over time
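To restore such a backup into a fresh deployment, a sketch (assumes the target `semaphore` database exists and is empty):

```shell
# -T disables TTY allocation so stdin redirection works
docker compose exec -T postgres psql -U semaphore semaphore < backup.sql
```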
Resource limits¶
For production, consider adding resource limits in a docker-compose.override.yml:
services:
backend:
deploy:
resources:
limits:
memory: 1G
frontend:
deploy:
resources:
limits:
memory: 512M
Replay capture (LiveKit egress)¶
The replay/clip capture feature requires LiveKit egress to write HLS segments to a location that the Semaphore Chat backend can also read from. Both services need access to the same storage path.
Mount a shared volume into both the LiveKit egress container and the Semaphore Chat backend:
services:
backend:
volumes:
- egress-data:/out
environment:
REPLAY_EGRESS_OUTPUT_PATH: /out
REPLAY_SEGMENTS_PATH: /out
# Your LiveKit egress service must also mount egress-data:/out
volumes:
egress-data:
LiveKit Cloud
LiveKit Cloud writes egress output to cloud storage (S3/GCS/Azure Blob), which Semaphore Chat can't read from yet. Replay capture is not available with LiveKit Cloud until cloud storage support is added. See #227 for progress.
Dynamic IP support¶
If your server has a dynamic public IP (common with residential ISPs), voice and video will break when the IP changes. LiveKit resolves its external IP once at startup via STUN and bakes it into WebRTC ICE candidates — there is no periodic re-resolution.
The "With Caddy" and "Batteries included" setups already include the livekit-ip-watcher service, which monitors your public IP and restarts LiveKit when it changes. No additional setup is needed.
If you're using the "Bring your own LiveKit" setup with a self-hosted LiveKit server, you'll need to handle IP changes on your LiveKit server separately.
Docker socket security
The IP watcher mounts /var/run/docker.sock to restart the LiveKit container. This grants it full Docker API access. Only use this on hosts where you trust all running containers.
Updating¶
The database schema is automatically updated on container startup.
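To update, pull the latest images and recreate the containers:

```shell
docker compose pull
docker compose up -d
```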
Troubleshooting¶
Database connection errors¶
If containers fail to connect to PostgreSQL, ensure the postgres service is healthy:
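For example:

```shell
docker compose ps postgres     # status column should show "healthy"
docker compose logs postgres   # look for startup errors
```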
"Port already in use"¶
Check what's using the port and stop it:
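On Linux, for example (assumes `ss` from iproute2; substitute the conflicting port for 443):

```shell
sudo ss -tulnp | grep ':443'
```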
LiveKit exits immediately¶
Check the logs:
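```shell
docker compose logs livekit
```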
Common causes:
- "secret is too short" — The API secret in the `LIVEKIT_CONFIG` `keys` section and in the backend's `LIVEKIT_API_SECRET` must be at least 32 characters. Both values must match.
- "api_key is required to use webhooks" — The `webhook` section in `LIVEKIT_CONFIG` needs an `api_key` field matching one of the keys defined in the `keys` section.
Containers won't start¶
Try pulling fresh images and recreating:
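```shell
docker compose pull
docker compose up -d --force-recreate
```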
Next steps¶
- Configuration — Full environment variable reference
- First Run — Create your first user, community, and channels
- Kubernetes — Deploy to a Kubernetes cluster