Docker Services¶
Reference for container configuration and management.
Docker assets live in `docker/`. The default `.env.example` sets
`COMPOSE_FILE=docker/docker-compose.yaml:docker/docker-compose.local.yaml`,
so local `docker compose` commands can still be run from the repository root
after the setup wizard generates `.env`.
Services overview¶
| Service | Image | Port | Purpose |
|---|---|---|---|
| `postgres` | `postgres:18.3-alpine` | `${POSTGRES_PORT:-5432}` | Database |
| `pgbouncer` | `edoburu/pgbouncer:v1.25.1-p0` | internal | PostgreSQL connection pool |
| `redis` | `redis:8.6.2` | `${REDIS_PORT:-6379}` | Cache, throttling, Celery broker/result backend |
| `minio` | `minio/minio:RELEASE.2025-09-07T16-13-09Z` | `${MINIO_API_PORT:-9000}`, `${MINIO_CONSOLE_PORT:-9001}` | Object storage (S3-compatible) |
PostgreSQL¶
Configuration¶
```yaml
postgres:
  image: postgres:18.3-alpine
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: example-postgres-password
    POSTGRES_DB: postgres
  ports:
    - "${POSTGRES_PORT:-5432}:5432"
  volumes:
    - postgres_data:/var/lib/postgresql
```
Connection string¶
```
DATABASE_URL=postgres://postgres:example-postgres-password@localhost:${POSTGRES_PORT:-5432}/postgres
```
Containers connect through PgBouncer by default.
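Inside the Compose network, containers reach the pool via the `pgbouncer` hostname rather than `localhost`. Assuming PgBouncer listens on its default port 5432 (the port is not shown in this config), the container-side URL would look like:

```
DATABASE_URL=postgres://postgres:example-postgres-password@pgbouncer:5432/postgres
```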
Commands¶
```shell
# Start
docker compose up -d postgres

# View logs
docker compose logs -f postgres

# Connect with psql
docker compose exec postgres sh -c 'psql -U "$POSTGRES_USER" -d "$POSTGRES_DB"'

# Stop
docker compose stop postgres
```
Redis¶
Configuration¶
```yaml
redis:
  image: redis:8.6.2
  command:
    - redis-server
    - --requirepass
    - ${REDIS_PASSWORD}
  ports:
    - "${REDIS_PORT:-6379}:6379"
  volumes:
    - redis_data:/data
  healthcheck:
    test: ["CMD-SHELL", "REDISCLI_AUTH=\"$${REDIS_PASSWORD}\" redis-cli ping | grep PONG"]
```
Connection string¶
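The exact variable name consumed by the application is not shown here; assuming a conventional `REDIS_URL`, a local connection string for this password-protected instance would be:

```
REDIS_URL=redis://:${REDIS_PASSWORD}@localhost:${REDIS_PORT:-6379}/0
```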
Commands¶
```shell
# Start
docker compose up -d redis

# View logs
docker compose logs -f redis

# Connect with redis-cli
docker compose exec redis sh -c 'REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli'

# Monitor commands
docker compose exec redis sh -c 'REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli MONITOR'

# Stop
docker compose stop redis
```
MinIO (S3 storage)¶
Configuration¶
```yaml
minio:
  image: minio/minio:RELEASE.2025-09-07T16-13-09Z
  command: server /data --console-address ":9001"
  environment:
    MINIO_ROOT_USER: ${MINIO_ROOT_USER}
    MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
  ports:
    - "${MINIO_API_PORT:-9000}:9000"     # API
    - "${MINIO_CONSOLE_PORT:-9001}:9001" # Console
  volumes:
    - minio_data:/data
```
Environment variables¶
```
MINIO_API_PORT=9000
MINIO_CONSOLE_PORT=9001
MINIO_ROOT_USER=example-minio-access-key-id
MINIO_ROOT_PASSWORD=example-minio-secret-access-key
AWS_S3_ENDPOINT_URL=http://localhost:${MINIO_API_PORT}
AWS_S3_PUBLIC_ENDPOINT_URL=http://localhost:${MINIO_API_PORT}
AWS_S3_ACCESS_KEY_ID=example-minio-access-key-id
AWS_S3_SECRET_ACCESS_KEY=example-minio-secret-access-key
AWS_S3_PUBLIC_BUCKET_NAME=public
AWS_S3_PROTECTED_BUCKET_NAME=protected
```
Commands¶
```shell
# Start
docker compose up -d minio
docker compose up minio-create-buckets

# View logs
docker compose logs -f minio

# Access console
open http://localhost:9001

# Stop
docker compose stop minio
```
Web console¶
Access the MinIO console at http://localhost:9001:

- Username: value of `MINIO_ROOT_USER`
- Password: value of `MINIO_ROOT_PASSWORD`
Health checks¶
The `api` service healthcheck calls the existing HTTP health endpoint:
```yaml
api:
  healthcheck:
    test:
      - CMD
      - python
      - -c
      - "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/v1/health', timeout=8).read()"
```
`GET /v1/health` checks database connectivity, enqueues the built-in Celery
ping task with `.adelay()`, waits for the worker result, and forgets the
result after it is read. This keeps Docker health aligned with the real HTTP
readiness contract instead of adding a second Celery-only health entrypoint.
Init containers¶
Migrations¶
```yaml
migrations:
  <<: *common
  command: python management/manage.py migrate --noinput
  depends_on:
    pgbouncer:
      condition: service_healthy
```
Run:
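The one-shot migrations job can be started on its own (it also appears in the reset sequence later on this page):

```shell
docker compose up migrations
```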
Collect static¶
```yaml
collectstatic:
  <<: *common
  command: python management/manage.py collectstatic --noinput
  depends_on:
    pgbouncer:
      condition: service_healthy
    minio-create-buckets:
      condition: service_completed_successfully
```
Run:
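Like migrations, this is a one-shot job:

```shell
docker compose up collectstatic
```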
`AWS_S3_ENDPOINT_URL` is the internal container endpoint, while `AWS_S3_PUBLIC_ENDPOINT_URL`
must be browser-reachable for Django admin static files.
Common operations¶
Start local services¶
```shell
# Local Docker PostgreSQL and Redis
docker compose up -d postgres redis

# If you selected local MinIO storage
docker compose up -d minio
docker compose up minio-create-buckets
```
Stop all services¶
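To stop every running service defined in the Compose file without removing containers or data:

```shell
docker compose stop
```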
Reset local Docker data¶
```shell
docker compose down -v   # Remove volumes
docker compose up -d postgres redis

# If you selected local MinIO storage
docker compose up -d minio
docker compose up minio-create-buckets

docker compose up migrations collectstatic
```
View all logs¶
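To follow logs from every service at once:

```shell
docker compose logs -f
```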
Check service status¶
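To list containers, their health, and port mappings:

```shell
docker compose ps
```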
Restart a service¶
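To restart a single service (using `postgres` as an example):

```shell
docker compose restart postgres
```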
Volumes¶
| Volume | Service | Purpose |
|---|---|---|
| `postgres_data` | PostgreSQL | Database files |
| `redis_data` | Redis | Persistence |
| `minio_data` | MinIO | Object storage |
Inspect volume¶
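Compose prefixes volume names with the project name, so list them first and then inspect (the exact prefix depends on your project directory):

```shell
docker volume ls
docker volume inspect <project>_postgres_data
```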
Remove volume¶
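Stop the owning containers first, then remove the named volume (same `<project>` prefix as above):

```shell
docker compose down
docker volume rm <project>_postgres_data
```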
Network¶
All services connect to the default Compose network for inter-service communication.
Internal hostnames:
- `postgres` - Database
- `redis` - Cache
- `minio` - Object storage
Troubleshooting¶
Port already in use¶
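Every published port is configurable through `.env`. To find the conflicting process (assuming `lsof` is available), then remap and restart:

```shell
lsof -i :5432   # identify what holds the port
# then set e.g. POSTGRES_PORT=5433 in .env and restart
docker compose up -d postgres
```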
Container won't start¶
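Check the container's state and logs; most startup failures surface there:

```shell
docker compose ps
docker compose logs postgres   # substitute the failing service
```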
Database connection refused¶
If you selected local Docker PostgreSQL, ensure it is running and healthy:
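Since containers route through PgBouncer by default, check both containers:

```shell
docker compose ps postgres pgbouncer
docker compose logs -f pgbouncer
```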
MinIO bucket not found¶
Run bucket creation:
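The same one-shot job used during setup creates any missing buckets:

```shell
docker compose up minio-create-buckets
```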
Reset to clean state¶
```shell
docker compose down -v
docker compose up -d postgres redis

# If you selected local MinIO storage
docker compose up -d minio
docker compose up minio-create-buckets

docker compose up migrations collectstatic
```
Production considerations¶
For production deployments:
- **Use managed services**: database, cache, and object storage providers
- **Set strong passwords**: don't use defaults
- **Enable persistence**: configure backup strategies
- **Use health checks**: add them to the compose file
- **Set resource limits**: memory and CPU limits
Example health check:
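A sketch for PostgreSQL using the `pg_isready` utility shipped in the image (the interval, timeout, and retry values here are illustrative):

```yaml
postgres:
  healthcheck:
    # $$ escapes the dollar sign so Compose passes the variable through
    test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
    interval: 10s
    timeout: 5s
    retries: 5
```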