🚧 These docs are a work in progress and may contain inaccuracies. Content is being actively reviewed and validated.

Networking

| Variable | Default | Description |
| --- | --- | --- |
| `PORT` | `3000` | HTTP server port |
| `HOST` | `0.0.0.0` | Bind address (`0.0.0.0` listens on all interfaces) |
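For example, these variables can be set in a Compose file (a sketch; the service name and image tag are assumptions):

```yaml
services:
  dubby:
    image: dubby:latest   # assumed image name
    environment:
      PORT: '3000'        # HTTP server port (default)
      HOST: '0.0.0.0'     # listen on all interfaces (default)
```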

In Docker Compose, map the container port to your preferred host port:

```yaml
ports:
  - '8080:3000' # Access Dubby on port 8080
```

In Docker Compose, containers communicate by service name on a shared bridge network. Dubby connects to Valkey via redis://valkey:6379 without exposing any ports to the host.

The only port you need to expose is Dubby’s HTTP port for local access:

```yaml
ports:
  - '3000:3000'
```
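Putting both pieces together, a minimal Compose sketch of this layout (the image names and the `REDIS_URL` variable name are assumptions; only Dubby's HTTP port is published):

```yaml
services:
  dubby:
    image: dubby:latest                # assumed image name
    ports:
      - '3000:3000'                    # only Dubby's HTTP port is exposed
    environment:
      REDIS_URL: redis://valkey:6379   # assumed variable name; reach Valkey by service name
  valkey:
    image: valkey/valkey:8             # no ports: section — unreachable from the host
```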

Dubby listens on HTTP only and does not handle TLS. For HTTPS, place it behind a reverse proxy (Nginx, Caddy, Traefik, etc.) that terminates TLS and forwards traffic to Dubby.
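As a sketch, an Nginx server block that terminates TLS and forwards to Dubby (the certificate paths, hostname, and upstream address are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name dubby.example.com;

    # Assumed certificate paths
    ssl_certificate     /etc/ssl/certs/dubby.pem;
    ssl_certificate_key /etc/ssl/private/dubby.key;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Dubby's HTTP port
        proxy_set_header Host $host;
    }
}
```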

Dubby reads the following headers from your reverse proxy for rate limiting and audit logging:

| Header | Purpose |
| --- | --- |
| `X-Forwarded-For` | Client IP (rightmost value is trusted) |
| `X-Real-IP` | Client IP fallback (Nginx-style) |

Make sure your reverse proxy sets these headers so rate limiting and audit logs use the real client IP instead of the proxy’s IP.
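For Nginx, that typically means adding these directives to the proxied location (a sketch, assuming Dubby listens on `127.0.0.1:3000`):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    # Append the client IP to any existing X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Nginx-style fallback header with the direct client address
    proxy_set_header X-Real-IP $remote_addr;
}
```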

In development mode (NODE_ENV=development), Dubby automatically allows the request’s origin — no configuration needed.

In production mode, cross-origin requests are blocked unless explicitly allowed. Set allowed origins via environment variable or config file:

```sh
# Environment variable (JSON array)
DUBBY_SERVER_ALLOWED_ORIGINS='["https://dubby.example.com"]'
```

dubby.yaml:

```yaml
server:
  allowedOrigins: ['https://dubby.example.com']
```

If Dubby and your web client are served from the same origin (the typical Docker setup), you don’t need to configure CORS.

Dubby uses Server-Sent Events (SSE) to push real-time updates to clients (workflow progress, scan status, etc.). The connection is kept alive with periodic pings to prevent intermediate proxies from closing idle connections.

| Variable | Default | Description |
| --- | --- | --- |
| `DUBBY_SERVER_SSE_PING_INTERVAL_MS` | `30000` | Keepalive ping interval (5s–120s) |

If you’re behind a reverse proxy, make sure its read timeout is longer than the ping interval (30 seconds by default). Most proxies default to 60 seconds, which works fine.
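For Nginx, the relevant directives for the SSE location might look like this (a sketch; the `/api/events` path is an assumption):

```nginx
location /api/events {              # hypothetical SSE path
    proxy_pass http://127.0.0.1:3000;
    proxy_read_timeout 60s;         # must exceed the 30s ping interval
    proxy_buffering off;            # deliver SSE events immediately
    proxy_http_version 1.1;
    proxy_set_header Connection '';
}
```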

The server applies in-memory, per-IP rate limiting to prevent abuse. All limits are configurable via environment variables or the config file.

| Endpoint | Default limit | Default window | Env var |
| --- | --- | --- | --- |
| Login | 5 attempts | 15 minutes | `DUBBY_RATE_LIMIT_LOGIN_MAX_ATTEMPTS` |
| Registration | 3 attempts | 1 hour | `DUBBY_RATE_LIMIT_REGISTER_MAX_ATTEMPTS` |
| Token refresh | 10 attempts | 1 minute | `DUBBY_RATE_LIMIT_REFRESH_MAX_ATTEMPTS` |

| Endpoint | Default limit | Default window | Env var |
| --- | --- | --- | --- |
| General API | 100 requests | 1 minute | `DUBBY_RATE_LIMIT_API_MAX_REQUESTS` |
| Speedtest | 3 requests | 1 minute | — (not configurable) |

Rate-limited responses return 429 Too Many Requests with a Retry-After header.
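A well-behaved client can honor the `Retry-After` header before retrying. A minimal sketch of the parsing side (not part of Dubby itself; handles only the delta-seconds form, not the HTTP-date form):

```python
def retry_after_seconds(headers: dict, default: int = 1) -> int:
    """Return how long to wait (in seconds) based on a Retry-After header.

    Falls back to `default` when the header is missing or not a plain
    integer (e.g. the HTTP-date form, which this sketch does not handle).
    """
    value = headers.get("Retry-After")
    if value is None:
        return default
    try:
        return max(0, int(value))
    except ValueError:
        return default
```

After a `429`, sleep for `retry_after_seconds(response.headers)` before the next attempt.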

These endpoints require no authentication and are useful for load balancers and container orchestration:

| Endpoint | Purpose | Response |
| --- | --- | --- |
| `GET /health/` | Basic health check | `{ status: "ok" }` |
| `GET /health/live` | Kubernetes liveness probe | `{ status: "alive" }` |
| `GET /health/ready` | Readiness probe (checks DB connectivity) | `{ status: "ready" }` or 503 |

The readiness endpoint returns HTTP 503 during startup and graceful shutdown, preventing traffic from reaching a pod that isn’t ready to serve requests.
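These endpoints map naturally onto Kubernetes probes; a sketch of the container spec (image name and timings are assumptions):

```yaml
containers:
  - name: dubby
    image: dubby:latest        # assumed image name
    livenessProbe:
      httpGet:
        path: /health/live
        port: 3000
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /health/ready    # returns 503 until the DB is reachable
        port: 3000
      periodSeconds: 5
```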