
Kubernetes

Dubby provides a Helm chart for Kubernetes deployments. The chart deploys three separate pods — server, worker, and web — plus an optional Valkey instance for job queues.

| Pod | Purpose | Scalable |
| --- | --- | --- |
| Server | HTTP API, streaming, transcoding | No (SQLite) |
| Worker | Background jobs (scanning, metadata, subtitles) | No (single queue) |
| Web | Static nginx SPA | Yes |
| Valkey | Redis-compatible job queue and real-time events | No |
You'll need:

  • A Kubernetes cluster (k3s, k8s, EKS, etc.)
  • The kubectl and helm CLI tools
  • Persistent storage for /data (database, images, trickplay)
  • Media files accessible to the cluster (NFS, hostPath, PVC)
Create the namespace and the required secret:

kubectl create namespace dubby
kubectl create secret generic dubby-secrets -n dubby \
  --from-literal=better-auth-secret="$(openssl rand -base64 32)" \
  --from-literal=tmdb-api-key="your-tmdb-api-key"
Install the chart:

helm install dubby oci://ghcr.io/dubbytv/dubby --version 0.1.0-beta.7 -n dubby

Or with custom values:

helm install dubby oci://ghcr.io/dubbytv/dubby --version 0.1.0-beta.7 -n dubby -f my-values.yaml

Check the releases page for the latest version.

Verify the deployment:
kubectl get pods -n dubby
# All pods should be Running
kubectl logs deployment/dubby-server -n dubby --tail=5
# Should show "Server starting on http://0.0.0.0:3000"

For the full list of available options, run:

helm show values oci://ghcr.io/dubbytv/dubby --version 0.1.0-beta.7

The key settings are documented below.

server:
  replicaCount: 1
  image:
    repository: dubbytv/server
    tag: latest
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 3000
  resources:
    limits:
      cpu: '4'
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 512Mi

CPU sizing:

| Hardware | Recommended CPU limit |
| --- | --- |
| With GPU | 4–6 cores (GPU handles encode, CPU does decode) |
| Without GPU | 50–75% of host cores |

Adjust cpu limits based on your hardware. A single 4K HEVC → H.264 software transcode can use 8–12 cores.

Memory: 2Gi is typically sufficient. Increase to 4–8Gi for large libraries with many concurrent streams.
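Putting that guidance together, an override file for, say, an 8-core host without a GPU might look like this (illustrative numbers, not chart defaults):

```yaml
# my-values.yaml — illustrative sizing for an 8-core host without a GPU
server:
  resources:
    limits:
      cpu: '6'      # roughly 75% of host cores, per the table above
      memory: 4Gi   # headroom for a large library with concurrent streams
    requests:
      cpu: 500m
      memory: 512Mi
```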

worker:
  replicaCount: 1
  image:
    repository: dubbytv/worker
    tag: latest
    pullPolicy: IfNotPresent
  resources:
    limits:
      cpu: '2'
      memory: 2Gi
    requests:
      cpu: 250m
      memory: 256Mi

The worker processes background jobs: library scanning, metadata enrichment, subtitle extraction, and trickplay generation. It has no HTTP server.

web:
  replicaCount: 1
  image:
    repository: dubbytv/web
    tag: latest
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 80
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 50m
      memory: 64Mi

The web pod is stateless nginx serving the SPA. It can be scaled to multiple replicas.

Each volume supports three modes:

| Mode | Use case | Example |
| --- | --- | --- |
| hostPath | Single-node / bare metal | `hostPath: /apps/dubby` |
| existingClaim | Pre-created PVC (NFS, Ceph…) | `existingClaim: dubby-data-pvc` |
| storageClass | Dynamic provisioning | `storageClass: longhorn` |

Priority: existingClaim > storageClass > hostPath.

Default (hostPath):

volumes:
  config:
    hostPath: /apps/dubby # → /data in container
  media:
    hostPath: /mnt/media # → /media in container
  cache:
    hostPath: /mnt/cache/dubby # → /cache in container

With PVCs:

volumes:
  config:
    existingClaim: dubby-data # pre-created PVC
  media:
    existingClaim: media-nfs # NFS-backed PVC
  cache:
    storageClass: local-path # dynamically provisioned
    size: 50Gi

When using storageClass, the chart creates a PersistentVolumeClaim automatically. The size defaults to 10Gi if not specified.

| Mount | Purpose | Persistent | Back up? |
| --- | --- | --- | --- |
| /data | Database, cached images, trickplay sprites | Yes | Yes |
| /media | Media library (movies, TV shows) | Yes | No |
| /cache | HLS transcode segments, extracted subtitles | No | No |

The chart expects a pre-created Kubernetes secret:

existingSecret: dubby-secrets

The secret must contain:

| Key | Required | Description |
| --- | --- | --- |
| better-auth-secret | Yes | Session signing key. Min 32 characters. |
| tmdb-api-key | No | TMDB API key for metadata. Can be set in the admin UI instead. |
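The 32-character minimum is easy to satisfy with openssl, as in the secret-creation command earlier: base64-encoding 32 random bytes always yields 44 characters.

```shell
# Generate a session signing key and confirm it clears the 32-character minimum
key=$(openssl rand -base64 32)
echo "length: ${#key}"   # 44 characters: 4 * ceil(32 / 3)
```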
env:
  NODE_ENV: production
  PORT: '3000'
  HOST: '0.0.0.0'
  LOG_LEVEL: info
  DUBBY_DATA_DIR: /data
  DATABASE_URL: 'file:/data/dubby.db'

REDIS_URL is automatically set to point to the Valkey service. BETTER_AUTH_SECRET and DUBBY_TMDB_API_KEY are injected from the Kubernetes secret.
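To change a default, set the key under env in your values file, for example to raise log verbosity (this assumes the chart merges your env map over the defaults shown above; verify with helm show values):

```yaml
# my-values.yaml — assumes user-supplied env keys are merged over the chart defaults
env:
  LOG_LEVEL: debug
```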

valkey:
  enabled: true
  image: valkey/valkey:latest
  resources:
    limits:
      cpu: '1'
      memory: 512Mi

Valkey is deployed as an ephemeral single-pod service (data in emptyDir). It handles job queues and real-time events. If the pod restarts, pending jobs are re-queued on next scan.
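If you'd rather use an existing Redis-compatible service, a sketch might look like the following. This assumes the chart honors an explicitly set REDIS_URL when valkey.enabled is false; confirm against helm show values before relying on it.

```yaml
# Hypothetical external-queue setup — verify the chart supports this
valkey:
  enabled: false
env:
  REDIS_URL: 'redis://my-redis.infra.svc.cluster.local:6379'
```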

The chart can deploy the Intel GPU device plugin as a DaemonSet:

gpu:
  enabled: true
  type: intel
  sharedDevNum: 10 # pods sharing the GPU concurrently

This deploys intel/intel-gpu-plugin and requests gpu.intel.com/i915: "1" on the server pod. The device plugin handles /dev/dri access and cgroup permissions.

Request the GPU in server resources:

server:
  resources:
    limits:
      gpu.intel.com/i915: '1'

For NVIDIA GPUs, deploy the NVIDIA device plugin separately, then request the GPU:

server:
  resources:
    limits:
      nvidia.com/gpu: '1'

Set gpu.type: nvidia in values and remove the Intel resource request.

To run without a GPU, remove the GPU resource request from server.resources.limits. Dubby falls back to CPU-based FFmpeg transcoding (slower but functional).

Dubby runs as two services: the server (API, streaming, transcoding) and web (static nginx SPA). Traffic must be split between them:

| Path prefix | Destination |
| --- | --- |
| /api/*, /sse/*, /health/* | Server (port 3000) |
| /* (everything else) | Web (port 80) |

The chart includes templates for both Gateway API and Ingress. Pick one — or bring your own routing.
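If you do bring your own routing, the split above translates into something like this hand-rolled Ingress. It's a sketch: the dubby-server and dubby-web service names match the port-forward examples later in this page, but verify them with kubectl get svc -n dubby.

```yaml
# Sketch of a hand-rolled path split — verify service names with `kubectl get svc -n dubby`
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dubby-custom
  namespace: dubby
spec:
  ingressClassName: nginx
  rules:
    - host: dubby.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service: { name: dubby-server, port: { number: 3000 } }
          - path: /sse
            pathType: Prefix
            backend:
              service: { name: dubby-server, port: { number: 3000 } }
          - path: /health
            pathType: Prefix
            backend:
              service: { name: dubby-server, port: { number: 3000 } }
          - path: /
            pathType: Prefix
            backend:
              service: { name: dubby-web, port: { number: 80 } }
```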

The Gateway API is the modern successor to Ingress. It works with any compliant implementation — Cilium, Envoy Gateway, Traefik, nginx-gateway-fabric, etc.

gateway:
  enabled: true
  host: dubby.example.com
  gatewayName: my-gateway # name of your Gateway resource
  gatewayNamespace: kube-system # namespace where the Gateway lives

The chart creates an HTTPRoute attached to your existing Gateway. You must already have a Gateway resource deployed by your controller — the chart does not create one.
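If you don't have a Gateway yet, a minimal one matching the values above could look like this. The gatewayClassName is an assumption here; it depends entirely on your controller (cilium, envoy-gateway, traefik, etc.).

```yaml
# Minimal Gateway matching gatewayName/gatewayNamespace above
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: kube-system
spec:
  gatewayClassName: cilium # assumption — use your controller's class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All # let the HTTPRoute in the dubby namespace attach
```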

For clusters using a traditional Ingress controller (nginx, Traefik, HAProxy, etc.):

ingress:
  enabled: true
  className: nginx # your IngressClass name
  host: dubby.example.com
  annotations: {}

TLS with cert-manager:

ingress:
  enabled: true
  className: nginx
  host: dubby.example.com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: dubby-tls
      hosts:
        - dubby.example.com

Traefik-specific annotations:

ingress:
  enabled: true
  className: traefik
  host: dubby.example.com
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure

For testing or single-node clusters, you can skip ingress entirely and access Dubby via port-forwarding:

# Forward the server (API + streaming)
kubectl port-forward -n dubby svc/dubby-server 3000:3000
# Forward the web UI
kubectl port-forward -n dubby svc/dubby-web 8080:80

Then open http://localhost:8080 in your browser. The web UI will connect to the server at localhost:3000.

Alternatively, change the service types to NodePort or LoadBalancer in your values:

server:
  service:
    type: NodePort
    port: 3000
web:
  service:
    type: NodePort
    port: 80

The server pod runs two init containers before starting:

  1. fix-permissions — Ensures /data and /cache are owned by the app user (UID 1000)
  2. migrate — Runs database migrations to keep the schema in sync

These run on every pod start, ensuring the database is always up to date after image upgrades.
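Conceptually, the rendered server pod looks roughly like this. It's a simplified sketch only: the init images and commands below are illustrative, not the chart's literal output.

```yaml
# Simplified sketch of the rendered server pod — not the chart's exact manifest
apiVersion: v1
kind: Pod
spec:
  initContainers:
    - name: fix-permissions
      image: busybox # illustrative image
      command: ['chown', '-R', '1000:1000', '/data', '/cache']
    - name: migrate
      image: dubbytv/server
      command: ['migrate'] # hypothetical entrypoint
  containers:
    - name: server
      image: dubbytv/server
```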

The chart configures liveness and readiness probes on the server:

livenessProbe:
  httpGet:
    path: /health/live
    port: http
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /health/ready
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10

The readiness probe checks database connectivity, so the pod won’t receive traffic until migrations are complete.

Advanced server configuration can be provided via a ConfigMap:

config:
  transcoding:
    defaultCrf: 23
    defaultPreset: fast
    hlsSegmentDuration: 4
  session:
    maxTranscodesPerUser: 2
  privacy:
    level: private

These values take precedence over database settings but are overridden by DUBBY_* environment variables.
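For instance, an environment variable following the DUBBY_* pattern would win over the ConfigMap value. The exact variable name below is hypothetical; check the server configuration reference for real names.

```yaml
# Hypothetical variable name — consult the server docs for actual DUBBY_* keys
env:
  DUBBY_TRANSCODING_DEFAULT_CRF: '20' # would override config.transcoding.defaultCrf
```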

All deployments support nodeSelector, tolerations, and affinity. Set them globally or per component:

# Global defaults (apply to server, worker, web, and valkey)
nodeSelector:
  kubernetes.io/arch: amd64
tolerations:
  - key: gpu
    operator: Exists
    effect: NoSchedule

# Per-component override (takes precedence over global)
server:
  nodeSelector:
    node-role.kubernetes.io/media: ''
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.intel.com/device-id
                operator: Exists

Pod and service annotations can be set per component:

server:
  podAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '3000'
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: dubby.example.com
Upgrade with:

helm upgrade dubby oci://ghcr.io/dubbytv/dubby \
  --version 0.1.0-beta.7 \
  -n dubby \
  -f my-values.yaml

The migrate init container runs automatically on pod startup, so database schema updates are applied before the server starts accepting traffic.

To check your current chart version and roll back if needed:

# Current version
helm list -n dubby
# Rollback to a previous revision
helm history dubby -n dubby
helm rollback dubby <revision> -n dubby

See Updating for release channels, version pinning, and rollback.

To uninstall:
helm uninstall dubby -n dubby

This removes all pods and services but does not delete persistent data. Your /data directory (database, images) and /media files remain intact.

If a pod is crash-looping, check the logs for the failing container:

kubectl logs -n dubby deployment/dubby-server --previous

Common causes:

| Symptom in logs | Fix |
| --- | --- |
| EACCES or permission errors on /data | The fix-permissions init container must run as root to fix ownership; ensure your security policies allow runAsUser: 0 for init containers. |
| missing secret or BETTER_AUTH_SECRET empty | Create the dubby-secrets secret — see Secrets. |
| Migration error | Check the migrate init container logs: `kubectl logs -n dubby deployment/dubby-server -c migrate`. Usually a corrupt or locked database file. |
If a pod is stuck in Pending, describe it:
kubectl describe pod -n dubby -l app.kubernetes.io/component=server

Look at the Events section at the bottom:

| Event message | Fix |
| --- | --- |
| Insufficient gpu.intel.com/i915 | The GPU device plugin isn’t running, or the node has no Intel GPU. Check `kubectl get nodes -o json \| jq '.items[].status.capacity'`. |
| Insufficient cpu or Insufficient memory | Lower the resource requests in values, or add nodes. |
| 0/N nodes are available: persistentvolumeclaim not found | The PVC doesn’t exist. If using existingClaim, create the PVC first. If using storageClass, verify the storage class exists: `kubectl get sc`. |
| 0/N nodes are available: node(s) didn't match Pod's node selector | Update nodeSelector to match your node labels. Check with `kubectl get nodes --show-labels`. |

The web UI loads from the web pod, but API calls go to the server pod. If the UI shows a blank screen or network errors:

  1. Verify the server is running: kubectl logs -n dubby deployment/dubby-server --tail=5
  2. Test the API directly: kubectl exec -n dubby deployment/dubby-web -- wget -qO- http://dubby-server:3000/health/live
  3. Check your ingress/gateway routes — /api/*, /sse/*, and /health/* must all route to the server service.

With nginx ingress, ensure path rules use Prefix matching (the chart does this by default). If you’re using a custom ingress, a common mistake is routing only /api but missing /sse and /health.

Server-Sent Events and HLS streaming require long-lived connections. If events stop arriving or playback cuts out after 30–60 seconds:

nginx ingress — increase proxy timeouts:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: '86400'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '86400'
    nginx.ingress.kubernetes.io/proxy-buffering: 'off'

Traefik — disable response buffering:

ingress:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: ''

Cloud load balancers (AWS ALB, GCP LB) often have a default idle timeout of 60 seconds. Increase it to at least 3600 seconds for streaming.
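With the AWS Load Balancer Controller, for example, the idle timeout can be raised via an ingress annotation. This sketch assumes you route through the chart's ingress.annotations; adjust if you configure the ALB elsewhere.

```yaml
# AWS Load Balancer Controller — raise the ALB idle timeout for SSE/HLS
ingress:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600
```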

If using hostPath, the directories must be writable by UID 1000 (the app user). The fix-permissions init container handles /data and /cache, but the host directories must exist:

# On the node
sudo mkdir -p /apps/dubby /mnt/cache/dubby
sudo chown -R 1000:1000 /apps/dubby /mnt/cache/dubby

If the init container itself fails with permission errors, your cluster may have a restrictive PodSecurityPolicy or PodSecurityStandard that blocks runAsUser: 0. Either allow the init container to run as root, or pre-create the directories with correct ownership.

The path you configure in the Dubby UI as a library root must match the container mount path, not the host path. Media is always mounted at /media inside the container. If your host path is /mnt/storage/media/movies, add /media/movies as the library path in the UI.

Useful log commands:
# Server logs (API + transcoding)
kubectl logs -n dubby deployment/dubby-server -f
# Worker logs (background jobs)
kubectl logs -n dubby deployment/dubby-worker -f
# Init container logs (migrations)
kubectl logs -n dubby deployment/dubby-server -c migrate