# Kubernetes
Dubby provides a Helm chart for Kubernetes deployments. The chart deploys three separate pods — server, worker, and web — plus an optional Valkey instance for job queues.
## Architecture

| Pod | Purpose | Scalable |
|---|---|---|
| Server | HTTP API, streaming, transcoding | No (SQLite) |
| Worker | Background jobs (scanning, metadata, subtitles) | No (single queue) |
| Web | Static nginx SPA | Yes |
| Valkey | Redis-compatible job queue and real-time events | No |
## Prerequisites

- Kubernetes cluster (k3s, k8s, EKS, etc.)
- `kubectl` and `helm` CLI tools
- Persistent storage for `/data` (database, images, trickplay)
- Media files accessible to the cluster (NFS, hostPath, PVC)
## Quick start

### 1. Create namespace and secrets

```sh
kubectl create namespace dubby

kubectl create secret generic dubby-secrets -n dubby \
  --from-literal=better-auth-secret="$(openssl rand -base64 32)" \
  --from-literal=tmdb-api-key="your-tmdb-api-key"
```

### 2. Install the chart

```sh
helm install dubby oci://ghcr.io/dubbytv/dubby --version 0.1.0-beta.7 -n dubby
```

Or with custom values:

```sh
helm install dubby oci://ghcr.io/dubbytv/dubby --version 0.1.0-beta.7 -n dubby -f my-values.yaml
```

Check the releases page for the latest version.
### 3. Verify

```sh
kubectl get pods -n dubby
# All pods should be Running

kubectl logs deployment/dubby-server -n dubby --tail=5
# Should show "Server starting on http://0.0.0.0:3000"
```

## Configuration

### values.yaml reference

For the full list of available options, run:

```sh
helm show values oci://ghcr.io/dubbytv/dubby --version 0.1.0-beta.7
```

The key settings are documented below.
### Server

```yaml
server:
  replicaCount: 1
  image:
    repository: dubbytv/server
    tag: latest
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 3000
  resources:
    limits:
      cpu: '4'
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 512Mi
```

CPU sizing:

| Hardware | Recommended CPU limit |
|---|---|
| With GPU | 4–6 cores (GPU handles encode, CPU does decode) |
| Without GPU | 50–75% of host cores |

Adjust `cpu` limits based on your hardware. A single 4K HEVC → H.264 software transcode can use 8–12 cores.

Memory: 2Gi is typically sufficient. Increase to 4–8Gi for large libraries with many concurrent streams.
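Applying the sizing table above, a sketch for a 16-core host without a GPU (the numbers are illustrative, not chart defaults):

```yaml
# Illustrative only: roughly 75% of a 16-core host, per the sizing table above
server:
  resources:
    limits:
      cpu: '12'
      memory: 4Gi   # extra headroom for a large library
    requests:
      cpu: 500m
      memory: 512Mi
```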
### Worker

```yaml
worker:
  replicaCount: 1
  image:
    repository: dubbytv/worker
    tag: latest
    pullPolicy: IfNotPresent
  resources:
    limits:
      cpu: '2'
      memory: 2Gi
    requests:
      cpu: 250m
      memory: 256Mi
```

The worker processes background jobs: library scanning, metadata enrichment, subtitle extraction, and trickplay generation. It has no HTTP server.
### Web

```yaml
web:
  replicaCount: 1
  image:
    repository: dubbytv/web
    tag: latest
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 80
  resources:
    limits:
      cpu: 500m
      memory: 256Mi
    requests:
      cpu: 50m
      memory: 64Mi
```

The web pod is stateless nginx serving the SPA. It can be scaled to multiple replicas.
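Because the web pod serves only static files, scaling it is just a matter of raising the replica count; a minimal sketch:

```yaml
web:
  replicaCount: 3   # stateless nginx; any replica count is safe
```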
### Volumes

Each volume supports three modes:

| Mode | Use case | Example |
|---|---|---|
| `hostPath` | Single-node / bare metal | `hostPath: /apps/dubby` |
| `existingClaim` | Pre-created PVC (NFS, Ceph…) | `existingClaim: dubby-data-pvc` |
| `storageClass` | Dynamic provisioning | `storageClass: longhorn` |

Priority: `existingClaim` > `storageClass` > `hostPath`.
Default (hostPath):

```yaml
volumes:
  config:
    hostPath: /apps/dubby       # → /data in container
  media:
    hostPath: /mnt/media        # → /media in container
  cache:
    hostPath: /mnt/cache/dubby  # → /cache in container
```

With PVCs:

```yaml
volumes:
  config:
    existingClaim: dubby-data   # pre-created PVC
  media:
    existingClaim: media-nfs    # NFS-backed PVC
  cache:
    storageClass: local-path    # dynamically provisioned
    size: 50Gi
```

When using `storageClass`, the chart creates a PersistentVolumeClaim automatically. The size defaults to 10Gi if not specified.
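For the `existingClaim` mode, the PVC must exist before the chart is installed. A minimal sketch of a statically bound NFS-backed PV/PVC pair (the server address and export path are placeholders for your environment):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 1Ti
  accessModes: [ReadOnlyMany]
  nfs:
    server: 192.168.1.10    # placeholder: your NFS server
    path: /export/media     # placeholder: your export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-nfs
  namespace: dubby
spec:
  accessModes: [ReadOnlyMany]
  storageClassName: ''      # bind to the static PV above, not a provisioner
  volumeName: media-nfs
  resources:
    requests:
      storage: 1Ti
```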
| Mount | Purpose | Persistent | Back up? |
|---|---|---|---|
| `/data` | Database, cached images, trickplay sprites | Yes | Yes |
| `/media` | Media library (movies, TV shows) | Yes | No |
| `/cache` | HLS transcode segments, extracted subtitles | No | No |
### Secrets

The chart expects a pre-created Kubernetes secret:

```yaml
existingSecret: dubby-secrets
```

The secret must contain:

| Key | Required | Description |
|---|---|---|
| `better-auth-secret` | Yes | Session signing key. Min 32 characters. |
| `tmdb-api-key` | No | TMDB API key for metadata. Can be set in the admin UI instead. |
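If you prefer declarative manifests over the `kubectl create secret` command from the quick start, an equivalent Secret sketch (both values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dubby-secrets
  namespace: dubby
type: Opaque
stringData:
  better-auth-secret: replace-with-a-random-string-of-32-plus-characters
  tmdb-api-key: your-tmdb-api-key   # optional; can be set in the admin UI instead
```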
### Environment variables

```yaml
env:
  NODE_ENV: production
  PORT: '3000'
  HOST: '0.0.0.0'
  LOG_LEVEL: info
  DUBBY_DATA_DIR: /data
  DATABASE_URL: 'file:/data/dubby.db'
```

`REDIS_URL` is automatically set to point to the Valkey service. `BETTER_AUTH_SECRET` and `DUBBY_TMDB_API_KEY` are injected from the Kubernetes secret.
### Valkey

```yaml
valkey:
  enabled: true
  image: valkey/valkey:latest
  resources:
    limits:
      cpu: '1'
      memory: 512Mi
```

Valkey is deployed as an ephemeral single-pod service (data in an emptyDir). It handles job queues and real-time events. If the pod restarts, pending jobs are re-queued on the next scan.
## GPU transcoding

### Intel (QSV / VAAPI)

The chart can deploy the Intel GPU device plugin as a DaemonSet:

```yaml
gpu:
  enabled: true
  type: intel
  sharedDevNum: 10  # pods sharing the GPU concurrently
```

This deploys `intel/intel-gpu-plugin` and requests `gpu.intel.com/i915: "1"` on the server pod. The device plugin handles `/dev/dri` access and cgroup permissions.

Request the GPU in server resources:

```yaml
server:
  resources:
    limits:
      gpu.intel.com/i915: '1'
```

### NVIDIA

For NVIDIA GPUs, deploy the NVIDIA device plugin separately, then request the GPU:

```yaml
server:
  resources:
    limits:
      nvidia.com/gpu: '1'
```

Set `gpu.type: nvidia` in values and remove the Intel resource request.
### No GPU

Remove the GPU resource request from `server.resources.limits`. Dubby falls back to CPU-based FFmpeg transcoding (slower but functional).
## Routing

Dubby runs as two services: the server (API, streaming, transcoding) and web (static nginx SPA). Traffic must be split between them:

| Path prefix | Destination |
|---|---|
| `/api/*`, `/sse/*`, `/health/*` | Server (port 3000) |
| `/*` (everything else) | Web (port 80) |

The chart includes templates for both Gateway API and Ingress. Pick one, or bring your own routing.
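If you bring your own routing, the split above can be expressed as a plain Ingress. A sketch assuming the chart's default service names (`dubby-server` and `dubby-web`, as used elsewhere on this page):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dubby-custom
  namespace: dubby
spec:
  ingressClassName: nginx   # your IngressClass
  rules:
    - host: dubby.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend: { service: { name: dubby-server, port: { number: 3000 } } }
          - path: /sse
            pathType: Prefix
            backend: { service: { name: dubby-server, port: { number: 3000 } } }
          - path: /health
            pathType: Prefix
            backend: { service: { name: dubby-server, port: { number: 3000 } } }
          - path: /
            pathType: Prefix
            backend: { service: { name: dubby-web, port: { number: 80 } } }
```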
### Gateway API

The Gateway API is the modern successor to Ingress. It works with any compliant implementation: Cilium, Envoy Gateway, Traefik, nginx-gateway-fabric, etc.

```yaml
gateway:
  enabled: true
  host: dubby.example.com
  gatewayName: my-gateway        # name of your Gateway resource
  gatewayNamespace: kube-system  # namespace where the Gateway lives
```

The chart creates an HTTPRoute attached to your existing Gateway. You must already have a Gateway resource deployed by your controller; the chart does not create one.
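If you do not have a Gateway yet, a minimal sketch of one that an HTTPRoute in the `dubby` namespace could attach to (the `gatewayClassName` depends entirely on your controller):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: kube-system
spec:
  gatewayClassName: cilium   # placeholder: your implementation's GatewayClass
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All   # allow HTTPRoutes from other namespaces (e.g. dubby)
```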
### Ingress

For clusters using a traditional Ingress controller (nginx, Traefik, HAProxy, etc.):

```yaml
ingress:
  enabled: true
  className: nginx  # your IngressClass name
  host: dubby.example.com
  annotations: {}
```

TLS with cert-manager:

```yaml
ingress:
  enabled: true
  className: nginx
  host: dubby.example.com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: dubby-tls
      hosts:
        - dubby.example.com
```

Traefik-specific annotations:

```yaml
ingress:
  enabled: true
  className: traefik
  host: dubby.example.com
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
```

### No ingress controller

For testing or single-node clusters, you can skip ingress entirely and access Dubby via port-forwarding:

```sh
# Forward the server (API + streaming)
kubectl port-forward -n dubby svc/dubby-server 3000:3000

# Forward the web UI
kubectl port-forward -n dubby svc/dubby-web 8080:80
```

Then open http://localhost:8080 in your browser. The web UI will connect to the server at localhost:3000.

Alternatively, change the service types to NodePort or LoadBalancer in your values:

```yaml
server:
  service:
    type: NodePort
    port: 3000

web:
  service:
    type: NodePort
    port: 80
```
## Init containers

The server pod runs two init containers before starting:

- `fix-permissions` — ensures `/data` and `/cache` are owned by the app user (UID 1000)
- `migrate` — runs database migrations to keep the schema in sync

These run on every pod start, ensuring the database is always up to date after image upgrades.
## Health checks

The chart configures liveness and readiness probes on the server:

```yaml
livenessProbe:
  httpGet:
    path: /health/live
    port: http
  initialDelaySeconds: 10
  periodSeconds: 30

readinessProbe:
  httpGet:
    path: /health/ready
    port: http
  initialDelaySeconds: 5
  periodSeconds: 10
```

The readiness probe checks database connectivity, so the pod won't receive traffic until migrations are complete.
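If the server occasionally misses probe deadlines under heavy transcoding load, the probes can be loosened; an illustrative sketch (these numbers are assumptions, not chart defaults):

```yaml
livenessProbe:
  httpGet:
    path: /health/live
    port: http
  initialDelaySeconds: 30   # illustrative: more startup slack
  periodSeconds: 30
  failureThreshold: 5       # tolerate a few slow responses before a restart
```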
## Config overrides

Advanced server configuration can be provided via a ConfigMap:

```yaml
config:
  transcoding:
    defaultCrf: 23
    defaultPreset: fast
    hlsSegmentDuration: 4
  session:
    maxTranscodesPerUser: 2
  privacy:
    level: private
```

These values take precedence over database settings but are overridden by `DUBBY_*` environment variables.
## Scheduling

All deployments support `nodeSelector`, `tolerations`, and `affinity`. Set them globally or per component:

```yaml
# Global defaults (apply to server, worker, web, and valkey)
nodeSelector:
  kubernetes.io/arch: amd64
tolerations:
  - key: gpu
    operator: Exists
    effect: NoSchedule
```

```yaml
# Per-component override (takes precedence over global)
server:
  nodeSelector:
    node-role.kubernetes.io/media: ''
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpu.intel.com/device-id
                operator: Exists
```
## Annotations

Pod and service annotations can be set per component:

```yaml
server:
  podAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '3000'
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: dubby.example.com
```
## Updating

```sh
helm upgrade dubby oci://ghcr.io/dubbytv/dubby \
  --version 0.1.0-beta.7 \
  -n dubby \
  -f my-values.yaml
```

The migrate init container runs automatically on pod startup, so database schema updates are applied before the server starts accepting traffic.

To check your current chart version and roll back if needed:

```sh
# Current version
helm list -n dubby

# Rollback to a previous revision
helm history dubby -n dubby
helm rollback dubby <revision> -n dubby
```

See Updating for release channels, version pinning, and rollback.
## Uninstalling

```sh
helm uninstall dubby -n dubby
```

This removes all pods and services but does not delete persistent data. Your `/data` directory (database, images) and `/media` files remain intact.
## Troubleshooting

### Pod stuck in CrashLoopBackOff

Check the logs for the failing container:

```sh
kubectl logs -n dubby deployment/dubby-server --previous
```

Common causes:

| Symptom in logs | Fix |
|---|---|
| `EACCES` or permission errors on `/data` | The fix-permissions init container must run as root to fix ownership. Ensure your `securityContext` does not force the init container to run as a non-root user. |
| `missing secret` or `BETTER_AUTH_SECRET` empty | Create the `dubby-secrets` secret (see Secrets). |
| Migration error | Check the migrate init container logs: `kubectl logs -n dubby deployment/dubby-server -c migrate`. Usually a corrupt or locked database file. |
### Pod stuck in Pending

```sh
kubectl describe pod -n dubby -l app.kubernetes.io/component=server
```

Look at the Events section at the bottom:

| Event message | Fix |
|---|---|
| `Insufficient gpu.intel.com/i915` | The GPU device plugin isn't running, or the node has no Intel GPU. Check `kubectl get nodes -o json \| jq '.items[].status.capacity'`. |
| `Insufficient cpu` or `Insufficient memory` | Lower the resource requests in values, or add nodes. |
| `0/N nodes are available: persistentvolumeclaim not found` | The PVC doesn't exist. If using `existingClaim`, create the PVC first. If using `storageClass`, verify the storage class exists: `kubectl get sc`. |
| `0/N nodes are available: node(s) didn't match Pod's node selector` | Update `nodeSelector` to match your node labels. Check with `kubectl get nodes --show-labels`. |
### UI loads but API calls fail

The web UI loads from the web pod, but API calls go to the server pod. If the UI shows a blank screen or network errors:

- Verify the server is running: `kubectl logs -n dubby deployment/dubby-server --tail=5`
- Test the API directly: `kubectl exec -n dubby deployment/dubby-web -- wget -qO- http://dubby-server:3000/health/live`
- Check your ingress/gateway routes: `/api/*`, `/sse/*`, and `/health/*` must all route to the server service.

With nginx ingress, ensure path rules use Prefix matching (the chart does this by default). If you're using a custom ingress, a common mistake is routing only `/api` but missing `/sse` and `/health`.
### SSE or streaming disconnects

Server-Sent Events and HLS streaming require long-lived connections. If events stop arriving or playback cuts out after 30–60 seconds:

nginx ingress — increase proxy timeouts:

```yaml
ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: '86400'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '86400'
    nginx.ingress.kubernetes.io/proxy-buffering: 'off'
```

Traefik — disable response buffering:

```yaml
ingress:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: ''
```

Cloud load balancers (AWS ALB, GCP LB) often have a default idle timeout of 60 seconds. Increase it to at least 3600 seconds for streaming.
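For example, with the AWS Load Balancer Controller the ALB idle timeout can be raised through an Ingress annotation (a sketch; it assumes that controller is the one managing your Ingress):

```yaml
ingress:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600
```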
### Permission denied on volumes

If using hostPath, the directories must be writable by UID 1000 (the app user). The fix-permissions init container handles `/data` and `/cache`, but the host directories must exist:

```sh
# On the node
sudo mkdir -p /apps/dubby /mnt/cache/dubby
sudo chown -R 1000:1000 /apps/dubby /mnt/cache/dubby
```

If the init container itself fails with permission errors, your cluster may have a restrictive PodSecurityPolicy or Pod Security Standard that blocks `runAsUser: 0`. Either allow the init container to run as root, or pre-create the directories with correct ownership.
### Media not found after scanning

The path you configure in the Dubby UI as a library root must match the container mount path, not the host path. Media is always mounted at `/media` inside the container. If your host path is `/mnt/storage/media/movies`, add `/media/movies` as the library path in the UI.
### View logs

```sh
# Server logs (API + transcoding)
kubectl logs -n dubby deployment/dubby-server -f

# Worker logs (background jobs)
kubectl logs -n dubby deployment/dubby-worker -f

# Init container logs (migrations)
kubectl logs -n dubby deployment/dubby-server -c migrate
```