$ cd ~/posts/home-server-proxmox-plex-arr-stack

Home Server with Proxmox, Plex, and the *arr Stack

$ date → 1 May 2026

home-server · proxmox · docker · plex · sonarr · radarr · nzbget · nzbhydra · selfhosted

A self-hosted media server is one of those projects that pays back its setup cost every single evening. Once Plex is wired to Sonarr and Radarr, and Sonarr/Radarr are wired to NZBHydra2 and NZBGet, you stop hunting for content. You tell the system what you want and it shows up on the TV.

This post documents the exact layout I run at home: Proxmox on the metal, a Debian VM hosting Docker, and a single Compose stack for the media services. Everything below is what actually runs, lifted from my home-server repo, not a generic tutorial.

Use it as a template for your own libraries.

How the pieces fit

There are two views worth having in your head: the control plane (who talks to whom to get a file) and the data plane (where the file actually lives on disk).

Control plane

   ┌──────────────────────────────────────────────┐
   │                  Proxmox VE                  │
   │  (bare metal, ZFS pool, SAS SSDs via HBA)    │
   └───────────────────┬──────────────────────────┘

             ┌─────────▼─────────┐
             │   Debian VM       │
             │   docker engine   │
             └─────────┬─────────┘

            ┌──────────┴──────────┐
            ▼                     ▼
        ┌────────┐            ┌────────┐
        │ Sonarr │            │ Radarr │
        └────┬───┘            └────┬───┘
             │                     │
             └──────────┬──────────┘
                        ▼ search
                  ┌──────────┐
                  │NZBHydra2 │  aggregates indexers
                  └────┬─────┘
                       ▼ enqueue NZB
                   ┌───────┐
                   │NZBGet │  downloads from Usenet
                   └───────┘

Sonarr/Radarr decide what to grab, NZBHydra2 aggregates indexers and picks where to grab it from, NZBGet does the actual download. Plex stays out of this loop entirely; it only sees the finished files.

Data plane

Every container mounts the same vol-media-data Docker volume at /data. That single shared mount is what makes the pipeline cheap: NZBGet writes the file once, and Sonarr/Radarr can hardlink it into the media tree instead of copying. Plex then reads from that same tree.

              ┌──────────────────────────────────────────┐
              │   shared volume:  vol-media-data → /data │
              │                                          │
              │   /data                                  │
              │   ├── downloads/                         │
              │   │     └── nzbget/   ← NZBGet writes    │
              │   │                                      │
              │   └── media/                             │
              │         ├── tv/      ← Sonarr hardlinks  │
              │         └── movies/  ← Radarr hardlinks  │
              │                      ↑ Plex reads here   │
              └──────────────────────────────────────────┘
                       ▲       ▲       ▲        ▲
                       │       │       │        │
                       │       │       │        │
                   ┌───┴───┐ ┌─┴────┐ ┌┴─────┐ ┌┴──────┐
                   │NZBGet │ │Sonarr│ │Radarr│ │ Plex  │
                   │ (rw)  │ │ (rw) │ │ (rw) │ │  (r)  │
                   └───────┘ └──────┘ └──────┘ └───────┘

NZBHydra2 is the one service that doesn’t touch /data; it only brokers searches between the *arr apps and your indexers.

Two things this layout buys you:

  • Hardlinks instead of copies. A 30 GB movie doesn’t get duplicated when it moves from downloads/ to media/movies/; both paths point at the same inode. Disk usage stays flat, the move is instant.
  • One mount path everywhere. Sonarr’s “/data/downloads” is the same physical location as NZBGet’s “/data/downloads”. If the paths differed across containers, hardlinking would silently fall back to copying, and you’d burn disk for no reason.
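The hardlink behaviour is easy to verify outside the stack. A minimal sketch in plain shell, with a scratch directory standing in for /data:

```shell
# Stand-in for /data: a scratch directory with downloads/ and media/ trees.
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/media/movies"
echo "pretend this is 30 GB" > "$tmp/downloads/movie.mkv"

# Hardlink into the media tree -- what Sonarr/Radarr do on import.
ln "$tmp/downloads/movie.mkv" "$tmp/media/movies/movie.mkv"

# Both paths report the same inode: one copy of the data, two names.
stat -c %i "$tmp/downloads/movie.mkv"
stat -c %i "$tmp/media/movies/movie.mkv"
```

Deleting the downloads/ copy afterwards leaves media/movies/ intact; the blocks are only freed once the last link is gone.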

Part 1: Proxmox host

Install

Grab the Proxmox VE ISO, flash it, install it. Pick ZFS on install if your hardware supports it. Snapshots and integrity checks are worth the RAM.

After first boot, remove the enterprise repo nag:

echo "deb http://download.proxmox.com/debian/pve $(lsb_release -cs) pve-no-subscription" \
  | sudo tee /etc/apt/sources.list.d/pve-no-subscription.list
sudo sed -i 's/^/#/' /etc/apt/sources.list.d/pve-enterprise.list
sudo apt update && sudo apt -y dist-upgrade

VM hardware

If you’re on enterprise SAS SSDs through an LSI HBA (my setup), the defaults leave performance on the table. Use:

  • SCSI Controller: VirtIO SCSI single, dedicated queues per disk.
  • Cache: Write back, safe with PLP-equipped enterprise drives.
  • Discard: Enabled, lets ZFS reclaim space.
  • IO Thread: Enabled, moves disk I/O off the main vCPU.

For ZFS hygiene on the Proxmox host:

# weekly fstrim across all mounted filesystems
sudo systemctl enable --now fstrim.timer

# weekly deep trim of the ZFS pool
( sudo crontab -l 2>/dev/null; echo "0 1 * * 0 /sbin/zpool trim rpool" ) | sudo crontab -

Check trim status with zpool status -t and watch per-vdev latency with zpool iostat -l 5 (-r shows request-size histograms, not latency).

Storage layout

I keep media on a separate large dataset and mount it into the VM via virtio-blk. If you’d rather grow space later without rebooting the VM, put the media volume on LVM inside the guest. See my LVM guide for the mechanics.

Part 2: Debian guest VM

Inside the VM, install Debian (stable, minimal). Configure a static IP. DHCP for a server you’ll port-forward to is a footgun.
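For reference, a minimal static configuration in Debian's ifupdown style. The interface name and addresses below are examples; check yours with ip a before copying anything:

```
# /etc/network/interfaces.d/lan -- example values, adjust to your network
auto ens18
iface ens18 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
```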

Then install Docker the official way, not the distro package, which is always months behind:

# Docker's convenience script; the repo-based install is documented at
# https://docs.docker.com/engine/install/debian/
curl -fsSL https://get.docker.com | sudo sh

# post-install: run docker without sudo
sudo usermod -aG docker $USER
newgrp docker   # applies the new group in this shell; log out and back in elsewhere

Verify:

docker run --rm hello-world

Part 3: The media stack

One Compose file runs the whole thing. The pattern is: a single vol-media-data volume mounted into every container at /data, so Sonarr/Radarr can hardlink (not copy) finished downloads from /data/downloads to /data/media. Identical /data paths in every container are what make those hardlinks possible.

Create the directory structure inside the volume:

/data
├── downloads/
│   ├── nzbget/
│   │   ├── intermediate/
│   │   └── completed/
│   └── torrents/        # if you ever add one
└── media/
    ├── tv/
    └── movies/
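A quick way to lay that tree down, sketched in shell. DATA_ROOT is a placeholder for wherever the volume lives on the host (docker volume inspect vol-media-data shows the real path):

```shell
# DATA_ROOT is an assumption -- point it at the host path backing vol-media-data.
DATA_ROOT="${DATA_ROOT:-$(mktemp -d)}"

mkdir -p "$DATA_ROOT/downloads/nzbget/intermediate" \
         "$DATA_ROOT/downloads/nzbget/completed" \
         "$DATA_ROOT/downloads/torrents" \
         "$DATA_ROOT/media/tv" \
         "$DATA_ROOT/media/movies"
```

Whoever you set as PUID/PGID in the compose file should own this tree, so chown it accordingly.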

docker-compose.yml

This is the entertainment stack I run, trimmed to the services this post covers:

x-common-env: &common-env
  PUID: ${PUID}
  PGID: ${PGID}
  TZ: ${TZ}

services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    network_mode: host
    environment:
      <<: *common-env
      VERSION: docker
      PLEX_CLAIM: ${PLEX_CLAIM}
    volumes:
      - vol-plex-data:/config
      - vol-media-data:/data
      - ${PLEX_TRANSCODE_PATH:-/transcode}:/transcode
    restart: unless-stopped

  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    ports:
      - 127.0.0.1:8989:8989
    environment:
      <<: *common-env
    volumes:
      - vol-sonarr-data:/config
      - vol-media-data:/data
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    ports:
      - 127.0.0.1:7878:7878
    environment:
      <<: *common-env
    volumes:
      - vol-radarr-data:/config
      - vol-media-data:/data
    restart: unless-stopped

  nzbget:
    image: lscr.io/linuxserver/nzbget:latest
    container_name: nzbget
    ports:
      - 127.0.0.1:6789:6789
    environment:
      <<: *common-env
      NZBGET_USER: ${NZBGET_USER}
      NZBGET_PASS: ${NZBGET_PASS}
    volumes:
      - vol-nzbget-data:/config
      - vol-media-data:/data
    restart: unless-stopped

  nzbhydra2:
    image: lscr.io/linuxserver/nzbhydra2:latest
    container_name: nzbhydra2
    ports:
      - 127.0.0.1:5076:5076
    environment:
      <<: *common-env
    volumes:
      - vol-nzbhydra-data:/config
      - vol-media-data:/data
    restart: unless-stopped

volumes:
  vol-plex-data:     { name: vol-plex-data }
  vol-sonarr-data:   { name: vol-sonarr-data }
  vol-radarr-data:   { name: vol-radarr-data }
  vol-nzbget-data:   { name: vol-nzbget-data }
  vol-nzbhydra-data: { name: vol-nzbhydra-data }
  vol-media-data:    { name: vol-media-data }

A few design notes that aren’t obvious from the YAML:

  • Plex runs in host mode because DLNA discovery and the GDM protocol break behind a docker bridge. Everything else is fine on the default network.
  • Web UIs bind to 127.0.0.1. Nothing is exposed to the LAN directly. External access goes through a reverse proxy (Cloudflare Tunnel below).
  • PUID / PGID must match the user that owns /data on the host. Use id to check. Mismatched UIDs are the #1 reason hardlinks “don’t work”: the container can’t write to the destination.
  • PLEX_CLAIM is a one-shot token from plex.tv/claim. It’s only used on first boot to bind the server to your account.

.env next to the compose file:

PUID=1000
PGID=1000
TZ=Europe/Zurich
PLEX_CLAIM=claim-xxxxxxxxxxxxxx
PLEX_TRANSCODE_PATH=/transcode
NZBGET_USER=admin
NZBGET_PASS=change-me

Bring it up:

docker compose up -d
docker compose ps

Part 4: Wiring the apps together

Order matters. Configure bottom-up: indexers → downloader → *arr apps → Plex.

NZBHydra2 (indexer aggregator)

Open http://localhost:5076.

  1. Indexers → Add indexer. Add each Usenet indexer you subscribe to (e.g. NZBgeek, NZBPlanet, DrunkenSlug). NZBHydra normalizes them into one Newznab-compatible API.
  2. Downloaders → Add. Type NZBGet, host nzbget, port 6789, your credentials.
  3. Search. Run a test query to confirm at least one indexer returns results.

You now have one URL and one API key that fronts every indexer. Sonarr and Radarr only need to know about NZBHydra.

NZBGet (downloader)

Open http://localhost:6789. Default login is nzbget / tegbzn6789; change it to whatever you set in .env.

  1. Settings → News-Servers. Add your Usenet provider (host, port 563 with SSL, your credentials). Set Connections to whatever your plan allows, usually 20–50.
  2. Settings → Paths. MainDir should be /data/downloads/nzbget. DestDir resolves relative to that, leave defaults.
  3. Settings → Categories. Create tv and movies. These map to subfolders Sonarr and Radarr will watch.

Save and reload.

Sonarr (TV)

Open http://localhost:8989.

  1. Settings → Media Management. Enable Use Hardlinks instead of Copy. Set the root folder to /data/media/tv.
  2. Settings → Download Clients → Add → NZBGet. Host nzbget, port 6789, credentials from .env, category tv.
  3. Settings → Indexers → Add → Newznab. Point at NZBHydra2: URL http://nzbhydra2:5076, API key from NZBHydra. Categories: 5000–5999 (TV).
  4. Series → Add New. Pick a show. If everything is wired, Sonarr will search NZBHydra, send the NZB to NZBGet, and on completion hardlink the file into /data/media/tv.

Radarr (movies)

Identical to Sonarr, with three differences:

  • Root folder: /data/media/movies.
  • Download client category: movies.
  • Indexer categories: 2000–2999 (movies).

Plex (player)

Open http://localhost:32400/web.

  1. Add Library → TV Shows → /data/media/tv.
  2. Add Library → Movies → /data/media/movies.
  3. Under each library’s Advanced settings, switch the agent to Plex Movie / Plex TV Series (the modern agents, not the legacy ones).

Once Sonarr/Radarr drop a file, Plex picks it up on its next scan, usually within a minute or two.

Part 5: Exposing it safely

Don’t port-forward Plex/Sonarr to the open internet. Two options I’d recommend, in order of paranoia:

Cloudflare Tunnel + Access: free, no inbound ports, and you get an SSO gate in front of every UI. Steps:

  1. Create a tunnel at the Cloudflare Zero Trust dashboard, install cloudflared on the VM.
  2. For each service, add a published hostname pointing at the loopback port: sonarr.example.com → http://localhost:8989.
  3. Under Access → Applications, wrap each hostname in a policy (e.g. “email matches you@example.com”). Now even if someone discovers the URL, Cloudflare blocks them at the edge.
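If you prefer running the tunnel from a local config file instead of the dashboard, the equivalent ingress rules look roughly like this (tunnel ID and hostnames below are placeholders):

```yaml
# /etc/cloudflared/config.yml -- sketch; IDs and hostnames are examples
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: sonarr.example.com
    service: http://localhost:8989
  - hostname: radarr.example.com
    service: http://localhost:7878
  - service: http_status:404   # catch-all: refuse anything unmatched
```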

WireGuard VPN: also free, slightly more setup, but gives you full LAN access from your devices. The linuxserver/wireguard image is a one-liner to add to the same Compose file.
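A sketch of that service block, using the image's documented environment variables (the SERVERURL and PEERS values are examples, and vol-wireguard-data would need adding to the top-level volumes: section):

```yaml
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
    environment:
      <<: *common-env
      SERVERURL: vpn.example.com   # example; your public hostname or IP
      PEERS: 3                     # one generated config per device
    ports:
      - 51820:51820/udp
    volumes:
      - vol-wireguard-data:/config
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```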

Plex is the exception: it has its own auth and relay, so you can leave it on network_mode: host and let plex.tv proxy remote connections without exposing anything.

Part 6: Maintenance

Things that bite you six months later if you skip them now:

  • Watchtower for image updates, but pin or pre-flight first; I’ve had Sonarr migrations bite during automated rolls. Start with email notifications only, no auto-apply.
  • Backups of /config volumes. Sonarr/Radarr databases are small and brittle. docker run --rm -v vol-sonarr-data:/data -v $PWD:/backup alpine tar czf /backup/sonarr.tgz /data is enough. Schedule it.
  • Disk monitoring. Media drives fill faster than you expect. Alert at 85%, panic at 95%. A 5-line script with df and an SMTP call gets the job done.
  • Plex transcode dir on tmpfs if you have spare RAM, saves your SSDs from the constant write churn.
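The disk check really can stay that small. A sketch; the notification step is left to you (pipe non-empty output to msmtp, sendmail, ntfy, whatever you already run):

```shell
# Report any filesystem at or above a usage threshold (percent).
# Pipe non-empty output to your notifier of choice.
disk_alert() {
  df -P -x tmpfs -x devtmpfs | awk -v limit="$1" 'NR > 1 {
    use = $5
    sub("%", "", use)
    if (use + 0 >= limit)
      printf "WARN: %s at %s%% (mounted on %s)\n", $1, use, $6
  }'
}

disk_alert 85
```

Cron it hourly; silence means healthy.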

Closing thoughts

The whole stack runs on a single mid-spec VM with 4 vCPUs and 8 GB of RAM, and idles around 5% CPU. The boring part is the wiring: hardlinks instead of copies, loopback-only ports, one indexer aggregator instead of N, one Compose file you can rebuild from scratch in five minutes.

Once it’s running, the only thing you’ll touch is Series → Add New.
