Building a Fast, Encrypted & Private Backup Pipeline with restic, rest‑server, Hetzner Storage Box & Tailscale VPN

A step‑by‑step guide to reproducing (and understanding) the exact setup I use to keep my Proxmox homelab and personal data safe.

1. Why this stack?

| Layer | What it does | Why I chose it |
|---|---|---|
| restic | CLI backup tool with client‑side encryption, deduplication & fast incremental snapshots | Small binary, no root needed, battle‑tested, easy restores |
| rest‑server | Lightweight HTTP backend that speaks restic’s REST protocol | Lower latency than SFTP/SSH when the repository lives on a remote CIFS/NFS mount (no double encryption, stream‑oriented) |
| Hetzner Storage Box | Cheap off‑site storage available over SMB/Samba, SFTP, WebDAV & more | Flat pricing, runs in the same DC as my (tiny) VPS, unlimited traffic |
| Tailscale | WireGuard‑based overlay network | End‑to‑end encryption, avoids public exposure & lets every client “see” the repo via an internal IP |
| Docker Compose | Deploy rest‑server & its CIFS mount reproducibly | One‑file infrastructure, easy upgrades & rollbacks |

The result is a push‑based model: every device (Proxmox node, NAS, laptop, …) runs restic and pushes its encrypted chunks to a central rest‑server that lives on my Hetzner VPS. The VPS itself merely forwards bytes to a CIFS share on the Storage Box; it never sees unencrypted data.


2. Provision your Storage Box & sub‑account

  1. Order a Storage Box (BX line is enough for backups).
  2. Inside the Hetzner Cloud console create a sub‑account with:
    • SMB/CIFS enabled
    • Strong, unique password

Note the UNC path; it looks like

//<username>.your-storagebox.de/<username>

Hetzner’s docs confirm the same syntax for CIFS mounting.
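
Once you have a Linux host available (the VPS from the next section is the natural choice), a quick manual mount is a sanity check on the credentials before they go into Docker; this assumes the cifs-utils package is installed and uses the same options as the Compose volume below:

mkdir -p /mnt/storagebox-test
mount -t cifs "//<username>.your-storagebox.de/<username>" /mnt/storagebox-test \
      -o username=<username>,password=<password>,vers=3.0
ls /mnt/storagebox-test && umount /mnt/storagebox-test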


3. Spin up a tiny VPS & join it to Tailscale

# 1 vCPU, 1 GB RAM is enough
hcloud server create --type cpx11 --image debian-12 --datacenter fsn1-dc14 \
                     --name backup-vps

curl -fsSL https://tailscale.com/install.sh | sh

sudo tailscale up --accept-routes --hostname restic-hub

Record the Tailscale IPv4 that the node receives (e.g. 100.x.y.z). Only this IP will be exposed from Docker.
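
You can print the address at any time with tailscale’s built‑in subcommand:

tailscale ip -4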


4. Deploy rest‑server with Docker Compose

version: "3.9"

volumes:
  data:
    driver: local
    driver_opts:
      type: cifs
      device: "//<user>.your-storagebox.de/<user>"
      o: "username=<user>,password=<pass>,vers=3.0,uid=1000,gid=1000"

services:
  restic-server:
    image: restic/rest-server:latest
    restart: always
    user: "1000:1000"
    environment:
      - DISABLE_AUTHENTICATION=1      # repo itself is already encrypted
      - OPTIONS=--log -               # log to stdout for easy journald scraping
    ports:
      - "<tailscale_ip>:8000:8000"    # listen only on the Tailscale interface
    volumes:
      - data:/data

What’s happening under the hood?

  • The CIFS volume is mounted by Docker itself, so the host doesn’t need permanent fstab tweaks.
  • DISABLE_AUTHENTICATION=1 means there is no HTTP basic‑auth; access control is enforced by knowing the restic repository password and by being inside the Tailscale network. If you prefer user accounts, add a .htpasswd file and drop the env‑var.
  • The OPTIONS flag is a neat way to enable Prometheus metrics later (--prometheus).
  • Binding to the Tailscale IP instead of 0.0.0.0 prevents accidental exposure to the public internet. Tailscale’s own Docker docs recommend the same pattern; a quick check follows below.
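
A quick way to confirm the listener really is bound only to the Tailscale address (ss ships with iproute2; the grep is just a convenience):

ss -tlnp | grep 8000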

Bring it up:

docker compose up -d

At this point a call to

curl http://<tailscale_ip>:8000/

returns 404 page not found. That is expected: the server is reachable, but no restic repository exists there yet; time to initialize one from any client.
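
Initializing it from any machine on the tailnet is a single command (restic prompts for the repository password, or reads RESTIC_PASSWORD_FILE if set):

restic -r rest:http://<tailscale_ip>:8000/ init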


5. Install restic on your Proxmox host

apt update && apt install -y restic
echo "<super-secret-password>" > /etc/restic-key
chmod 600 /etc/restic-key
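
If you would rather generate the password than invent one, pwgen (the same tool referenced in the checklist at the end) does the job; a quick sketch:

apt install -y pwgen
pwgen -s 32 1 > /etc/restic-key
chmod 600 /etc/restic-key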

6. The backup script explained

Below is the full Bash helper I use. It does three jobs:

  1. Initialises the repo if it’s the first run.
  2. Runs restic backup … with a chosen tag.
  3. Enforces a retention policy (7 daily, 4 weekly, 12 monthly, 2 yearly) via restic forget --prune.

It also ships Telegram alerts on success/failure so I don’t have to tail logs at 3 am.

#!/usr/bin/env bash

# backup.sh - Backup script using restic

# Exit on errors, unset variables, and failed pipeline commands
set -euo pipefail

# Repository
export RESTIC_REPOSITORY=rest:http://<tailscale_ip>:8000

# Password file
export RESTIC_PASSWORD_FILE=/etc/restic-key

# Retention policy (tweak as you see fit)
KEEP_DAILY=7
KEEP_WEEKLY=4
KEEP_MONTHLY=12
KEEP_YEARLY=2

# Telegram configuration
TELEGRAM_CHAT_ID=<telegram_chat_id>
TELEGRAM_BOT_TOKEN=<telegram_bot_token>

function telegram_notify() {
  local message="$1"

  curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
       -d chat_id="${TELEGRAM_CHAT_ID}" \
       -d parse_mode="Markdown" \
       --data-urlencode text="${message}" >/dev/null 2>&1
}

function initialize_repo() {
  # Run the check inside the if-condition so `set -e` doesn't abort the
  # script when the repository doesn't exist yet.
  if restic snapshots > /dev/null 2>&1; then
    echo "Restic repository already exists at ${RESTIC_REPOSITORY}."
  else
    echo "No restic repository found at ${RESTIC_REPOSITORY}. Initializing..."
    restic init

    telegram_notify "ℹ️  Initialized new restic repo at \`${RESTIC_REPOSITORY}\` on host: \`$(hostname)\`"
  fi
}

function backup() {
  local source_dir="${1}"
  local tag="${2}"

  echo "Starting backup of ${source_dir} (tag: ${tag})..."

  restic backup "${source_dir}" --tag "${tag}"

  echo "Backup task completed successfully."
  telegram_notify "✅ Restic backup of ${source_dir} (tag ${tag}) *completed successfully* on host: \`$(hostname)\`"
}

function cleanup() {
  echo "Applying retention policy (daily: ${KEEP_DAILY}, weekly: ${KEEP_WEEKLY}, monthly: ${KEEP_MONTHLY}, yearly: ${KEEP_YEARLY})..."

  restic forget \
    --keep-daily "${KEEP_DAILY}" \
    --keep-weekly "${KEEP_WEEKLY}" \
    --keep-monthly "${KEEP_MONTHLY}" \
    --keep-yearly "${KEEP_YEARLY}" \
    --prune

  echo "Cleanup completed."

  telegram_notify "🧹 Cleanup completed on host: \`$(hostname)\`"
}

function error_handler() {
  # If an error occurs anywhere in the script, we notify via Telegram,
  # including the failing command passed in by the trap below
  telegram_notify "❌ Restic backup *FAILED* on host: \`$(hostname)\`, failed command: \`${1}\`"

  exit 1
}

trap 'error_handler "${BASH_COMMAND}"' ERR

## Main Script Execution
# based on arguments, determine whether to backup or cleanup

if [ "$#" -eq 0 ]; then
  echo "No arguments provided. Please specify 'backup' or 'cleanup'."

  exit 1
fi

# Check if the first argument is 'backup' or 'cleanup'
if [ "$1" == "backup" ]; then
  # Check if the second argument is provided
  if [ "$#" -ne 3 ]; then
    echo "Please provide a source directory and a tag for backup."

    exit 1
  fi

  SOURCE_DIR="$2"
  TAG="$3"

  # Initialize the repository
  initialize_repo

  # Perform backup
  backup "${SOURCE_DIR}" "${TAG}"

elif [ "$1" == "cleanup" ]; then
  # Perform cleanup
  cleanup

else
  echo "Invalid argument. Please specify 'backup' or 'cleanup'."
  exit 1
fi
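
Invocation mirrors the two modes; the directory and tag below are the same ones used in the cron jobs in the next section.

/usr/local/bin/backup.sh backup /tank/photos photos   # back up one directory under a tag
/usr/local/bin/backup.sh cleanup                      # apply the retention policy & prune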

7. Scheduling with crontab

Because the script bundles both the backup and retention logic, the crontab ends up extremely simple.
Here’s a practical example (sudo crontab -e on the Proxmox host):

# ┌─ minute (0‑59)
# │ ┌─ hour   (0‑23)
# │ │ ┌─ day‑of‑month (1‑31)
# │ │ │ ┌─ month      (1‑12)
# │ │ │ │ ┌─ day‑of‑week (0‑7)  (0|7 = Sunday)
# │ │ │ │ │
# │ │ │ │ │   command
# │ │ │ │ │
  15  3  *  *  *   /usr/local/bin/backup.sh backup /tank/cloud           cloud      >>/var/log/restic-cloud.log 2>&1
  45  3  *  *  *   /usr/local/bin/backup.sh backup /tank/photos          photos     >>/var/log/restic-photos.log 2>&1
  00  4  *  *  *   /usr/local/bin/backup.sh backup /tank/proxmox-backups proxmox    >>/var/log/restic-pbs.log    2>&1

# Weekly prune every Sunday at 05:00
  00  5  *  *  0   /usr/local/bin/backup.sh cleanup                                  >>/var/log/restic-cleanup.log 2>&1

Why cron, not systemd?

  • ✅ Fewer moving parts; everyone understands crontab.
  • ✅ No unit files to manage; one‑liner edits are enough.
  • ✅ Logs are redirected to plain files and Telegram, so I still get alerts if something fails.

(If you prefer systemd, just swap the cron entries for timers; the script itself doesn’t care. A minimal sketch follows below.)
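
For reference, here is a minimal sketch of an equivalent systemd unit pair; the file names, paths and schedule are illustrative and mirror the first cron entry above.

# /etc/systemd/system/restic-cloud.service
[Unit]
Description=restic backup of /tank/cloud

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh backup /tank/cloud cloud

# /etc/systemd/system/restic-cloud.timer
[Unit]
Description=Nightly restic backup of /tank/cloud

[Timer]
OnCalendar=*-*-* 03:15:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now restic-cloud.timer.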


8. Restoration test (don’t skip this!)

restic -r rest:http://<tailscale_ip>:8000 \
       -p /etc/restic-key \
       restore latest --target /tmp/restore-test

Verify the checksum or boot a VM from the restored vzdump. Backups you never test are merely archives.
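
Restores aside, restic can also audit the repository itself; checking a small random sample of the pack data on a schedule is a cheap way to catch silent corruption (the 5% subset is an arbitrary choice):

restic -r rest:http://<tailscale_ip>:8000 \
       -p /etc/restic-key \
       check --read-data-subset=5%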


9. Hardening & extras

  • Enable HTTP auth on rest‑server (htpasswd -B -c /srv/data/.htpasswd <user>; drop -c when adding further users); see the client‑side example after this list.
  • Run rest‑server in append‑only mode (add --append-only to OPTIONS) if you worry about ransomware deleting the repo; clients can still write new snapshots but cannot remove existing ones.
  • Add --prometheus to the OPTIONS value to export metrics and scrape them with Prometheus + Grafana.
  • Patch automatically (Watchtower, dist‑upgrade).
  • Rotate the restic password periodically (restic key passwd).
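
With HTTP auth enabled, restic clients simply carry the credentials in the repository URL (restic’s standard rest: backend syntax; the placeholders are illustrative):

export RESTIC_REPOSITORY=rest:http://<user>:<password>@<tailscale_ip>:8000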

10. TL;DR checklist

  1. 🔐 Generate a strong repository password (pwgen -s 32).
  2. 🗄️ Order Storage Box → enable SMB → create sub‑account.
  3. ☁️ Boot a 1 vCPU VPS in the same DC → join Tailscale.
  4. 🐳 Deploy the docker‑compose.yml above.
  5. 💻 Install restic on every client and drop the Bash script.
  6. 📅 Schedule with cron (hourly, nightly, weekly).
  7. 🧪 Restore a random file every month.
  8. 📈 Add Telegram alerts to get eyes on the process.

Congratulations—you now have an encrypted, versioned, off‑site backup system that costs a few euros a month and scales from single‑board computers to entire datastores.

Happy backing up!
