Zerobyte: A Web UI for restic That Replaces Your Backup Scripts

Zerobyte wraps the restic backup engine in a modern web dashboard—scheduled jobs, visual retention policies, and point-and-click restores. Here’s how to set it up and how it compares to a CLI-only pipeline.

Cover image: server room with storage drives representing backup infrastructure (Photo by Jandira Sonnendeck / Unsplash)

A while back I wrote about building an encrypted backup pipeline with restic, rest‑server, Hetzner Storage Box, and Tailscale. That setup has served me well—it’s reliable, fast, and entirely under my control. But it comes with overhead: Bash scripts, cron jobs, Telegram notification glue, and enough moving parts that onboarding a new machine means copying files around and hoping you remembered every environment variable.

I recently discovered Zerobyte, an open-source project that wraps restic in a clean web UI. It keeps everything I care about—client-side encryption, deduplication, flexible backends—and replaces the scripts and cron with a visual dashboard. Here’s how it works and why I think it’s worth your attention.


What Is Zerobyte?

Zerobyte is a self-hosted backup automation tool built on top of restic. It runs as a single Docker container and gives you a web interface to:

  • Define volumes — The source directories you want to back up (local paths, NFS, SMB, WebDAV, SFTP).
  • Create repositories — Encrypted backup destinations: local disk, S3-compatible storage (AWS, MinIO, Wasabi), Google Cloud, Azure, or 40+ cloud providers via rclone.
  • Schedule backup jobs — Cron-like scheduling with visual configuration, include/exclude patterns, and retention policies.
  • Monitor and restore — Browse snapshots, check job history, and restore files—all from the browser.

The key point: restic does all the heavy lifting underneath. Zerobyte doesn’t reinvent encryption or deduplication. It’s a management layer, and a good one.


Why Not Just Keep the CLI Pipeline?

My existing setup works. So why consider Zerobyte?

|  | CLI Pipeline | Zerobyte |
|---|---|---|
| Setup per host | Copy script, set env vars, add cron | Point UI at source directory |
| Scheduling | Crontab entries | Built-in visual scheduler |
| Monitoring | Telegram bot + log files | Web dashboard with job history |
| Restoring files | SSH in, run restic restore | Point-and-click in browser |
| Adding a new backend | Edit script, test manually | Select from dropdown |
| Multi-host overview | Check each host individually | Single dashboard |
| Flexibility | Unlimited (it's a shell script) | Covers ~90% of use cases |

The CLI approach gives you maximum control. If you enjoy writing shell scripts and want to customize every detail, it’s the right choice. But if you find yourself managing backups for multiple machines and wanting a single pane of glass, Zerobyte removes a lot of friction.


Deploying Zerobyte

Docker Compose

```yaml
services:
  zerobyte:
    image: ghcr.io/nicotsx/zerobyte:latest
    restart: unless-stopped
    ports:
      - "4096:4096"
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse:/dev/fuse
    environment:
      - BASE_URL=https://zerobyte.example.com
      - APP_SECRET=<run: openssl rand -hex 32>
      - TZ=Europe/Paris
    volumes:
      - zerobyte-data:/var/lib/zerobyte
      # Mount source directories read-only:
      - /tank/photos:/mnt/photos:ro
      - /tank/cloud:/mnt/cloud:ro
      - /tank/proxmox-backups:/mnt/proxmox:ro

volumes:
  zerobyte-data:
```

A few notes:

  • SYS_ADMIN and /dev/fuse are required for mounting remote volumes (NFS, SMB) inside the container. If you’re only backing up local directories, you can drop both.
  • APP_SECRET must be at least 32 characters. Generate it with openssl rand -hex 32.
  • BASE_URL determines whether cookies use the Secure flag. Set it to your actual HTTPS URL.
  • Keep /var/lib/zerobyte on local storage, not a network share. This is where Zerobyte stores its database and configuration.
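The APP_SECRET requirement is easy to satisfy and verify from the shell: openssl rand -hex 32 emits 64 hex characters, comfortably above the 32-character minimum.

```shell
# Generate the secret (32 random bytes = 64 hex characters)
APP_SECRET=$(openssl rand -hex 32)

# Sanity-check the length before pasting it into the compose file
if [ "${#APP_SECRET}" -ge 32 ]; then
  echo "secret ok (${#APP_SECRET} chars)"
fi
# → secret ok (64 chars)
```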

If you’re already running Traefik (as I described in my Traeflare post), just add the appropriate labels and drop the ports section.
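For the Traefik route, a label set along these lines is what I'd expect to work — the router name, hostname, entrypoint, and certresolver below are assumptions you'll need to adapt to your own Traefik configuration:

```yaml
# Sketch only: router name, hostname, entrypoint, and certresolver
# are placeholders — adapt them to your Traefik setup.
services:
  zerobyte:
    # ...same image, environment, and volumes as above, minus "ports"...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.zerobyte.rule=Host(`zerobyte.example.com`)"
      - "traefik.http.routers.zerobyte.entrypoints=websecure"
      - "traefik.http.routers.zerobyte.tls.certresolver=letsencrypt"
      - "traefik.http.services.zerobyte.loadbalancer.server.port=4096"
```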


Setting Up Your First Backup

Once the container is running, open the web UI and:

  1. Create an admin account on first launch.
  2. Add a Volume — Point to a mounted source directory (e.g., /mnt/photos). This is the data you want to protect.
  3. Create a Repository — Choose your storage backend. For a setup similar to my existing pipeline, use an S3-compatible backend or a rest-server URL. Set a strong encryption password—this is your restic repository password.
  4. Configure a Backup Job — Link the volume to the repository. Set your schedule (e.g., daily at 03:00) and retention policy (7 daily, 4 weekly, 12 monthly, 2 yearly—same as my CLI setup).
  5. Run it — Trigger the first backup manually to verify everything works. Watch the progress in the dashboard.
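Under the hood, a retention policy like the one in step 4 corresponds to restic's forget flags. A quick sketch of the mapping (how and when Zerobyte actually invokes this is its own implementation detail):

```shell
# The UI retention policy from step 4, expressed as restic "forget" flags
KEEP_DAILY=7; KEEP_WEEKLY=4; KEEP_MONTHLY=12; KEEP_YEARLY=2

FORGET_CMD="restic forget --keep-daily $KEEP_DAILY --keep-weekly $KEEP_WEEKLY --keep-monthly $KEEP_MONTHLY --keep-yearly $KEEP_YEARLY"

echo "$FORGET_CMD"
```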

That’s it. No scripts, no cron, no Telegram bot setup. The dashboard shows you job status, last run time, snapshot count, and storage usage.


Using Zerobyte With Your Existing Restic Repos

This is the part that sold me: Zerobyte uses restic under the hood, so you can point it at repositories you’ve already created. If you followed my previous guide and have a rest-server running on your Tailscale network, just create a new repository in Zerobyte with the REST backend URL and your existing password. Your snapshot history, deduplication data—everything is preserved.
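To sanity-check this before switching anything off, you can point the restic CLI at the same repository and confirm the snapshot history is visible from both sides. The hostname, port, and repo path below are placeholders:

```shell
# Placeholders — substitute your own rest-server host, port, and repo path
export RESTIC_REPOSITORY="rest:http://backup-host:8000/myrepo"
export RESTIC_PASSWORD_FILE="$HOME/.restic-password"

# With restic installed, this lists the snapshots your CLI pipeline created;
# the same list should appear in Zerobyte's snapshot browser:
#   restic snapshots --compact
echo "$RESTIC_REPOSITORY"
```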

This means migration isn’t a rip-and-replace. You can run both side by side, verify Zerobyte is working correctly, then retire the cron jobs at your own pace.


Backup Destinations Worth Considering

Zerobyte supports more backends than a typical CLI restic setup out of the box:

  • Hetzner Storage Box — Via SMB/SFTP volume or through a rest-server (my current approach).
  • S3-compatible — AWS S3, MinIO, Wasabi, Backblaze B2. Great if you want object storage pricing.
  • Google Cloud Storage / Azure Blob — Native support, no rclone needed.
  • rclone remotes — Google Drive, Dropbox, OneDrive, and 40+ other providers. Useful for free-tier backup destinations.
  • Local disk — An attached USB drive or a second NAS. Simple and fast for a local recovery copy.
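For reference, these backends map onto restic's repository URL schemes — what you'd enter (or what the UI assembles for you) when creating a repository. Bucket, host, and remote names here are placeholders:

```shell
# Restic repository URL formats for a few of the backends above
# (bucket, host, account, and remote names are placeholders)
S3_REPO="s3:https://s3.us-east-1.amazonaws.com/my-backup-bucket"
REST_REPO="rest:http://backup-host:8000/myrepo"
SFTP_REPO="sftp:u123456@u123456.your-storagebox.de:/backups"
RCLONE_REPO="rclone:gdrive:backups"

printf '%s\n' "$S3_REPO" "$REST_REPO" "$SFTP_REPO" "$RCLONE_REPO"
```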

Things to Keep in Mind

  • Zerobyte is still pre-1.0 (currently v0.x). The developer is actively collecting feedback and expects breaking changes between versions. It’s solid for homelab use, but I wouldn’t bet a production environment on it just yet.
  • Don’t expose the UI to the internet without authentication. Put it behind Traefik with Pocket ID, Authelia, or at minimum HTTP basic auth. Or just keep it on your Tailscale network.
  • Test your restores. This advice hasn’t changed since my last post: backups you never test are just archives. Zerobyte makes restoring easier, but you still need to verify the output.
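The restore check doesn't need to be elaborate: restore to a scratch directory, then compare against the source. Sketched below with a plain copy standing in for restic restore --target:

```shell
# Simulated restore verification: the cp stands in for
# "restic restore latest --target $RESTORE_DIR"
SRC_DIR=$(mktemp -d)
RESTORE_DIR=$(mktemp -d)

echo "important data" > "$SRC_DIR/file.txt"
cp "$SRC_DIR/file.txt" "$RESTORE_DIR/file.txt"

# diff -r exits 0 only if the two trees match exactly
if diff -r "$SRC_DIR" "$RESTORE_DIR" >/dev/null; then
  echo "restore verified"
fi
# → restore verified
```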

Wrapping Up

My CLI-based restic pipeline isn’t going anywhere—it’s battle-tested and I trust it. But Zerobyte is exactly the kind of tool I wish existed when I first set it up. It takes the same engine, wraps it in a UI that makes scheduling and monitoring trivial, and lowers the barrier for anyone who doesn’t want to maintain shell scripts.

If you’re starting fresh with homelab backups, Zerobyte is the easier on-ramp. If you already have a restic setup, it’s a smooth upgrade. Either way, the fundamentals haven’t changed: encrypt everything, store it off-site, and test your restores.
