From Single-Box Dreams to Multi-Node Reality: A Homelab Rebuild Story


Like most homelab projects, this one started simple: take an old Acer GX-785 desktop, slap Proxmox on it, run a few containers, and call it done. The plan was elegant in its simplicity — one box, one hypervisor, a clean stack of services including Jellyfin for media, FileBrowser for file management, and a handful of other containers that would make my digital life easier.

That plan lasted until the motherboard died halfway through the build.

The Original Vision

The Acer was my Phase 1 hardware: repurposed old equipment to get my lab back up and running the way it should be after the move. I’d documented the whole architecture: Proxmox VE 9 as the hypervisor, Docker managed through Portainer, and a services stack that included Jellyfin with hardware transcoding via an RX 580, RomM for game management, Nginx Proxy Manager for routing, Homepage for a clean dashboard, and Watchtower to keep everything updated.
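For flavor, the core of that stack fits in a short Docker Compose file. This is a minimal sketch, not my exact config — the volume paths are placeholders, and the `/dev/dri` passthrough assumes the RX 580 is visible to the Docker host for VAAPI transcoding:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    devices:
      - /dev/dri:/dev/dri        # expose the GPU for hardware transcoding
    volumes:
      - ./jellyfin-config:/config
      - /mnt/media:/media:ro     # placeholder media path
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Watchtower update containers
    restart: unless-stopped
```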

I even had Phase 2 services mapped out — Navidrome for music streaming, Audiobookshelf, Immich for photos, and Ollama with Open WebUI for local AI experiments. The whole thing was documented in a self-contained HTML guide (courtesy of my ride-along, Claude) because if you’re going to build something, you might as well document it properly.

The plan was solid. The hardware had other ideas.

Back to Square One

When the Acer motherboard finally gave up — right in the middle of getting services configured — I faced a choice: find another single box to replace it, or rethink the whole approach. Single boxes are clean and simple, but they’re also single points of failure — as I’d just learned the hard way.

That’s when I remembered the pile of old desktops collecting dust. None of them individually powerful enough to be interesting, but collectively? Maybe there was something there.

The Multi-Node Pivot

Instead of replacing the Acer with another single machine, I decided to embrace distributed computing at home scale. Three old desktops, none of them particularly impressive on their own, but together forming a small Proxmox cluster.

The architecture is straightforward: each node runs Proxmox VE, and they form a cluster that can migrate VMs and containers between machines. If one node goes down — whether it’s hardware failure, maintenance, or just because I wanted to tinker with something — the services keep running on the other nodes.
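In practice the clustering is only a few commands. This is a sketch against live nodes, so it isn’t runnable standalone — the cluster name, IP, VM ID, and node name are all placeholders:

```
# On the first node: create the cluster
pvecm create homelab

# On each additional node: join using the first node's IP
pvecm add 192.168.1.10

# Verify quorum and membership
pvecm status

# Live-migrate a running VM (ID 100) to another node
qm migrate 100 node2 --online
```

With shared or replicated storage in place, that last command is what makes "the services keep running" more than a slogan.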

The container stack that started as “a few useful services” has, predictably, gotten entirely out of hand. What began as Jellyfin and a file manager has evolved into a small ecosystem of interconnected services, each solving some small problem I didn’t know I had until I had the infrastructure to solve it.

What I’ve Learned

Hardware redundancy beats hardware quality. Three mediocre machines are more reliable than one good machine, and they’re definitely more reliable than one good machine with a failed motherboard sitting in a closet.

Documentation (as always) pays dividends. Having that HTML guide meant I could rebuild the service stack without having to remember (or re-research) Docker networking, Proxmox storage configuration, or which containers needed which environment variables. Future-me appreciated past-me’s neurotic note-taking.

Container sprawl is real. What starts as “I’ll just add one more service” quickly becomes a dependency web of databases, reverse proxies, monitoring tools, and utility containers that somehow all seem essential once you have them running.

Proxmox clustering is surprisingly straightforward. I expected distributed computing to be complex and fragile. In practice, the Proxmox cluster management handles most of the complexity, and the result feels much more robust than the single-box approach.

Current State and What’s Next

Right now, the cluster is running the core services that make daily life easier — media streaming, file management, network administration tools, and a growing collection of containers that probably only I find useful.

The Phase 2 services (Navidrome, Audiobookshelf, Immich, local AI) are still on the roadmap, but they’re no longer blocked by hardware acquisition. Adding services to an existing cluster is a different problem than building everything from scratch on new hardware.

Tailscale integration is the next major project — being able to access all of this remotely without opening ports or dealing with VPN server configuration is appealing enough to be worth the setup effort.
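The appeal is how little setup that actually takes. A sketch of the plan (the subnet is a placeholder, and advertising a route still has to be approved in the Tailscale admin console):

```
# Install Tailscale on each node (or a single gateway VM)
curl -fsSL https://tailscale.com/install.sh | sh

# Join the tailnet and advertise the lab's LAN subnet,
# so every service is reachable without per-host installs
tailscale up --advertise-routes=192.168.1.0/24
```

No opened ports, no WireGuard key management by hand — which is the whole draw.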

Why Failure Made It Better

The best homelab architecture isn’t the most elegant one you can design — it’s the one that keeps working when individual pieces fail. Single points of failure are fine in theory, but reality has a way of testing your assumptions at exactly the wrong time.

That Acer motherboard dying was frustrating in the moment, but it forced me to build something more resilient than what I’d originally planned. Sometimes the setback teaches you more than getting it right the first time would have.

Plus, now I have three times as many things to tinker with. That might be a feature or a bug, depending on how you look at it.


Got questions about Proxmox clustering or want to share your own homelab disaster stories? Comments are open below, or find me on LinkedIn.
