Home Datacenter Tour (2025)

Introduction

I can trace exactly what sparked my lifelong interest in self-hosting and Linux. Back in middle school, my friends and I wanted to play Minecraft together. While there were paid options for letting someone else handle it, I asked my dad and he provided me with what became my first home server: an HP Compaq DC7600 with a Pentium 4 processor, 4 GB of RAM, and a 160 GB hard drive. He installed Ubuntu Server on it and gave me some general tips to get started. That computer enabled me to teach myself so much about Linux.

To this day, I still self-host nearly everything and learn best through hands-on experimentation. Unfortunately, I don’t have that machine anymore, nor most of the other home servers I used through the rest of my childhood and early career. However, my setup and knowledge have evolved significantly since then. I’m particularly drawn to reading about how other people build out their home infrastructure, whether it’s a simple home server, a tinkering lab, or a full datacenter. There is no single correct approach to self-hosting, so every setup tells a unique story of its designer’s priorities, constraints, preferences, and evolution.

This post is a snapshot in time of my hardware and software choices, documenting what has worked, what hasn’t, and the reasoning behind my decisions. While I’m publishing this a little late, it captures what my setup looked like by the end of 2025. In fact, I’m already preparing to migrate my main server to Arch Linux as I write this. If I waited for things to settle before publishing, this post would never see the light of day since home infrastructure is always evolving. I’m writing this both as a reference for my future self and for others who might be considering their first steps into self-hosting.

Before diving in, it’s worth clarifying what I mean by home datacenter. The terms “home server”, “home lab”, and “home datacenter” are often used interchangeably in the self-hosting community, but I make a distinction. These days, I consider my network to be a home datacenter. While the scale is modest compared to many home datacenters you can find on Reddit, I view it as a step above a lab because my network runs production workloads for me and others. I believe this distinction matters because I have less room to tinker and experiment. If I break something, it affects services which people depend on. This production focus influences how I design everything: hardware choices, software configurations, and deployment practices. While I can’t match enterprise-level high availability, this mindset shapes my entire approach.

Network Architecture

Let’s start with a network diagram of my whole house.

Home network topology diagram with Verizon Business internet at the top connecting to three zones. The Main Network Rack contains a MikroTik CCR2004-1G-12S+2XS edge router, MikroTik CRS317-1G-16S+RM core switch, and Dell Precision 3460 main server. Various Locations Around House shows three Ruckus Wireless R510 access points serving wireless clients (phones, laptops, IoT) and wired clients (workstations, HTPCs). The Basement zone has a Dell PowerEdge T320 labeled Shitbox, a MikroTik CRS305-1G-4S+IN field switch, and a custom-built Hypervisor 01. Connection speeds are color-coded: blue for 10Gbit/s links between high-performance devices, green for 1Gbit/s links.
Network diagram of the whole house.

My connection to the internet is provided by Verizon Business. It uses the exact same underlying infrastructure as their residential service, but comes with static IP addresses and a better SLA. This is admittedly overkill for most home users, but I felt it was justified given the production workloads I host. The connection arrives at the core of my network: the main network rack. This rack houses my edge router, core switch, and main server. The edge router handles routing between my local network and the internet, while my core switch connects all the devices around the network. The main server hosts all of my personal services, including this website you’re viewing right now. The contents of this rack are relatively fixed and haven’t changed much since it was first set up about 2 years ago.

From the main network rack, I have a dedicated line running to my basement where I keep Hypervisor 01, a secondary server for testing and client workloads. Also in the basement is an old Dell PowerEdge T320 I keep as a spare for testing and migrations. It’s slow and power-hungry, so it is aptly named Shitbox. It usually sits offline and unused, but comes in handy at random times.

To connect everything, especially between rooms, I primarily use fiber optic cabling. I chose fiber mostly as a learning opportunity since I had never worked with it before, but it also turned out to be much easier to run over the long distances throughout my house. Fiber handles 10 Gbit/s easily, whereas copper would require Cat6a cabling with more careful installation to achieve the same speeds. These connections serve user workstations, Wi-Fi access points, home-theater PCs, and everything else requiring network access.

Aqua fiber optic cables running along white crown molding at a ceiling corner, secured with small white cable clips against orange-tan walls.
Fiber optic cabling throughout the house.

Main Network Rack Contents

Here’s a photo of the rack as it looks today. Every piece of equipment shown in the photo below was involved in serving the requests your browser made to view this blog post.

Inside of a black network rack containing from top to bottom: Tripp Lite Isobar surge protector, MikroTik CCR2004 router with green activity lights, MikroTik CRS317 switch with green and blue status LEDs, and a Dell Precision 3460 small form factor workstation on a shelf with a temperature and humidity monitor beside it.
Photo of the rack as it looks today.

Rack

When designing this iteration of my home datacenter, I decided to use a proper network rack and rack-mounted equipment for the first time. I ended up going with a NavePoint 15U 600mm Depth Networking Cabinet. I chose this rack for two main reasons. First, I wanted an enclosed rack because I think they look better than open-air racks. Second, the enclosed space lets me add extra air filtering to keep dust under control.

Power Distribution

For electrical power distribution in the rack, I went with a Tripp Lite ISOBAR12ULTRA 12-Outlet Surge Protector. It is essentially a 1U rack-mounted power strip with surge protection. Realistically, any 1U PDU would work in this context. I wasn’t looking for any fancy capabilities like current measurement or switched outlets, so I kept things simple. That said, this PDU is more expensive than basic options because it contains premium power filtering components.

Router

For my main edge router connecting my local network to the internet, I went with a MikroTik CCR2004-1G-12S+2XS.

Front panel of the MikroTik CCR2004-1G-12S+2XS router showing two 25G SFP28 ports on the left, twelve 10G SFP+ ports in the center, and status LEDs with Cloud Core Router branding on the right.
MikroTik CCR2004 router.

I’ll be honest: this router is incredibly overkill for my network. I currently get a 1 Gbit/s connection from one ISP. Meanwhile, this router has two 25 Gbit/s ports and twelve 10 Gbit/s ports! This is the type of router intended to sit in a noisy datacenter somewhere doing BGP peering and exchanging traffic with several ISPs.

To a degree, the overkill nature of this router is exactly why I chose it. Every router I owned previously had bottlenecks that prevented me from fully utilizing my ISP connection under heavy load. For this iteration of my network, I decided to overspec all the hardware to ensure it would never be the limiting factor. This router definitely meets that requirement. As an added bonus, it is ready for future speed upgrades, such as Verizon’s ongoing rollout of 2 Gbit/s connections.

There is a second, admittedly more superficial benefit: it has nearly the same aesthetics as the switch mounted below it, so my router and switch look like a matched set.

Switch

For my core switch, I went with a MikroTik CRS317-1G-16S+RM.

Front panel of the MikroTik CRS317-1G-16S+RM switch showing sixteen 10G SFP+ ports, one RJ45 management port, status LEDs, and Cloud Router Switch branding.
MikroTik CRS317 switch.

This switch is well suited for my needs, and it was easy to justify. As I have gotten into content creation through live streaming on Owncast and publishing on PeerTube, I am now regularly transferring very large video files around my network for recording, transcoding, and archival storage. While this can all be done over regular 1 Gbit/s connections, 10 Gbit/s connections make a real difference.

The final piece of equipment in the main rack is my main server, which I’ll cover in its own section given its complexity.

Main Server

The main server hosts nearly all of my production workloads, including this website.

Hardware

I had several constraints when choosing hardware for this server. It needed to fit in the rack, which eliminated most rack-mount servers given the cabinet’s 600mm depth. I wanted a prebuilt system since my previous home-built server had funky edge cases I wasn’t a fan of. I wanted all-SSD storage given the performance limitations of hard drives, and I needed 10 Gbit/s networking.

The Minisforum MS-01 caught my attention during my research. However, I ultimately wanted something from a more established brand. I also decided I required ECC memory, which the MS-01 doesn’t support.

ECC can be a controversial topic in homelab circles, with many dismissing it as only needed for enterprise workloads. While that’s probably true in most cases, perspective matters. This server hosts workloads that are mission-critical to me personally, and I need it to work all the time. Dell ended up being my top pick because I have tons of experience working with them from my day job and am generally satisfied with their reliability and support. Dell uses Intel almost exclusively, and Intel historically did not offer ECC support in its non-Xeon processors. However, that changed with 12th-generation Core processors when paired with the right chipset. Dell Precision (now Dell Pro Max) computers with certain Intel chips have full support for ECC.
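
As a quick aside, Linux makes it easy to confirm that ECC is actually active and doing its job: the kernel’s EDAC subsystem exposes per-memory-controller error counters in sysfs. A minimal sketch of reading them (assuming an EDAC driver is loaded for the memory controller):

```python
# Minimal sketch: read corrected/uncorrected memory error counters from the
# kernel's EDAC sysfs interface. If /sys/devices/system/edac/mc has no
# entries, ECC reporting isn't active on this machine.
from pathlib import Path

for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc[0-9]*")):
    ce = int((mc / "ce_count").read_text())  # errors ECC detected and corrected
    ue = int((mc / "ue_count").read_text())  # errors ECC could not correct
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```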

With all these requirements considered, the Dell Precision 3460 Small Form Factor Workstation ended up being a perfect match.

Dell Precision 3460 Small Form Factor workstation, a compact black tower with mesh ventilation, Dell logo, and front panel I/O ports including USB and audio.
Dell Precision 3460 Small Form Factor Workstation.

These are the main components which form this server:

Software

My main server runs the Proxmox Virtual Environment. This is my first server to use a hypervisor as its base. All my previous servers used regular Linux installations with packages installed directly or services running in containers.

I run a number of services, including Caddy, Gitea, Owncast, and Synapse. Each service runs in its own dedicated VM with Debian, a regular ext4 filesystem, and Docker. This one-service-per-VM approach provides strong isolation between services. If one service has an issue or needs to be updated, it doesn’t affect the others. Slightly more complex services that require an external database, like Synapse, keep their database in the same VM as the service.

For storage, the Precision 3460 only has three M.2 slots with almost no options for expandability, so I needed to make every slot count. Proxmox has built-in support for ZFS and can boot directly from member drives in the array. I configured the three 4 TB SSDs in a RAIDZ1 array, giving me 8 TB of usable space with tolerance for one drive failure. No drive is wasted as a dedicated boot device.
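
The capacity math works out simply: RAIDZ gives up roughly one drive’s worth of space per level of parity, and the rest is usable. A quick sketch of the estimate (ignoring ZFS metadata and padding overhead):

```python
def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 1) -> float:
    """Rough usable capacity of a RAIDZ vdev: (drives - parity) * drive size.

    Ignores ZFS metadata, allocation padding, and reserved space, so the
    real-world number comes out a bit lower.
    """
    return (drives - parity) * drive_tb

print(raidz_usable_tb(3, 4.0))            # RAIDZ1: three 4 TB SSDs -> 8.0 TB usable
print(raidz_usable_tb(6, 8.0, parity=2))  # RAIDZ2 example with hypothetical 8 TB HDDs
```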

Networking

Proxmox handles networking out of the box using Linux bridges, essentially virtual Ethernet switches implemented by the kernel. This means every packet a VM sends or receives has to be processed on the host (by the VMM and the kernel bridge) before it reaches the physical NIC. I wanted to eliminate this overhead.

One way to improve networking performance is PCIe passthrough, where a device like a NIC is handed directly to a VM. This is much more performant because the guest kernel can interact with the device without involving the host kernel, but it has a common drawback: each device can only be passed through to one VM at a time. My Mellanox ConnectX-4 Lx has two ports, which appear as two separate PCIe devices. Out of the box, I could pass each port through to only one VM, two VMs in total. The host wouldn’t be able to use the NIC, and no other VMs would have network access.

Thankfully, enterprise adapters like the ConnectX-4 Lx have a feature to address exactly this problem: SR-IOV. This feature lets the hardware create additional virtual PCIe devices that share the underlying physical adapter. I added a startup script to Proxmox which configures each port to expose 16 SR-IOV devices. Every VM gets one of these virtual devices passed through, giving it a direct connection to the hardware without any virtual switching.

Comparison diagram showing network virtualization with and without SR-IOV. Left side shows three VMs routing through a Layer 2 vswitch in the VMM before reaching the NIC. Right side shows three VMs each with a VF Driver connecting directly to dedicated virtual functions on the NIC's embedded bridge, bypassing the VMM.
Diagram of network flows with and without SR-IOV. (Source: Fariss Omar)
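
My actual startup script isn’t reproduced here, but the core of such a script is just telling the kernel how many virtual functions to create on each port via sysfs. A minimal sketch, with placeholder interface names and assuming SR-IOV is already enabled in the adapter firmware and the BIOS:

```python
#!/usr/bin/env python3
# Minimal sketch of a VF-provisioning startup script (not the author's actual
# script). Placeholder interface names; assumes SR-IOV is enabled in the NIC
# firmware and the IOMMU is on, and that this runs as root on the host.
from pathlib import Path

PORTS = ["enp1s0f0np0", "enp1s0f1np1"]  # placeholder names for the two NIC ports
VFS_PER_PORT = 16

for port in PORTS:
    device = Path("/sys/class/net") / port / "device"
    supported = int((device / "sriov_totalvfs").read_text())
    if VFS_PER_PORT > supported:
        raise SystemExit(f"{port}: adapter only supports {supported} VFs")

    numvfs = device / "sriov_numvfs"
    # The kernel only accepts a new nonzero VF count when the current count is 0.
    if int(numvfs.read_text()) != 0:
        numvfs.write_text("0")
    numvfs.write_text(str(VFS_PER_PORT))
    print(f"{port}: exposed {VFS_PER_PORT} virtual functions")
```

From there, each resulting virtual function shows up as an ordinary PCIe device that can be passed through to a VM.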

With direct hardware access, VMs can take advantage of features like hardware checksumming and TCP/UDP GRO that would otherwise be abstracted away by the Linux bridge and VirtIO driver. This makes it easy for a VM to fully utilize the 10 Gbit/s link. VM-to-VM communication is even more interesting: the NIC has an embedded switch, so traffic between VMs bypasses the upstream switch and host OS entirely. I was able to achieve around 20 Gbit/s in testing with no added load on the host kernel.
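
That kind of measurement is easy to reproduce with iperf3 between two VMs. A minimal sketch, assuming an iperf3 server is already running in the target VM and using a placeholder address:

```python
# Minimal sketch of a VM-to-VM throughput check with iperf3 (placeholder IP;
# assumes `iperf3 -s` is already listening in the other VM).
import json
import subprocess

TARGET = "10.0.0.42"  # hypothetical address of the iperf3 server VM

result = subprocess.run(
    ["iperf3", "-c", TARGET, "-P", "4", "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbits = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"{gbits:.1f} Gbit/s received over 4 parallel streams")
```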

Hypervisor 01

In addition to my main server, I have a secondary server in my basement for testing and client workloads like Fireside Fedi.

Hardware

Hypervisor 01 is a home-built server using the following components:

This server was actually my previous main server, and it served that role very well for years, hosting the same workloads my current main server handles today. The HDDs are a remnant of that era, specced before I decided to go all-SSD with the Dell Precision. When I upgraded, I repurposed this machine for more flexible workloads, including hosting services for clients and running experiments in isolation from everything else.

The NVIDIA T400 handles video transcoding for my Managed Owncast Hosting clients. The Ryzen 5 3600 lacks an integrated GPU, and its aging cores struggle with software transcoding on their own. Adding the virtualization layer on top makes CPU-based transcoding even less viable, so a dedicated GPU was essentially a requirement.

Software

Hypervisor 01 is set up very similarly to my main server. It runs Proxmox with ZFS for storage: the two NVMe SSDs are configured in a mirror for the boot pool, while the six HDDs are in a RAIDZ2 array used for internal backups from the main server. The Mellanox NIC is configured with SR-IOV, and the T400 is passed through to a VM hosting the Owncast instances for my clients.
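
I won’t go into the backup tooling here, but with ZFS on both machines, snapshot replication with zfs send and receive is one natural approach. A rough sketch with placeholder dataset and host names:

```python
# Minimal sketch of ZFS snapshot replication from the main server to the
# backup pool (one common approach, not necessarily the exact tooling in use).
# Dataset and host names are placeholders; a real setup would use incremental
# sends (-i) after the first full copy.
import subprocess
from datetime import datetime, timezone

SOURCE = "rpool/data"                 # placeholder dataset on the main server
BACKUP_HOST = "hypervisor-01"         # placeholder SSH host for the backup box
DEST = "tank/backups/main-server"     # placeholder dataset on the RAIDZ2 pool

snapshot = f"{SOURCE}@backup-{datetime.now(timezone.utc):%Y%m%d}"
subprocess.run(["zfs", "snapshot", snapshot], check=True)

# Stream the snapshot over SSH into the backup dataset.
send = subprocess.Popen(["zfs", "send", snapshot], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", BACKUP_HOST, "zfs", "receive", "-F", DEST],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise SystemExit("zfs send failed")
```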

The main difference is in how the VMs are configured. Instead of Debian with Docker, the VMs on Hypervisor 01 run Arch Linux with services managed directly by systemd units.

Future Goals

My main goal for 2026 is to migrate my main server from Proxmox to a regular installation of Arch Linux. While I’ve learned a tremendous amount from using Proxmox, it’s not the right tool for my main server’s use case. I don’t run any workloads that benefit from full virtualization, like running Windows alongside Linux. All my services run Linux and operate at the same trust level, so the security isolation VMs provide doesn’t add much value. Without that benefit, I’m just paying the cost of context switching between virtual machines. Instead, I plan to use systemd units, which will be far more efficient for my workloads. Hypervisor 01 has served as a proving ground for this approach, giving me the opportunity to work out the details before committing to it on my main server. That said, Hypervisor 01 will continue running Proxmox, where the management features and stronger isolation are a better fit for client workloads.

Looking further out, I have a few longer-term goals. For the physical infrastructure, I’d like to add solar power to offset the grid consumption of my datacenter and a generator to keep critical services running during extended power outages. On the software side, I want to build out proper network monitoring and observability. I’ve toyed with Grafana in the past and would like to properly roll out metrics collection and alerting.

Final Thoughts

Building and maintaining a home datacenter has been one of the most rewarding ongoing projects in my life. What started with a hand-me-down Pentium 4 machine running a Minecraft server has evolved into a production environment hosting services for myself and others. Along the way, I’ve learned far more than I ever expected about networking, storage, virtualization, and the countless small details that make systems reliable.

The best infrastructure is the one that meets your actual needs. It’s easy to get caught up in what others are running or what seems “correct” based on enterprise best practices. But a home datacenter is personal. My priorities (ECC memory, SR-IOV networking, solid-state storage) reflect my specific workloads and tolerance for risk. Yours will likely look different, and that’s exactly how it should be.

For anyone considering their first steps into self-hosting: you don’t need enterprise hardware or a full rack to get started. A single machine running a few services will teach you more than any amount of research.