Home Datacenter Tour (2025)
Introduction
I can trace exactly what sparked my lifelong interest in self-hosting and Linux. Back in middle school, my friends and I wanted to play Minecraft together. While there were paid options for letting someone else handle it, I asked my dad and he provided me with what became my first home server: an HP Compaq DC7600 with a Pentium 4 processor, 4 GB of RAM, and a 160 GB hard drive. He installed Ubuntu Server on it and gave me some general tips to get started. That computer enabled me to teach myself so much about Linux.
To this day, I still self-host nearly everything and learn best through hands-on experimentation. Unfortunately, I don’t have that machine anymore, nor most of the other home servers I used through the rest of my childhood and early career. However, my setup and knowledge have evolved significantly since then. I’m particularly drawn to reading about how other people establish their home infrastructure, whether it’s a simple home server, a tinkering lab, or a full datacenter. There is no single correct approach to self-hosting, so every setup tells a unique story of its designer’s priorities, constraints, preferences, and evolution.
This post is a snapshot in time of my hardware and software choices, documenting what has worked, what hasn’t, and the reasoning behind my decisions. While I’m publishing this a little late, it captures what my setup looked like by the end of 2025. In fact, I’m already preparing to migrate my main server to Arch Linux as I write this. If I waited for things to settle before publishing, this post would never see the light of day since home infrastructure is always evolving. I’m writing this both as a reference for my future self and for others who might be considering their first steps into self-hosting.
Before diving in, it’s worth clarifying what I mean by home datacenter. The terms “home server”, “home lab”, and “home datacenter” are often used interchangeably in the self-hosting community, but I make a distinction. These days, I consider my network to be a home datacenter. While the scale is modest compared to many home datacenters you can find on Reddit, I view it as a step above a lab because my network runs production workloads for me and others. I believe this distinction matters because I have less room to tinker and experiment. If I break something, it affects services which people depend on. This production focus influences how I design everything: hardware choices, software configurations, and deployment practices. While I can’t match enterprise-level high availability, this mindset shapes my entire approach.
Network Architecture
Let’s start with a network diagram of my whole house.

My connection to the internet is provided by Verizon Business. It uses the exact same underlying infrastructure as their residential service, but comes with static IP addresses and a better SLA. This is admittedly overkill for most home users, but I felt it was justified given the production workloads I host. The connection arrives at the core of my network: the main network rack. This rack houses my edge router, core switch, and main server. The edge router handles routing between my local network and the internet, while my core switch connects all the devices around the network. The main server hosts all of my personal services, including this website you’re viewing right now. The contents of this rack are relatively fixed and haven’t changed much since it was first set up about 2 years ago.
From the main network rack, I have a dedicated line running to my basement where I keep Hypervisor 01, a secondary server for testing and client workloads. Also in the basement is an old Dell PowerEdge T320 I keep as a spare for testing and migrations. It’s slow and power-hungry, so it is aptly named Shitbox. It usually sits offline and unused, but comes in handy at random times.
To connect everything, especially when going between rooms around the house, I primarily use fiber optic cabling. I chose fiber mostly as a learning opportunity since I had never worked with it before, but it also turned out to be much easier to run over the long distances throughout my house. Fiber handles 10 Gbit/s easily, whereas copper would require Cat6a cabling with more careful installation to achieve the same speeds. These connections serve user workstations, Wi-Fi access points, home-theater PCs, and everything else requiring network access.

Main Network Rack Contents
Here’s a photo of the rack as it looks today. Every piece of equipment shown in the photo below was involved in serving the requests your browser made to view this blog post.

Rack
When designing this iteration of my home datacenter, I decided to use a proper network rack and rack-mounted equipment for the first time. I ended up going with a NavePoint 15U 600mm Depth Networking Cabinet. I chose this rack for two main reasons. First, I wanted an enclosed rack because I think they look better than open-air racks. Second, with the enclosed space it provides, I wanted to do some extra air filtering to keep dust under control.
Likes
- Price: Enclosed racks can get very expensive very fast, but this one ended up being a fair compromise between the space it provides and the amount it costs.
Dislikes
- Loud Included Exhaust Fans: This rack includes two exhaust fans for the roof, but they are incredibly loud. I ended up separately buying and installing an AC Infinity Rack Roof Fan Kit. Those fans are much quieter and come with a speed controller to let me fine tune the noise and cooling levels. I think this is a required upgrade if you’re going to be keeping this rack in an area sensitive to noise, like a bedroom or meeting room.
- Short Depth: This is less of a dislike and more of a warning: This particular rack only supports up to 600mm deep equipment. A rack like this is intended for short-depth networking and A/V equipment. Full-size servers will require a much deeper rack.
Power Distribution
For electrical power distribution in the rack, I went with a Tripp Lite ISOBAR12ULTRA 12-Outlet Surge Protector. It is essentially a 1U rack-mounted power strip with surge protection. Realistically, any 1U PDU would work in this context. I wasn’t looking for any fancy capabilities like current measurement or switched outlets, so I kept things simple. That said, this PDU is more expensive than basic options because it contains premium power filtering components.
Likes
- Build Quality: The Isobar brand has a long-standing reputation for high-quality surge protection equipment. That said, I am not an electrical engineer, so I can’t authoritatively say whether it is better than the cheaper alternatives.
- Well Distributed Outlets: I like that most outlets are on the rear of the unit to keep cabling out of the front, but there are still two outlets in the front for convenience. I use them for a laptop if I need to work in front of the rack.
Router
For my main edge router connecting my local network to the internet, I went with a MikroTik CCR2004-1G-12S+2XS.

I’ll be honest: this router is incredibly overkill for my network. I currently get a 1 Gbit/s connection from one ISP. Meanwhile, this router has two 25 Gbit/s ports and twelve 10 Gbit/s ports! This is the type of router intended to sit in a noisy datacenter somewhere doing BGP peering and exchanging traffic with several ISPs.
To a degree, the overkill nature of this router is exactly why I chose it. Every router I owned previously had bottlenecks that would prevent me from fully utilizing my ISP connection under heavy load. For this iteration of my network, I decided to overspec all the hardware to ensure it would never be the limiting factor. This router definitely meets that requirement. As an added bonus, it is fully capable of handling future speed upgrades, such as Verizon’s ongoing rollout of 2 Gbit/s connections.
There is a second, admittedly more superficial benefit: It has nearly identical aesthetics to the switch mounted below it. As a result, my router and switch look like a matched set.
Likes
- Compute Power: This router is incredibly capable. It barely registers any load routing the 1 Gbit/s I can throw at it, so I have tons of growing room for the future.
- RouterOS: I really like MikroTik’s RouterOS. It exposes every tuning dial I could want, so I have this device configured exactly to my preferences.
- No Licensing: There is technically a one-time license cost bundled in the purchase price, but beyond that there is no licensing I need to worry about. It just works and all features are unlocked, firmware updates included.
Dislikes
- Fans Always Running: This router has a big heatsink on the back which could allow for fully passive cooling. However, despite tuning the fan settings in RouterOS, the fans always run at a minimum speed. This isn’t a huge problem, but it would be nice if they could turn off entirely like the fans on the switch.
Switch
For my core switch, I went with a MikroTik CRS317-1G-16S+RM.

This switch is well suited for my needs, and it was easy to justify. As I have gotten into content creation through live streaming on Owncast and publishing on PeerTube, I am now regularly transferring very large video files around my network for recording, transcoding, and archival storage. While this can all be done over regular 1 Gbit/s connections, 10 Gbit/s connections make a real difference.
Likes
- Port Count & Value: With 16 ports, this switch has more than enough to cover everything on my network. I still have 8 ports available today, and the price for it all was very reasonable.
- SFP+ Module Support: This switch does not appear to be picky about the types of SFP+ modules plugged into it. I haven’t tested with that many modules, but this is consistent with MikroTik’s general reputation for not being picky about third-party modules.
Dislikes
- Buggy IGMP/MLD Snooping with IPv6: This dislike is very specific and likely more of a RouterOS issue than a hardware one. With IGMP snooping enabled (which on RouterOS also enables MLD snooping for IPv6), my devices would fail to autoconfigure IPv6 addresses, presumably because the multicast traffic SLAAC depends on was being filtered. From what I could find, this is a known bug. The solution for me was simple: turn off IGMP snooping. I don’t have any major sources of multicast traffic on my network, so it isn’t a huge loss for me.
- Slow Boot Up Time: This switch takes a good 3 or 4 minutes to boot up, sometimes longer during a software update. This isn’t surprising for network gear of this type, but the router it’s paired with consistently boots in under a minute.
- Extremely Bright Blue LEDs: This switch contains the brightest blue LEDs I have ever seen, and all they indicate is that the PSUs are functioning correctly. In a normal deployment where this switch would be installed in a datacenter, this is no big deal. However, in a home environment, it’s a problem for light-sensitive areas like bedrooms.
The final piece of equipment in the main rack is my main server, which I’ll cover in its own section given its complexity.
Main Server
The main server hosts nearly all of my production workloads, including this website.
Hardware
I had several constraints when choosing hardware for this server. It needed to fit in the rack, which eliminated most rack-mount servers given the cabinet’s 600mm depth. I wanted a prebuilt system since my previous home-built server had funky edge cases I wasn’t a fan of. I wanted all-SSD storage given the performance limitations of hard drives, and I needed 10 Gbit/s networking.
The Minisforum MS-01 caught my attention during my research. However, I ultimately wanted something from a more established brand. I also decided I required ECC memory, which the MS-01 doesn’t support.
ECC can be a controversial topic in homelab circles, with many dismissing it as only needed for enterprise workloads. While that’s probably true in most cases, perspective matters. This server hosts workloads which are mission-critical to me personally. I need it to work all the time. Dell ended up being my top pick because I have tons of experience working with them from my day job and am generally satisfied with their reliability and support. Dell uses Intel almost exclusively, and Intel historically did not include ECC support in their non-Xeon processors. However, that changed with 12th-generation Core processors when paired with the right chipset (Intel’s W680). Dell Precision (now Dell Pro Max) computers with certain Intel chips have full support for ECC.
With all these requirements considered, the Dell Precision 3460 Small Form Factor Workstation ended up being a perfect match.

These are the main components which form this server:
- Processor: Intel Core i7-14700 (8 P-cores, 12 E-cores)
- Memory: 96 GB (2x 48 GB) ECC DDR5-5600 SO-DIMM
- Storage: 12 TB (3x 4 TB) WD_BLACK SN850X SSD (with Heatsinks)
- Network Interface: Mellanox ConnectX-4 Lx Ethernet Adapter
Dislikes
- No IPMI: Since this computer is not built as a server, it is no surprise there are no out-of-band mechanisms to remotely manage it. If it is powered off, I need to press the power button to turn it back on. Thankfully this has never been a problem, but it is definitely something I miss from my previous main server.
- Occasional PCIe Errors: Interestingly, every once in a while, the Linux kernel will log a correctable PCIe error coming from one of the SSDs. Thankfully, it has always been correctable and never caused a problem, but it likely means either one of the SSDs isn’t seated perfectly or the motherboard has a rare edge case where it can degrade a signal at PCIe 4.0 speeds. This is an acceptable condition for me because these errors are few and far between, but it really does emphasize the necessity of using a RAID. A quick way to check for these errors is sketched after this list.
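If you want to keep an eye on this sort of thing yourself, the kernel’s AER (Advanced Error Reporting) messages land in the kernel log, and smartmontools can confirm the drives themselves are healthy. A minimal check looks something like this; the device names are examples and will differ per system:

```bash
# Look for corrected PCIe errors reported by AER since the last boot.
journalctl -kb | grep -iE 'aer|corrected error' | tail -n 20

# Cross-check the SSDs themselves (requires root and smartmontools).
# Media errors here would be far more concerning than corrected
# link-level errors.
for dev in /dev/nvme0 /dev/nvme1 /dev/nvme2; do
    echo "== ${dev} =="
    smartctl -a "${dev}" | grep -iE 'media and data integrity|error information'
done
```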
Software
My main server runs the Proxmox Virtual Environment. This is my first server to use a hypervisor as its base. All my previous servers used regular Linux installations with packages installed directly or services running in containers.
I run a number of services, including Caddy, Gitea, Owncast, and Synapse. Each service runs in its own dedicated VM with Debian, a regular ext4 filesystem, and Docker. This one-service-per-VM approach provides strong isolation between services. If one service has an issue or needs to be updated, it doesn’t affect the others. For slightly more complex services that require an external database, like Synapse, the service and its database share a VM.
For storage, the Precision 3460 only has three M.2 slots with almost no options for expandability, so I needed to make every slot count. Proxmox has built-in support for ZFS and can boot directly from member drives in the array. I configured the three 4 TB SSDs in a RAIDZ1 array, giving me 8 TB of usable space with tolerance for one drive failure. No drive is wasted as a dedicated boot device.
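The Proxmox installer handled the pool creation for me (it also takes care of boot partitioning), but the conceptual shape of the pool is simple. A rough manual equivalent, with illustrative device paths, would be:

```bash
# One RAIDZ1 vdev across the three 4 TB NVMe drives: one drive's worth
# of capacity goes to parity, leaving roughly 8 TB usable.
zpool create -o ashift=12 rpool raidz1 \
    /dev/disk/by-id/nvme-WD_BLACK_SN850X_AAAA \
    /dev/disk/by-id/nvme-WD_BLACK_SN850X_BBBB \
    /dev/disk/by-id/nvme-WD_BLACK_SN850X_CCCC

# Verify the layout and health of the pool.
zpool status rpool
```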
Networking
Proxmox handles networking out of the box using Linux bridges, essentially virtual Ethernet switches implemented by the kernel. This means every packet a VM sends or receives has to pass through the host: the VirtIO backend hands it to the bridge in the host kernel, which then forwards it to the physical NIC. I wanted to eliminate this overhead.
One way to improve networking performance is PCIe passthrough, where a device like a NIC is handed directly to a VM. This is much more performant because the guest kernel can interact with the device without involving the host kernel, but it has a common drawback: each device can only be passed through to one VM at a time. My Mellanox ConnectX-4 Lx has two ports, exposing itself as two PCIe devices. Out of the box, I could only pass through each port to one VM, two VMs total. The host wouldn’t be able to use it and no other VMs would have network access.
Thankfully, enterprise adapters like the ConnectX-4 Lx have a feature to address exactly this problem: SR-IOV. It lets the hardware expose additional lightweight PCIe devices, called virtual functions (VFs), that share the underlying physical adapter. I added a startup script to Proxmox which configures each port to expose 16 virtual functions. Every VM gets one of these VFs passed through, giving it a direct connection to the hardware without any virtual switching.
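The script itself is tiny. Stripped down to its core, it just writes the desired VF count to each port’s sysfs node before any VMs start; the interface names below are examples, and this assumes SR-IOV is already enabled in the adapter’s firmware:

```bash
#!/usr/bin/env bash
# Create 16 virtual functions on each physical port of the NIC.
# Interface names are illustrative; check `ip link` for yours.
for port in enp2s0f0np0 enp2s0f1np1; do
    echo 16 > "/sys/class/net/${port}/device/sriov_numvfs"
done

# The VFs appear as extra PCIe devices that can be passed through
# to VMs individually.
lspci | grep -i 'virtual function'
```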

With direct hardware access, VMs can take advantage of features like hardware checksumming and TCP/UDP GRO that would otherwise be abstracted away by the Linux bridge and VirtIO driver. This makes it easy for a VM to fully utilize the 10 Gbit/s link. VM-to-VM communication is even more interesting: the NIC has an embedded switch, so traffic between VMs bypasses the upstream switch and host OS entirely. I was able to achieve around 20 Gbit/s in testing with no added load on the host kernel.
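For anyone curious how to reproduce that kind of number, a pair of iperf3 runs between two VMs is the simplest test; the address below is a placeholder:

```bash
# On VM A: start an iperf3 server.
iperf3 -s

# On VM B: run a 30-second test with four parallel streams.
iperf3 -c 10.0.10.11 -P 4 -t 30
```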
Hypervisor 01
In addition to my main server, I have a secondary server in my basement for testing and client workloads like Fireside Fedi.
Hardware
Hypervisor 01 is a home-built server using the following components:
- Processor: AMD Ryzen 5 3600 (6 cores, 12 threads)
- Motherboard: ASRock Rack X570D4U
- Memory: 32 GB (2x 16 GB) ECC DDR4-3200 DIMM
- Storage: 2x 512 GB NVMe SSD, 6x 4 TB HDD
- Graphics: NVIDIA T400
- Network Interface: Mellanox ConnectX-4 Lx Ethernet Adapter
- Case: Fractal Design Node 804
This server was actually my previous main server, and it served that role very well for years hosting the same workloads my current main server handles today. The HDDs are a remnant of that era, specced before I decided to go all-SSD with the Dell Precision. When I upgraded, I repurposed this machine for more flexible workloads, including hosting services for clients and running experiments in isolation from everything else.
The NVIDIA T400 handles video transcoding for my Managed Owncast Hosting clients. The Ryzen 5 3600 lacks an integrated GPU, and its aging cores struggle with software transcoding on their own. Adding the virtualization layer on top makes CPU-based transcoding even less viable, so a dedicated GPU was essentially a requirement.
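Owncast drives ffmpeg under the hood, so the benefit of the T400 boils down to moving the encode onto NVENC. As a standalone illustration (not my exact Owncast settings, and assuming an ffmpeg build with NVENC support), a hardware-accelerated transcode looks like this:

```bash
# Decode on the GPU and re-encode with the NVENC hardware encoder,
# leaving the audio untouched. File names are placeholders.
ffmpeg -hwaccel cuda -i input.mp4 \
    -c:v h264_nvenc -preset p5 -b:v 4M \
    -c:a copy \
    output.mp4
```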
Likes
- Case Design: The Fractal Design Node 804 is an excellent case for a compact storage server. It fits a surprising number of drives in a relatively small footprint.
- Proven Reliability: Having served as my main server previously, I know this hardware is stable and dependable.
Dislikes
- Aging Platform: The Zen 2 architecture, while still perfectly functional, is showing its age. Both AMD and Intel have released several generations of processors since then with significant improvements in performance and efficiency.
Software
Hypervisor 01 is set up very similarly to my main server. It runs Proxmox with ZFS for storage: the two NVMe SSDs are configured in a mirror for the boot pool, while the six HDDs are in a RAIDZ2 array used for internal backups from the main server. The Mellanox NIC is configured with SR-IOV, and the T400 is passed through to a VM hosting the Owncast instances for my clients.
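I won’t detail the exact backup tooling here, but the underlying mechanism is ordinary ZFS replication. A bare-bones version of pushing a dataset from the main server into this pool over SSH looks roughly like the following; the dataset and host names are placeholders:

```bash
# Initial full replication of a dataset to Hypervisor 01.
zfs snapshot tank/vmdata@2025-12-31
zfs send tank/vmdata@2025-12-31 | ssh hypervisor01 zfs recv -u backup/vmdata

# Subsequent runs only need to send the delta between snapshots.
zfs snapshot tank/vmdata@2026-01-07
zfs send -i tank/vmdata@2025-12-31 tank/vmdata@2026-01-07 | \
    ssh hypervisor01 zfs recv -u backup/vmdata
```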
The main difference is in how the VMs are configured. Instead of Debian with Docker, the VMs on Hypervisor 01 run Arch Linux with services managed directly by systemd units.
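To make that concrete, here’s the shape of one of these units; the service name, user, and paths are made up purely for illustration:

```bash
# Hypothetical example of running a self-hosted service directly under
# systemd instead of inside a Docker container.
cat > /etc/systemd/system/example-app.service <<'EOF'
[Unit]
Description=Example self-hosted app
After=network-online.target
Wants=network-online.target

[Service]
User=example
ExecStart=/usr/local/bin/example-app --config /etc/example-app/config.toml
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now example-app.service
```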
Future Goals
My main goal for 2026 is to migrate my main server from Proxmox to a regular installation of Arch Linux. While I’ve learned a tremendous amount from using Proxmox, it’s not the right tool for my main server’s use case. I don’t run any workloads that benefit from full virtualization, like running Windows alongside Linux. All my services run Linux and operate at the same trust level, so the security isolation VMs provide doesn’t add much value. Without that benefit, I’m just paying the cost of context switching between virtual machines. Instead, I plan to use systemd units, which will be far more efficient for my workloads. Hypervisor 01 has served as a proving ground for this approach, giving me the opportunity to work out the details before committing to it on my main server. That said, Hypervisor 01 will continue running Proxmox, where the management features and stronger isolation are a better fit for client workloads.
Looking further out, I have a few longer-term goals. For the physical infrastructure, I’d like to add solar power to offset the grid consumption of my datacenter and a generator to keep critical services running during extended power outages. On the software side, I want to build out proper network monitoring and observability. I’ve toyed with Grafana in the past and would like to properly roll out metrics collection and alerting.
Final Thoughts
Building and maintaining a home datacenter has been one of the most rewarding ongoing projects in my life. What started with a hand-me-down Pentium 4 machine running a Minecraft server has evolved into a production environment hosting services for myself and others. Along the way, I’ve learned far more than I ever expected about networking, storage, virtualization, and the countless small details that make systems reliable.
The best infrastructure is the one that meets your actual needs. It’s easy to get caught up in what others are running or what seems “correct” based on enterprise best practices. But a home datacenter is personal. My priorities (ECC memory, SR-IOV networking, solid-state storage) reflect my specific workloads and tolerance for risk. Yours will likely look different, and that’s exactly how it should be.
For anyone considering their first steps into self-hosting: you don’t need enterprise hardware or a full rack to get started. A single machine running a few services will teach you more than any amount of research.