Building a home server with gamer-style components

I’ve been running small servers as virtual machines on my main PC for quite some time, starting as far back as 2008. I expanded from a single virtual machine to several around 2014 and began designing a new home network of machines, each dedicated to a function or role. That same year I built my then-new gaming rig with 64 GB of RAM and two 1TB SSDs, so I could play games while running all of those virtual machines in the background. This wasn’t scalable, however, and I quickly started running into issues.

The first issue was that my gaming rig could no longer have downtime. If a game I was playing caused a bluescreen, all of my home servers would go down with it. And if one server started hammering its SSD, it would starve the other servers sharing that same SSD.

I found a way to mitigate the problem by purchasing a couple of Intel NUCs, model NUC7i7BNH. I put a Samsung M.2 970 Pro 512GB SSD, a WD Black 1TB 7200RPM HDD, and 32GB of RAM into each NUC. Using a headless Fedora Linux install, I was able to run VirtualBox on both with ease, allowing me to migrate my virtual machines to the NUCs. The sacrifice in CPU power going from my gaming PC’s i7-5930k to the mobile i7-7567U wasn’t that noticeable, except for one virtual machine in particular: my Plex server.
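Running VirtualBox headless like this comes down to the VBoxManage CLI. A minimal sketch, assuming a VM that already exists (the name here is illustrative):

```bash
# Start an existing VM without a GUI, suitable for a box with no display attached.
VBoxManage startvm "plex-vm" --type headless

# See which VMs are currently running.
VBoxManage list runningvms

# Cleanly shut a VM down via ACPI when needed.
VBoxManage controlvm "plex-vm" acpipowerbutton
```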

The Plex server that I was running as a virtual machine on my gaming PC, with direct access to physical disks for storage, was working great. But when I moved it to the NUC, it completely ate the NUC’s performance and didn’t play content all that well either. I needed to find my Plex server a new home off of my gaming PC, and it couldn’t be the NUC.

For most people, a NUC will probably be fine to host a Plex server on its own, if that’s all the NUC is doing. Unfortunately, I’m not so lucky. I have quite a large library (9 TB and growing), and that requires some hefty disks that I can’t attach to the NUC easily. Sure, I could use USB 3.0 to externally attach the 2x 5TB disks I have now, and I did try that, but I had issues with the disks unmounting at random, which, as you can imagine, causes problems for Plex or any other application trying to use those disks.

And so began the research into what requirements the new machine would need to meet to run Plex smoothly while allowing the library to keep growing past the 9 TB it’s already at.

Looking into what Plex requires for hardware video transcoding was simple: Plex published an article about it. An Intel CPU with Intel Quick Sync or an NVIDIA GPU is required for Plex to have hardware transcoding capability; however, the article also points out that it requires a Plex Pass subscription. So I purchased a lifetime Plex Pass subscription, since, let’s face it, I’ll have Plex around for some time to come, and it lets me do poweruser-esque things; it’s a no-brainer. I then set forth looking at current-generation Intel CPUs.
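As a side note, once the hardware is in hand it’s easy to sanity-check that Quick Sync is actually exposed to the OS. A minimal sketch, assuming the libva-utils package is installed:

```bash
# The Intel iGPU shows up as a DRI render node once the i915 driver is loaded.
ls -l /dev/dri/

# vainfo (from libva-utils) lists the VA-API encode/decode profiles the hardware supports.
vainfo
```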

I stumbled upon the i7-8700k, which, with Intel Quick Sync support, looked ideal. The next part was finding out what kind of RAM I needed. Researching RAM was a little less trivial, but I found a benchmark analysis showing that the i7-8700k has a sweet spot with DDR4-3200 RAM, so I looked for that with the lowest CAS latency and made my way.

I then began asking myself: do I really need an NVIDIA GPU as well? That led me to further research into how Plex video transcoding works. I discovered that Plex can’t do video transcoding inside a VM, which seems to be the reason the NUC didn’t work out for me, though I had tried Plex on the NUC directly, without a VM, and still had poor performance. It was good to know that I needed to ditch the VM if I wanted my Plex server to perform optimally; I was on the right track. I also found a forum thread discussing which NVIDIA GPUs have HEVC encoding support, to which the answer was simply “any ‘GTX’ class 600, 700, 900, or 1000 series NVIDIA GPU.” A bit further down, the thread breaks out which GTX models have which video transcoding feature sets, correlating directly with the generation of the GPU. There are some cheap NVIDIA GTX 1050s out there for around $150 USD at the time of writing, which would make my Plex server capable of hardware transcoding HEVC streams at up to 8K resolution. With all of that in mind, I decided I may try this route later on if I need an extra push of transcoding power beyond what Intel Quick Sync was going to give me, and I decided not to purchase a GPU with my new server.

Video transcoding generates heat on whatever hardware component is performing the job. Since I want this server to be online 24/7/365, with the possibility of eight or more streams going at once, all of which may require transcoding, I decided the Intel CPU would need some decent cooling. I want to avoid thermal throttling, and I want the CPU to have plenty of horsepower for the task of transcoding. I’ve never liquid cooled before, but I wanted to try it out, so I decided to get an AIO liquid cooler for the Intel i7-8700k, following tips from Paul’s Hardware and Bitwit on YouTube for best practices.

Anvil Server Component Close-up

After ordering all of the parts and building the new server, I began testing the hardware. I booted a live Linux OS and started running benchmarks. I was impressed, to say the least. The CPU never got too hot; even after running benchmarks for 15 minutes, I never saw it go over 45°C. All of the hardware was performing better than expected. I was ready to install the operating system and continue forward.
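For anyone wanting to do a similar burn-in, here’s a rough sketch of one way to load the CPU and watch temperatures, using stress-ng and lm_sensors (both in the Fedora repositories; not necessarily the exact benchmarks I ran):

```bash
# Install the tools (Fedora package names).
sudo dnf install -y stress-ng lm_sensors
sudo sensors-detect --auto

# Load every CPU thread for 15 minutes (--cpu 0 means "all CPUs").
stress-ng --cpu 0 --timeout 15m &

# Watch core/package temperatures while the load runs.
watch -n 5 sensors
```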

I’ve been planning a server build like this for a while, and one question everyone always asked was which OS I would run on it. A few choices I had considered were CentOS, Fedora, and even FreeNAS. I had been telling everyone that I would likely run CentOS 7 on it. Well, I tried that out for a couple of days and figured out pretty quickly that CentOS 7 is not ideal for what I’m doing. I ran into repositories with packages too old for my use case, and I would have had to resort to installing random third-party repositories to get what I wanted, some of which I didn’t trust nor wanted to build trust with. So I reinstalled and went with Fedora 29, which has newer packages and is more comfortable for me, and I didn’t run into the issues I was having with CentOS 7. The biggest problem was Deluge: Fedora has Deluge in its repositories, but CentOS doesn’t.
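The packaging difference is easy to see for yourself. A quick check, assuming stock Fedora 29 and CentOS 7 repositories:

```bash
# Fedora: Deluge is in the standard repositories.
dnf info deluge

# CentOS 7: the same query comes up empty without third-party repositories.
yum info deluge
```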

After getting the software installed and everything set up the way I wanted, I began transferring over the Plex library. I moved the physical disks to the new server and copied the cache, databases, and thumbnails from the VM to the new server. Plex started right up in its new home. The migration went better than I had expected. At that point I was hours into the task of migrating everything, so if something hadn’t worked, I would likely have had to spend several more hours getting things running. I was glad everything went smoothly.
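Roughly, that copy amounts to stopping Plex on both machines and syncing its data directory across. A sketch, assuming both ends use the stock Linux package layout under /var/lib/plexmediaserver (the hostname and paths are illustrative):

```bash
# Stop Plex so the databases aren't written to mid-copy.
sudo systemctl stop plexmediaserver

# Copy the Plex data directory (databases, cache, thumbnails) to the new server.
# --protect-args keeps the space in the path intact on the remote side.
sudo rsync -a --protect-args \
  "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/" \
  newserver:"/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/"

# On the new server: fix ownership, then start Plex back up.
sudo chown -R plex:plex /var/lib/plexmediaserver
sudo systemctl start plexmediaserver
```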

And now? My Plex server has no performance bottlenecks that I can find. It handles people streaming from it at all hours of the day. Deluge got a performance increase too. I am very happy with how the build turned out.

Next steps: installing a RAID array of hard disks and migrating the library over to it. Currently, the Plex library sits on 2x 5TB WD Black drives in striped LVM. There’s no redundancy: if a drive fails, the library is gone. That’s a risk I’m taking right now and one I will rectify in due time.
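For reference, a striped LVM volume like the one the library sits on now takes only a few commands. A sketch with illustrative device and volume group names (and, again, no redundancy):

```bash
# Turn both 5TB drives into LVM physical volumes and group them.
sudo pvcreate /dev/sdb /dev/sdc
sudo vgcreate plexvg /dev/sdb /dev/sdc

# One logical volume striped across both drives (-i 2, 64K stripe), formatted xfs.
sudo lvcreate -i 2 -I 64 -l 100%FREE -n library plexvg
sudo mkfs.xfs /dev/plexvg/library
sudo mount /dev/plexvg/library /mnt/library
```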

Here’s the PCPartPicker list: Home Hypervisor & Plex Server

How gnome-software/systemd software upgrade dropped me to the grub command line, and how I resolved it

Earlier I was messing around in one of my Fedora Linux virtual machines. I’ve had this particular install since Fedora 23 was the latest release. I’ve upgraded it from 23 to 24 and then again from 24 to 25. I completed the upgrade a few weeks back and decided to come back to it. Of course, since it had been a few weeks, after I loaded it up and logged in, GNOME was telling me there were updates available in the Software (gnome-software) application.

Brief warning: this article contains a ton of screenshots as I worked the problem.

Mindlessly, I opened the gnome-software utility and decided to update the system through there. The updates looked pretty benign, whatever. I clicked ‘Restart & Install’ and confirmed. The system rebooted and brought me to the systemd installing-updates screen, awesome. It finished and rebooted once more; however, I was immediately dumped to a grub command line. Uh oh.

So I took to the #fedora IRC channel on Freenode, explained what had happened, and got some interesting feedback.

First, we tried to identify the problem. We started by trying to figure out why the grub command line was coming up immediately at boot rather than the grub boot menu. I found that the grub config file was in fact empty, which makes it pretty straightforward why we’re met with the grub command line.

My initial thought was that the gnome-software utility told systemd to reboot for updates, but systemd didn’t install the updates correctly. There was probably a kernel update, and the grub config got caught in the mess. A little more trial and error with the fellas in #fedora and I was able to tell grub to boot the vmlinuz and initramfs files that were still visibly present.
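For the curious, booting by hand from the grub command line on a layout like this (everything on one LVM root, no separate /boot) looks roughly like the sketch below. The volume group/logical volume name (fedora/root) and the kernel version are placeholders:

```
# Load LVM support so grub can see the volume group, then point root at it.
grub> insmod lvm
grub> set root=(lvm/fedora-root)

# List /boot to see the exact vmlinuz/initramfs versions present, then boot them.
grub> ls /boot/
grub> linux /boot/vmlinuz-<version> root=/dev/mapper/fedora-root ro
grub> initrd /boot/initramfs-<version>.img
grub> boot
```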

Booting didn’t work though. A kernel panic came up while booting: Linux couldn’t mount the root filesystem.

Now I’m wondering if my hard disk somehow got corrupted during the updates. Clearly an update didn’t finish correctly; that much is obvious.

I head back to the #fedora IRC channel and suggest that I boot the Live CD iso instead of digging around any further; I was tired of messing around in grub’s command line. They agreed this would help diagnose the issue, so off I went.

I identified that the hard disk uses a single LVM partition; there are no other partitions in the disk’s Master Boot Record (MBR) partition table. The LVM partition holds only a single logical volume, called root, with the / mount point, formatted xfs. Pretty strange, and I don’t remember why I chose this layout so long ago.

I decide to remove the Live CD iso from the machine and reboot. What happened next, though, was pretty weird. I had looked away for a bit to check the IRC channel, and when I came back, the machine had booted!

What the hell. Alright, might as well browse around then. I logged in. I was even greeted with the message that updates were installed successfully!

I decided to open Terminal and verify the disk layout:
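A minimal way to check the layout, assuming the stock util-linux and LVM2 tools:

```bash
# Block devices, partitions, filesystems, and mount points in a tree view.
lsblk -f

# LVM physical volumes, volume groups, and logical volumes.
# Here the output confirmed a single PV, one VG, and one xfs-formatted "root" LV.
sudo pvs
sudo vgs
sudo lvs
```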

Yup. That’s an lvm-xfs partitioned disk. There is in fact no /boot partition, which means /boot lives on the single / partition. By this point, the people over in #fedora were pretty grateful grub has matured enough to understand booting from LVM partitions, but they were also at a loss as to what was going on.

After talking with the #fedora IRC channel some more, I agreed to figure out which combination of choices causes the machine to drop to the grub command line after upgrading.

I began going through each variable in my tests:

  • Default partitioning format
  • Custom partitioning format
  • Upgrading system using gnome-software
  • Upgrading system using dnf

The default partitioning format is just what you would expect: you load up the Live CD iso with a blank hard disk and install Fedora to that hard disk using the default partitioning that anaconda chooses for you. No modifications.

The custom partitioning format is a bit different. You start with a single LVM partition and then create a single xfs-formatted logical volume with the / mount point:
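Done by hand from a shell instead of through anaconda, that layout comes down to one LVM partition, one volume group, and a single xfs logical volume for /. A sketch with illustrative device and volume group names:

```bash
# One MBR partition spanning the disk, flagged for LVM.
sudo parted --script /dev/sda mklabel msdos mkpart primary 1MiB 100%
sudo parted --script /dev/sda set 1 lvm on

# One volume group, one logical volume, formatted xfs for the / mount point.
sudo pvcreate /dev/sda1
sudo vgcreate fedora /dev/sda1
sudo lvcreate -l 100%FREE -n root fedora
sudo mkfs.xfs /dev/fedora/root
```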

The people in the #fedora IRC channel and I both found that installing Fedora with the custom partitioning format and then upgrading using gnome-software drops Fedora to the grub command line every time, and that it’s only repairable by booting the Live CD iso again and mounting the filesystem so the xfs journal is replayed after the updates.
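The repair itself is small once you know what’s going on: from the Live CD, activate the volume group and mount the root filesystem so xfs replays its journal. A sketch (the volume group/logical volume names are illustrative):

```bash
# Make the installed system's LVM volumes visible to the live session.
sudo vgchange -ay

# Mounting the xfs root is enough to replay its journal; then unmount and reboot.
sudo mount /dev/mapper/fedora-root /mnt
sudo umount /mnt
sudo reboot
```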

Getting to that diagnosis took several hours, though, and after confirming with others I opened a bug report over at Red Hat: Bug 1416650 – Upgrading using gnome-software/systemd with lvm-xfs custom partitioning format causes grub boot failure

Hopefully I’ll get an answer back from the Fedora and/or Red Hat teams. Quite a head-scratcher at first, but clearly a bug, since this layout is supposed to be supported and nothing indicates otherwise.

Cheers.