Redirecting favicon.ico automatically

One of the most overlooked web URLs is /favicon.ico. For those unfamiliar, it’s the URL that web browsers and crawlers automatically request when visiting a website, and the icon served there is what represents the website in user interfaces and in references to its content.

While it’s possible to tell browsers and crawlers to use a different URL for a page’s icon, such as with a link element (for example, <link rel="icon" href="/favicon.png">) in the head of an HTML document, /favicon.ico is still requested before the client knows the correct icon URL. If the favicon.ico resource does not exist, that automatic request comes back with a 404 Not Found error.

That may not seem like a problem, since the client would eventually learn the correct URL, but consider a document that doesn’t specify an icon URL in its head, or a requested resource that isn’t HTML at all. Or perhaps the website’s designer made an icon that isn’t in the .ico format.

Using Nginx, it’s possible to redirect those automatic requests for favicon.ico to the correct icon URL. And if the HTML specifies its own icon, the client will still use that. The following configuration does just that:

# Send the automatic favicon.ico request to the real icon with a temporary (302) redirect.
location = /favicon.ico {
    return 302 $scheme://$host/favicon.png$is_args$args;
}

In the above, the assumption is that the real icon exists at favicon.png instead of in .ico format. When adding this to Nginx, be sure to change that path to the correct URL of your icon.
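To confirm the redirect behaves as intended, checking the response status and Location header is enough. Below is a minimal Python sketch that does so; the hostname example.com is a placeholder, so swap in your own site.

import http.client

HOST = "example.com"  # placeholder hostname; replace with your own site

# http.client does not follow redirects, so the 302 is visible directly.
conn = http.client.HTTPSConnection(HOST)
conn.request("HEAD", "/favicon.ico")
response = conn.getresponse()

print(response.status, response.reason)             # expect a 302
print("Location:", response.getheader("Location"))  # expect a URL ending in /favicon.png
conn.close()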

With the redirect in place, web browsers and crawlers in search of a website’s icon will be able to find it, even though nothing exists at favicon.ico, because they will follow the redirect to the correct place. And any document that specifies its own icon will still use the one it names.

Building a home server with gamer-style components

I’ve been running small servers as virtual machines on my main PC for quite some time, starting as far back as 2008. I expanded from a single virtual machine to several around 2014 and began designing a new home network of machines dedicated to specific functions or roles. That same year, I built my then-new gaming rig with 64 GB of RAM and two 1TB SSDs so I could play games while running all of those virtual machines in the background. This wasn’t scalable, however, and I quickly started running into issues.

The first issue was that my gaming rig could no longer have downtime. If a game I was playing caused a bluescreen, all of my home servers went down with it. And if one server started utilizing its SSD to the fullest, it would starve the other servers trying to use the same SSD.

I eased the problem by purchasing a couple of Intel NUCs, model NUC7i7BNH. I put a Samsung M.2 970 Pro 512GB SSD, a WD Black 1TB 7200RPM HDD, and 32GB of RAM into each NUC. Using a headless Fedora Linux install, I was able to run VirtualBox on both with ease, allowing me to migrate my virtual machines to the NUCs. The sacrifice in CPU power going from my gaming PC’s i7-5930k to the mobile i7-7567U wasn’t that noticeable, except for one virtual machine in particular: my Plex server.

The Plex server I was running as a virtual machine on my gaming PC, with direct access to physical disks for storage, was working great. But when I moved it to the NUC, it consumed all of the NUC’s performance and didn’t play content all that well either. I needed to find my Plex server a new home off of my gaming PC, but it couldn’t be the NUC.

For most people, a NUC will probably be fine to host a Plex server all on its own, if that’s all the NUC is doing. Unfortunately, I’m not so lucky. I have quite a large library (9 TB and growing), and that requires some hefty disks that I can’t attach to the NUC easily. Sure, I could externally attach the 2x 5TB disks I have now over USB 3.0, and I did try that, but the disks would unmount at random, which, as you can imagine, causes problems for Plex or any application trying to use them.

And so began the research into what the new machine would need in order to run Plex smoothly while letting the library grow past the 9 TB it’s already at.

Looking into what Plex requires for hardware video transcoding was simple: Plex published an article about it. An Intel CPU with Intel Quick Sync or an NVIDIA GPU is required for Plex to have the hardware transcoding capability; however, the article also points out that it requires a Plex Pass subscription. So I purchased a lifetime Plex Pass subscription, since let’s face it, I’ll have Plex around for some time to come, and it allows me to do power-user-esque things; it’s a no-brainer. I then set forth looking at current-generation Intel CPUs.

I stumbled upon the i7-8700k, which, with Intel Quick Sync support, looked to be ideal. The next part was figuring out what kind of RAM I needed. Research on RAM was a little less trivial, but I found a benchmark analysis that said the i7-8700k has a sweet spot with DDR4-3200 RAM, so I looked for DDR4-3200 with the lowest CAS latency I could find.

I then began asking myself: do I really need an NVIDIA GPU as well? That led me to further research into how Plex video transcoding works. I discovered that Plex can’t do hardware video transcoding inside a VM, which seems to be the reason the NUC didn’t work out for me, though I had tried Plex on the NUC directly, without a VM, and still had poor performance. It was good to know that I needed to ditch the VM if I wanted my Plex server to have optimal performance; I was on the right track.

I also discovered a forum thread discussing which NVIDIA GPUs have HEVC encoding support, to which the answer was simply “Any ‘GTX’ class 600, 700, 900, or 1000 series NVIDIA GPU.” A bit further down, you’ll find which GTX models have which feature sets for video transcoding, directly correlating to the generation of the GPU. There are some cheap NVIDIA GTX 1050s out there for around $150 USD at time of writing, which would make my Plex server capable of hardware transcoding HEVC-formatted streams at up to 8K resolution. With all of that in mind, I decided I may try this route later on if I need more transcoding power than Intel Quick Sync can give me. I decided not to purchase a GPU with my new server.

Video transcoding generates heat on whatever hardware component is doing the job. Since I want this server to be online 24/7/365, possibly with 8 or more streams going at once, all of which may require transcoding, I decided the Intel CPU would need some decent cooling. I want to avoid thermal throttling, and I want the CPU to have plenty of horsepower for the task of transcoding. I’ve never liquid cooled before, but I wanted to try it out, so I decided to get an AIO liquid cooler for the Intel i7-8700k, following tips from Paul’s Hardware and Bitwit on YouTube for best practices.

Anvil Server Component Close-up

After ordering all of the parts and building the new server, I began testing the hardware. I booted a live Linux OS and ran benchmarks. I was impressed, to say the least. The CPU never got too hot; even after running benchmarks for 15 minutes, I never saw it go over 45°C. All of the hardware was performing better than expected, so I was ready to install the operating system and continue forward.

I’ve been planning a server build like this for a while, and one question everyone always asked me was what OS I would choose to run on it. A few choices I had considered were CentOS, Fedora, and even FreeNAS. I had been telling everyone that I would likely run CentOS 7 on it. Well, I tried that out for a couple of days and figured out pretty quickly that CentOS 7 is not ideal for what I’m doing. I ran into repositories whose packages were too old for my use case, and I would have had to resort to installing random third-party repositories to get what I wanted, some of which I didn’t trust nor wanted to build trust with. So I reinstalled and went with Fedora 29, which has newer packages and is more comfortable to me, and I didn’t run into the issues I had with CentOS 7. The biggest problem there was Deluge: Fedora has Deluge in its repositories, but CentOS doesn’t.

After getting software installed and everything set up the way I wanted it, I began transferring over the Plex library. I moved the physical disks to the new server and copied the cache, databases, and thumbnails from the VM to the new server. Plex started right up in its new home. The migration went better than I had expected. At that point I was hours into the task of migrating everything, so if something hadn’t worked, I would likely have spent more hours getting things running. I was glad everything went smoothly.

And now? My Plex server has no performance bottlenecks that I can find. It handles people streaming from it at all hours of the day. Deluge got a performance increase too. I am very happy with how the build turned out.

Next steps: installing a RAID array of hard disks and migrating the library over to it. Currently, the Plex library sits on 2x 5TB WD Black drives in striped LVM. There’s no redundancy; if a drive fails, the library is gone. That’s a risk I’m taking right now, and one I will rectify in due time.

Here’s the PCPartPicker list: Home Hypervisor & Plex Server

My gaming rig

A bit of backstory.

In 2012, as a gift for my going away to college, my brother decided to purchase PC parts so I could build my own PC. He found a $500 combo upgrade deal from Newegg and made it mine. I built upon that initial machine as time went on, adding and replacing parts with newer and better ones, until its final build. In 2014, I had better income and could afford higher-end parts, so I donated that PC to my sister and built my current gaming rig, which has also been upgraded over these past 4 years.

My current gaming rig started out with much less storage space, a different set of monitors, a different mousepad, video card, and webcam. All of these have been upgraded over time.

In addition, I’ve now added an Intel NUC to my mix of networked devices, which I’ve been using as a utility server. I’ve offloaded nearly all of the virtual machines that my gaming rig has been hosting for 4+ years over to the Intel NUC. This has changed my PC’s performance and resource usage in a favorable direction, and it also means I should be able to take my PC offline more often.

There is one problem I want to tackle: the 2x 5 TB WD Black drives are currently unused. They were originally dedicated to a guest virtual machine running my Plex server, configured with LVM into a 10 TB virtual volume. The performance wasn’t that great, however. I transferred their contents to a single 6 TB HGST drive and got equal performance with less headache. I want to put the 5 TB WD Black drives in RAID 1 and then possibly use them for general-purpose backup storage, rather than exclusively for my Plex server.

And as for the Plex server, I want to move it off my gaming rig. I’m thinking about this in the short term and the long term. In the short term, I’m expanding the RAM in my Intel NUC from 16GB to 32GB so I can host more guests on it, which would allow me to transfer my Plex server over. I should then be able to move the HGST 6TB drive to the NUC, since the drive uses an external USB 3.0 interface. In the long term, I want to purchase a dedicated storage server, host it at a colocation, and use a RAID 10 array of at least 6×4 TB or 8×6 TB enterprise drives. This would then become the permanent home for my Plex server, giving it the best performance I can offer.
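For a rough sense of the usable space those drive options give: RAID 10 stripes across mirrored pairs, so about half the raw capacity is usable. Here’s a quick back-of-the-envelope sketch using the drive counts and sizes above and the library’s current 9 TB size:

# Back-of-the-envelope usable capacity for the RAID 10 options mentioned above.
# RAID 10 stripes across mirrored pairs, so roughly half the raw capacity is usable.
def raid10_usable_tb(drive_count: int, drive_tb: float) -> float:
    return drive_count * drive_tb / 2

library_tb = 9  # current Plex library size, still growing

for count, size in [(6, 4), (8, 6)]:
    usable = raid10_usable_tb(count, size)
    print(f"{count}x {size} TB -> ~{usable:.0f} TB usable, "
          f"~{usable - library_tb:.0f} TB of headroom over the current library")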

In the long term for my gaming rig, I do foresee a possible shift from Intel to AMD. The Ryzen processors have intrigued me. That would mean a new motherboard, which means adopting a new platform, and I’ve never used an AMD processor in my main machine before, either. There are comfort-level hurdles to overcome in that direction. I also won’t need 64 GB of RAM, since I’m moving the virtual machines to my Intel NUC, so if I build a new desktop system in the future I’ll probably drop down to 32 GB. It’s unfortunate that the memory modules from my gaming rig aren’t compatible with the NUC; on top of that, the NUC has an upper limit of 32 GB of memory. Sadness. Maybe I should purchase a second NUC? We’ll see.