Redirecting favicon.ico automatically

One of the most overlooked web URLs is favicon.ico. For those unfamiliar, it’s the URL that web browsers and crawlers automatically request when visiting a website, and the icon it returns is what represents that website in user interfaces and in references to its content.

While it’s possible to tell browsers and crawlers to use a different URL for a webpage’s icon, such as with a link element in the head of the document, favicon.ico is still requested before the client learns the correct icon URL. If the favicon.ico resource does not exist, that automatic request comes back as a 404 Not Found.
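
For reference, declaring the icon in a document’s head looks something like this (assuming the icon lives at /favicon.png):

<link rel="icon" type="image/png" href="/favicon.png">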

That may not seem like a problem, since the client is eventually made aware of the correct URL, but consider a document that does not specify the icon URL in its head, or a requested resource that isn’t even HTML. Or perhaps the website designer made an icon that isn’t in the .ico format at all.

Using Nginx, it is possible to redirect those automatic favicon.ico requests to the correct icon URL, while any document that specifies its own icon in HTML continues to use it. The following configuration does just that.

location = /favicon.ico {
    return 302 $scheme://$host/favicon.png$is_args$args;
}

The above assumes the real icon exists at favicon.png rather than in .ico format. When adding this to Nginx, change that path to the correct URL of your icon.

With the above in place, now web browsers and crawlers in search of a website’s icon will be able to find it, even if it doesn’t exist at favicon.ico, because they will follow the redirect to the correct place. And any document that specifies its own icon will still keep its icon.

My Linux Cheatsheet

One of the best tools in my arsenal of knowledge about Linux and *nix systems is my cheatsheet.

It’s been curated over many years and I’ve memorized most of it. On occasion, I still need to reference it for one thing or another, either for hobbies or while at work professionally. It’s also been the topic of discussion a few times with my peers.

What is a Linux Cheatsheet?

Put simply, a Linux cheatsheet is a file listing quick commands that help the user perform common tasks on Linux.

One example: the Linux command line doesn’t make it obvious how to search and replace a string across a bunch of files recursively. Doing so is useful when working inside a project repository and needing to update every place in the code where a certain word or phrase is used. This is where a cheatsheet helps: it provides the quick command that does just that, without having to look it up on the Internet and sift through results that may not be exactly what’s needed.
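
As an illustration (not necessarily the exact entry from my cheatsheet, and the strings here are placeholders), a recursive search-and-replace can be chained together from grep and sed:

# replace every occurrence of old_string with new_string in files under the current directory
grep -rl 'old_string' . | xargs sed -i 's/old_string/new_string/g'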

The Cheatsheet

My version of a Linux cheatsheet exists on a Gist webpage on my GitHub account. It’s publicly available: https://git.io/fhp8N

Conclusion

So the next time you hunt down a quick command, don’t let it slip away. Put it on a cheatsheet. It’ll save you time the next time you need it.

Why everyone should be boycotting Amazon

Amazon’s employees work in sweatshop conditions, unable to take restroom breaks or sick leave without fear of reprimand or termination. Amazon does not pay a cent in federal taxes, taking from its communities with no financial return to them. Amazon’s privacy issues include selling email addresses to advertisers when a user creates an account, and opting out does not stop advertisers from continuing to spam the user’s email address. And finally, Amazon has issues with account lockouts and cancelled orders, even when the owner of the account has authorized the order over phone support. Everyone should be boycotting Amazon.

There are a few reasons I boycott Amazon when and where I can. Stay with me here and you’ll find out why you should too.

Account lockout issues

Let’s go back to 2018 for a moment. I placed an order in the evening, went to sleep, and the next morning the order was cancelled. I tried to place the order again over the weekend, and instead my account was locked for “unauthorized activity” and the order was cancelled again. So I called Amazon Support to ask what was going on, proved my identity and that I own my account, and they explained that my account was locked for the unauthorized activity, which I explained was completely and utterly authorized. They unlocked my account and placed my order again with an internal note saying that the order was authorized over phone support with the customer. I received a tracking email for the new order. The next day, the order was cancelled yet again, and when I looked it up, it said it was cancelled because of unauthorized activity. What this leads me to believe is that Amazon Support’s internal note won’t protect an order from being cancelled in this scenario. The entire time this was going on, I was using the same IP address I had used to access my account for the previous several months, with no VPN or IP-changing services. While this issue with Amazon was underway, I placed a separate order at Newegg, and that order had already begun shipping before I could even get my Amazon order to complete properly.

An account lockout and several failed attempts at ordering, while an order at another e-commerce website went through without issue, would leave a sour taste for almost anyone, but perhaps not enough on its own to cause a complete boycott. There are, in fact, many other reasons.

Political issues

  • Employees skip bathroom breaks to keep their jobs, as reported by The Verge and The Seattle Times. They fear missing their quotas in fulfillment centers and Amazon leveraging that to reprimand or terminate them. Restroom breaks take a considerable amount of time because employees have to walk a long distance to find a restroom. If an employee falls ill and visits the doctor, a doctor’s note won’t prevent a meeting about misconduct for being absent from work. All the while, Amazon denies this is happening, despite employees going public about the issues.
  • There are past employees who describe the Amazon Warehouse as a sweatshop on Glassdoor.
  • Amazon will pay nothing in taxes despite billions in profits, as reported by Fortune. This has sparked outrage from politicians, including Bernie Sanders and Donald Trump. Amazon was going to build a new headquarters in New York, until New Yorkers learned of the tax cuts Amazon would receive, sparking further outrage; that outrage led Amazon to back out of the deal. Furthermore, there is an ongoing movement to charge Amazon past-due sales tax. When Texas sent Amazon a large tax bill, Amazon argued that Texas had no right to charge it, then closed its Texas-based warehouses and cancelled plans to build more. What does this all mean? Less funding for communities, cities, and by extension, consumers. It’s predatory. Amazon takes from its communities but does not reinvest in those communities through taxes.
  • Traditional retailers are losing business to the mega-corporation, forcing those retailers to either close shop or accept lower profits. When a retailer closes, its customers have to shop elsewhere. By taking out the smaller businesses, Amazon becomes larger and more difficult to compete with. At the rate this is going, it would surprise no one if Amazon became a monopoly in the near future.

Perhaps this doom and gloom is a bit too serious. Let’s take a step back. In the United States, employees are free to pick and choose which business they work for. Sure, there may be circumstances that prevent an individual from relocating or taking a higher-paying job, but there will always be some work available in every city or small town. If you take this stance, Amazon appears less evil, less predatory, and more free market. Ignoring Amazon and choosing to work somewhere that treats you better won’t fix the long-term issues, however. Amazon will continue to pay nothing in taxes. Amazon will continue to bully out smaller businesses, eating the competition and making it more difficult to compete as time progresses.

By not investing in the local communities it takes from, Amazon is securing a future where those communities are worse off than if Amazon had never entered them in the first place.

Privacy issues

Several years ago, I noticed that the email address I registered with Amazon.com was being sold to Chinese spammers. I know this because it was a unique email address I only used in one place: Amazon.com.

The only way a spammer could have known the email address I used with Amazon.com is if Amazon gave it to them, or, in a much less likely scenario, if the address was “hacked” by having its information stolen from the email provider or intercepted while a message was in transit across the Internet. I’m inclined to believe the much more likely scenario: it was given to spammers by Amazon themselves.

Once this email address was in the hands of the Chinese spammers, it has never seen a week without spam. It now receives requests to review Amazon products, with 1000+ pixel wide/tall images embedded and loose “Engrish” sentences and paragraphs. There are no unsubscribe links, or when there are, they simply point to another email address that is obviously not legitimate, or to a dead or non-functioning webpage. And who knows whether clicking that link and following its instructions would actually unsubscribe the address, or just subscribe it to more spam. These websites sit on the most obscure hosting providers, in countries ranging from Africa to Asia. It’s impossible to stop this kind of spam because these hosting providers simply do not care about the hundreds of thousands of abuse reports they receive; they will do nothing. You can report “up the chain” all the way to IANA (the Internet Assigned Numbers Authority) by reporting their IP address space for abuse, but that won’t guarantee an end. The email messages are several kilobytes in size due to the embedded images, which causes some abuse report forms to break the message into parts, creating additional confusion in the report or, at worst, inaction due to incomplete data (I’m staring at you, Amazon AWS: your abuse form has a 4 KB limit that you can’t even use all of without receiving an error, and many email headers are over 2 KB on their own).

Many months after that email address was discovered by Chinese spammers scouring Amazon.com for addresses to spam, I discovered Amazon.com has a user profile page that allows you to publish your email address. I never asked to publish my email address when I created my account; it was already up there, in public, for anyone to view. I quickly put a stop to that, but it was definitely not an opt-in feature, it was opt-out, which is ridiculous, and by then it was too late. When you create an Amazon account, your email address is automatically displayed in cleartext to anyone who wishes to spam you, and you have to take action yourself to hide it, but of course they don’t tell you that unless you read the Terms of Service.

What solution could be put in place to prevent such blatant abuse of privacy? As it turns out, the EU already has strong privacy protection law. The General Data Protection Regulation (GDPR), which took effect in 2018, requires any company or entity storing data in the EU or about EU citizens to protect that data, and to explain to the user in plain terms, not legalese, how it plans to use that data and when that data is disclosed. Any company found violating the law can be fined up to 4% of its annual worldwide turnover (or €20 million, whichever is greater). For a multi-billion-dollar company, that is an expensive fine to pay, and it’s part of why the GDPR is strong regulation.

The United States could follow the EU’s lead and adopt a law like the GDPR. It would require Amazon to start protecting user privacy better than it does now, and if it doesn’t, Amazon would be forced to pay large fines until it does. A GDPR-style law is what we need in the United States to prevent companies like Facebook from abusing privacy so openly, selling user data to advertisers without users being made aware (again, without legalese) of the intention. That issue with Facebook is out of scope for this article, though, so I won’t go into it.

Boycott Amazon

These reasons above are why I wholeheartedly feel everyone should be boycotting Amazon at every street corner, online, everywhere, anytime.

But hey, this is all just my opinion, with some cited facts. You’re welcome to your opinion.

Building a home server with gamer-style components

I’ve been running small servers as virtual machines on my main PC for quite some time, starting as far back as 2008. Around 2014, I expanded from a single virtual machine to several and began designing a new home network of machines dedicated to specific functions or roles. That same year, I built my then-new gaming rig with 64 GB of RAM and two 1 TB SSDs so I could play games while running all of those virtual machines in the background. This wasn’t scalable, however, and I quickly started running into issues.

The first issue was that my gaming rig could no longer have downtime. If a game I was playing caused a bluescreen, all of my home servers went down with it. And if one server started utilizing its SSD to the fullest, it starved the other servers sharing that same SSD of I/O.

I eased the problem by purchasing a couple of Intel NUCs, model NUC7i7BNH. Into each NUC I put a Samsung 970 Pro 512 GB M.2 SSD, a WD Black 1 TB 7200 RPM HDD, and 32 GB of RAM. Using a headless Fedora Linux install, I ran VirtualBox on both with ease and migrated my virtual machines to the NUCs. The sacrifice in CPU power going from my gaming PC’s i7-5930K to the mobile i7-7567U wasn’t that noticeable, except for one virtual machine in particular: my Plex server.
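
For reference, running a guest headless from the shell looks roughly like this (the VM name is a placeholder):

VBoxManage list vms                                 # list the registered guests
VBoxManage startvm "utility-vm" --type headless     # boot a guest with no GUI attached
VBoxManage controlvm "utility-vm" acpipowerbutton   # ask a running guest to shut down gracefully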

The Plex server I was running as a virtual machine on my gaming PC, with direct access to physical disks for storage, worked great. But when I moved it to the NUC, it completely ate the NUC’s performance and didn’t play content all that well either. I needed to find my Plex server a new home off my gaming PC, and it couldn’t be the NUC.

For most people, a NUC will probably be fine to host a Plex server all on its own if that’s all the NUC is doing. Unfortunately, I’m not so lucky. I have quite a large library (9 TB and growing) and that requires some hefty disks that I can’t attach to the NUC easily; sure, I could use USB 3.0, and externally attach the 2x 5TB disks I have now, and sure enough I tried that, but I had issues with the disks unmounting at random, which you could imagine causes issues for Plex or any application trying to use those disks.

And so began the research into what requirements the new machine would need to meet to run Plex smoothly while allowing the library to grow past the 9 TB it’s already at.

Looking into what Plex requires for hardware video transcoding was simple: Plex published an article about it. An Intel CPU with Intel Quick Sync or an NVIDIA GPU is required for Plex to have hardware transcoding capability; however, the article also points out that it requires a Plex Pass subscription. So I purchased a lifetime Plex Pass subscription, since let’s face it, I’ll have Plex around for some time to come and it lets me do power-user-esque things; it’s a no-brainer. I then set forth looking at current-generation Intel CPUs.
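
As a quick sanity check, here is roughly how to confirm Linux exposes an Intel GPU render node, which is what Quick Sync hardware transcoding ultimately relies on (assuming the i915 driver is loaded):

lspci | grep -i vga      # confirm the Intel integrated GPU is visible on the PCI bus
ls -l /dev/dri/          # a renderD128 device node means a GPU render device is available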

I stumbled upon the i7-8700K, which, with Intel Quick Sync support, looked to be ideal. The next part was figuring out what kind of RAM I needed. That research was a little less trivial, but I found a benchmark analysis showing the i7-8700K hits a sweet spot with DDR4-3200, so I looked for that with the lowest CAS latency and made my way.

I then began asking myself: do I really need an NVIDIA GPU as well? That led me to further research into how Plex video transcoding works. I discovered that Plex can’t do hardware video transcoding inside a VM, which seems to be the reason the NUC didn’t work out for me, though I had also tried Plex on the NUC directly, without a VM, and still had poor performance. It was good to know that I needed to ditch the VM if I wanted my Plex server to perform optimally; I was on the right track. I found a forum thread discussing which NVIDIA GPUs have HEVC encoding support, to which the answer was simply “any ‘GTX’ class 600, 700, 900, or 1000 series NVIDIA GPU.” A bit further down, you’ll find which GTX models have which video transcoding feature sets, directly correlating to the generation of the GPU. There are cheap NVIDIA GTX 1050s out there for around $150 USD at the time of writing, which would let my Plex server hardware-transcode HEVC streams at up to 8K resolution. With all of that in mind, I decided I may try this route later on if I need an extra push of transcoding power beyond what Intel Quick Sync gives me. I decided not to purchase a GPU with my new server.

Video transcoding generates heat on whichever hardware component is doing the job. Since I want this server online 24/7/365, with the possibility of 8 or more streams going at once, all of which may require transcoding, I decided the Intel CPU would need decent cooling. I want to avoid thermal throttling and I want the CPU to have plenty of horsepower for the task of transcoding. I’ve never liquid cooled before, but I wanted to try it out, so I decided to get an AIO liquid cooler for the Intel i7-8700K, following tips from Paul’s Hardware and Bitwit on YouTube for best practices.

Anvil Server Component Close-up

After ordering all of the parts and building the new server, I began testing the hardware. I ran a live Linux OS and started running benchmarks. I was impressed, to say the least: even after running benchmarks for 15 minutes, the CPU never went over 45ºC. All of the hardware was performing better than expected. I was ready to install the operating system and continue forward.

I’ve been planning a server build like this for a while, and one question everyone always asked was what OS I would run on it. A few choices I had considered were CentOS, Fedora, and even FreeNAS. I had been telling everyone that I would likely run CentOS 7. Well, I tried that out for a couple of days and figured out pretty quickly that CentOS 7 is not ideal for what I’m doing. I ran into repositories whose packages were too old for my use case, and I would have had to resort to installing random third-party repositories, some of which I didn’t trust nor wanted to build trust with. So I reinstalled and went with Fedora 29, which has newer packages and is more comfortable to me, and I didn’t run into the issues I had with CentOS 7. The biggest problem there was Deluge: Fedora has Deluge in its repositories, but CentOS doesn’t.

After getting the software installed and everything set up the way I wanted, I began transferring over the Plex library. I moved the physical disks to the new server and copied the cache, databases, and thumbnails from the VM to the new server. Plex started right up in its new home. The migration went better than I had expected; at that point I was hours into the task, so if something hadn’t worked, I would likely have spent more hours getting things running. I was glad everything went smoothly.
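
For anyone attempting the same migration, the copy itself can be as simple as the sketch below. The paths assume the standard Linux package install of Plex, the old hostname is a placeholder, and Plex should be stopped on both machines before copying.

sudo systemctl stop plexmediaserver
rsync -a -s --info=progress2 \
    'old-host:/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/' \
    '/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/'
sudo chown -R plex:plex '/var/lib/plexmediaserver/Library/Application Support/'
sudo systemctl start plexmediaserver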

And now? My Plex server has no performance bottlenecks that I can find. It handles people streaming from it at all hours of the day. Deluge got a performance increase too. I am very happy with how the build turned out.

Next steps: installing a RAID array of hard disks and migrating the library over to it. Currently, the Plex library sits on 2x 5TB WD Black drives in striped LVM. There’s no redundancy: if a drive fails, the library is gone. That’s a risk I’m taking right now and one I will rectify in due time.
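
As a rough sketch, one possible route is LVM’s built-in RAID1 once the disks for the array are in place (device names, volume names, and the size below are placeholders, and this destroys whatever is already on those disks):

pvcreate /dev/sdb /dev/sdc
vgcreate vg_media /dev/sdb /dev/sdc
lvcreate --type raid1 -m 1 -L 4.5T -n library vg_media   # mirrored logical volume across both disks
mkfs.xfs /dev/vg_media/library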

Here’s the PCPartPicker list: Home Hypervisor & Plex Server

My gaming rig

A bit of story.

In 2012, my brother decided that as a gift for going away to college, he would purchase PC parts for me to build my own PC from. He found a $500 combo deal upgrade package on Newegg and made it mine. I grew that initial build as time went on, adding and replacing parts with newer and better ones, until its final form. In 2014, I had better income and could afford higher-end parts, so I donated that PC to my sister and built my current gaming rig, which has also been upgraded over the past 4 years.

The current gaming rig I have started out with much less storage space, a different set of monitors, a different mousepad, video card, and webcam. All of these have been upgraded over time.

In addition, I’ve now added an Intel NUC to my mix of networked devices, which I’ve been using as a utility server. I’ve offloaded nearly all of the virtual machines that my gaming rig has been hosting for 4+ years over to the Intel NUC. This has changed my PC’s performance and resource usage in a favorable direction, and it also means I should be able to take my PC offline more often.

There is one problem I want to tackle: the 2x 5 TB WD Black drives are currently unused. They were originally given exclusively to a guest virtual machine running my Plex server and configured with LVM as a single 10 TB volume. The performance wasn’t that great, however; I transferred their contents to a single 6 TB HGST drive and got equal performance with less headache. I want to RAID 1 the 5 TB WD Black drives and possibly use them for general-purpose backup storage rather than exclusively for my Plex server.

And as for the Plex server, I want to move it off my gaming rig. I’m looking at short-term and long-term plans right now. In the short term, I’m expanding the RAM in my Intel NUC from 16GB to 32GB so I can host more guests on it, which would let me transfer my Plex server over. I should then be able to move the 6TB HGST drive to the NUC, since the drive uses an external USB 3.0 interface. In the long term, I want to purchase a dedicated storage server, host it at a colocation facility, and use a RAID 10 array of at least 6×4 TB or 8×6 TB enterprise drives. That would become the permanent home for my Plex server, giving it the maximum efficiency I can offer.

In the long term for my gaming rig, I foresee a possible shift from Intel to AMD. The Ryzen processors have intrigued me. That would mean a new motherboard, which means adopting a new platform, and I’ve never used an AMD processor in my main machine before; there are comfort-level hurdles to overcome in that direction. I also won’t need 64 GB of RAM anymore, since I’m moving the virtual machines to my Intel NUC, so if I build a new desktop system in the future, I’ll probably scale back to 32 GB. It’s unfortunate that the memory modules in my gaming rig and the NUC aren’t compatible, and the NUC has an upper limit of 32 GB of memory anyway. Sadness. Maybe I should purchase a second NUC? We’ll see.

My custom PC speaker setup

I moved to another apartment unit in my city in March 2018 and neglected to unpack and set up my computer speakers until earlier this week. What a difference it makes. In three months I had pretty much forgotten what it was like to have them; it turns out I missed my setup and didn’t know it.

This custom speaker setup actually used to be recommended on the r/audiophile subreddit. It’s a basic starter setup pieced together from a few separate components.

This setup allows me to connect my PC’s rear stereo output to the Lepai amplifier, which then drives the Micca speakers.

The sound is amazing. I have bass and treble control through the amplifier, which is something I had missed living in my new apartment for the last three months. The Micca speakers are more than suitable for listening to music, no subwoofer required as these Miccas have a woofer built-in that does the job. In terms of volume, I feel like moving the knob more than half way is too loud for my environment, so there’s plenty of upper volume range.

I’ve now had this speaker setup in three different apartments in total. This custom set easily beats any of the all-in-one speaker setups I’ve tried.

A letter to AT&T

Today, I share with everyone an email message I sent to my apartment management about the level of dissatisfaction I have with the AT&T Internet service here.

It’s a bit of a read, but worthwhile I promise.

It was originally sent on March 19, 2018.

Hi,

After being at the [redacted] Apartments for a week, I wanted to bring to light my dissatisfaction with the Internet service available at this property. I’d appreciate it if this was taken into consideration and forwarded to the appropriate AT&T employees.

This property exclusively provides AT&T Gigapower for Internet service. I’m extremely unsatisfied with the service provided, specifically because of the egregious lack of technical responsibility on AT&T’s part. AT&T provided me with a BGW210 gateway device for connectivity, which exhibits the problems described below. These problems reminded my brother and me of similar problems we had while living with our parents using AT&T U-verse in [redacted], Texas.

Please take the tone of my points below as direct, with reasoned explanations; they are intended to convey my frustration.

  • AT&T field installation technicians relegated technical support questions to online and phone support. This seems like a lack of ownership of responsibility to me; field technicians should be able to handle technical questions.
  • AT&T online chat services are poorly configured. Consider the difference between http://chatnow.att.com and https://chatnow.att.com, the former displaying a web server setup page instead of the AT&T webpage. As a systems administrator myself, the difference between the data served indicates a lack of thoroughness and professionalism.
  • AT&T online tech support provided incorrect answers and recommendations; I was able to trivially prove this using simple network diagnostics tools such as ping and traceroute.
  • AT&T service regularly has latency/ping spikes indicating poor connectivity or service provisioning. This is easily visible using connection monitoring software such as Smokeping or Pingplotter.
  • AT&T does not provide IPv6 connectivity to residences, thereby contributing to IPv4 address exhaustion. It did not provide IPv6 service to my parents’ [Texas] address on AT&T U-verse despite repeated requests over the several years I lived there. AT&T still does not provide IPv6 even to [redacted] Apartment Homes’ brand new apartment complex; the complex is so new that most online address verification services are unable to verify the address. When asked about this, tech support stated that IPv6 was disabled and that I would need to speak to “advanced tech support”, then later in the conversation told me that the equipment in the area isn’t configured for IPv6 yet since they still have IPv4 addresses available. This reeks of corrupt profit-seeking instead of creating and adopting solutions for customers. IPv6 has existed since the late 1990s; it is over 20 years old, it isn’t new, and it should be freely available.
  • AT&T’s BGW210 gateway device is limited to 8,192 NAT table sessions, creating a bottleneck on the number of IPv4 connections that can be open to another device, whether to the Internet or exclusively on the local network. This bottleneck would not exist if AT&T permitted the use of customer-owned third-party routers and modems, such as pfSense, where the NAT table limit can be upwards of 300,000 sessions, or if AT&T enabled IPv6 at this location. Additionally, as an AT&T Gigapower customer with a 1000 Mbps subscription, this bottleneck is effectively a throttle on the connection, causing connectivity issues with web browsing and online games, as I already experienced in the first two nights at [redacted] Apartment Homes.
  • Drawing on our U-verse experience, the first thing many technicians would do when encountering a problem was replace the modem that AT&T provided and required. It was replaced so often, without solving the service-level problems, that the account had been notated (paraphrasing from memory) “do not replace the modem, it is not a problem with the modem”.
  • With U-verse, we had service issues so often that my brother ended up in a conference call with the field technician manager, the local service manager, and the regional manager to troubleshoot the problem. If I recall correctly, it turned out to be a configuration issue on AT&T’s side.

In all, AT&T does not provide service at the level of its competitors. Other providers allow true bridged mode, allow customers to use their own third-party routers and modems, and have full IPv6 connectivity support. I am disappointed that AT&T does not appear to offer the level of technical service I hope for, even after more than a decade of interactions with them. I wish to protest AT&T being the exclusive Internet provider at [redacted] Apartment Homes, and ask that property management consider and prioritize adding competing providers such as Verizon Fios as soon as possible.

Thanks,
Carl

The only update to occur after this email was AT&T provisioning IPv6 to my BGW210 gateway device. All other issues still persist, including the NAT issue; I found out after IPv6 was provisioned that IPv6 still uses sessions in the BGW210 NAT table.

AT&T is arguably worse than Comcast.

How gnome-software/systemd software upgrade dropped me to the grub command line, and how I resolved it

Earlier I was messing around in one of my Fedora Linux virtual machines. I’ve had this particular install since Fedora 23 was the latest release. I’ve upgraded it from 23 to 24 and then again from 24 to 25. I completed the upgrade a few weeks back and decided to come back to it. Of course, since it had been a few weeks, after I loaded it up and logged in, GNOME was telling me there were updates available in the Software (gnome-software) application.

Brief warning: this article contains a ton of screenshots as I worked the problem.

Mindlessly, I opened the gnome-software utility and decided to update the system through there. The updates looked pretty benign, whatever. I clicked ‘Restart & Install’ and confirmed. The system rebooted and brought me to the systemd installing-updates prompt, awesome. It finished and rebooted once more; however, I was immediately dumped to a grub command line. Uh oh.

So I took to the #fedora IRC channel on Freenode, explained what had happened, and got some interesting feedback.

First, we tried to identify the problem, starting with why the grub command line was coming up immediately at boot rather than the grub boot menu. I found that the grub config file was in fact empty, which makes it pretty straightforward why we were met with the grub command line.

My initial thought was that gnome-software told systemd to reboot for updates, but systemd didn’t install the updates correctly. There was probably a kernel update, and the grub config got caught in the mess. After a little more trial and error with the fellas in #fedora, I was able to tell grub to boot the vmlinuz and initramfs files that were still visibly present.
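
If you ever end up in the same spot, booting by hand from the grub prompt looks roughly like this (the volume group name and kernel version below are placeholders; grub’s ls and tab completion will show the real names):

grub> ls                                              # list devices; the root LV shows up as an lvm/ entry
grub> set root=(lvm/fedora-root)
grub> linux /boot/vmlinuz-<version> root=/dev/mapper/fedora-root ro
grub> initrd /boot/initramfs-<version>.img
grub> boot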

Booting didn’t work though. A kernel panic came up while booting: Linux couldn’t mount the root filesystem.

Now I was wondering if my hard disk was somehow corrupted after the updates. An update didn’t finish correctly, that much was obvious.

I headed back to the #fedora IRC channel and suggested booting the Live CD ISO to dig around further; I was tired of messing around in grub’s command line. They agreed this would help diagnose the issue, so off I went.

I identified that the hard disk uses a single LVM partition; there are no other partitions in the disk’s Master Boot Record (MBR). The LVM partition holds a single logical volume called root, mounted at / and formatted xfs. Pretty strange, and I don’t remember why I chose this layout so long ago.

I decided to remove the Live CD ISO from the machine and reboot. What happened next, though, was pretty weird. I had looked away for a bit to check the IRC channel, and when I came back, the machine had booted!

What the hell. Alright, so might as well browse around then. I logged in. I was even greeted with the message that the updates were installed successfully!

I decided to open Terminal and verify the disk layout:
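
(Roughly, the check below is what that amounts to; output omitted, and these are the usual commands rather than necessarily the exact ones from the screenshots.)

lsblk -f                             # block devices, partitions, and the filesystems on them
sudo pvs && sudo vgs && sudo lvs     # LVM physical volumes, volume groups, and logical volumes
df -Th /                             # confirm / is mounted from the LVM root volume and formatted xfs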

Yup, that’s an lvm-xfs partitioned disk. There is in fact no /boot partition, which means /boot is just a directory on the single / partition. By this point, the people over in #fedora were pretty glad grub has matured enough to understand booting from LVM partitions, but they were also at a loss for words about what was going on.

After talking with the #fedora IRC channel some more, I agreed to figure out what state causes the machine to drop to the grub command line after upgrading.

I began going through each variable in my tests:

  • Default partitioning format
  • Custom partitioning format
  • Upgrading system using gnome-software
  • Upgrading system using dnf

The default partitioning format is just what you would expect: you load up the Live CD ISO with a blank hard disk and install Fedora using the default partitioning that anaconda chooses for you. No modifications.

The custom partitioning format is a bit different: you create a single LVM partition and then, inside it, an xfs-formatted logical volume with the / mount point.
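
For reference, building that same end layout by hand from a shell would look roughly like this (the device name is a placeholder; anaconda does its own equivalent internally):

parted -s /dev/sda mklabel msdos mkpart primary 1MiB 100%   # one partition spanning the disk
parted -s /dev/sda set 1 lvm on
pvcreate /dev/sda1
vgcreate fedora /dev/sda1
lvcreate -l 100%FREE -n root fedora
mkfs.xfs /dev/fedora/root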

The people in the #fedora IRC and I both found that installing Fedora with the custom partitioning format and upgrading using gnome-software drops Fedora to the grub command line every time, and that it is only repairable by booting the Live CD ISO again and mounting the filesystem so that the xfs journal is replayed after the updates.
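
The repair itself, from the Live CD, boils down to a few commands (the volume group name here is assumed to be fedora):

sudo vgchange -ay                      # activate the LVM volume group
sudo mount /dev/fedora/root /mnt       # mounting the xfs root replays its journal
sudo umount /mnt
sudo reboot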

Getting to that diagnosis, though, took several hours, and after confirming with others, I opened a bug report over at Red Hat: Bug 1416650 – Upgrading using gnome-software/systemd with lvm-xfs custom partitioning format causes grub boot failure

Hopefully I get an answer back from the Fedora and/or Red Hat teams. Quite a head-scratcher at first, but clearly a bug, since this layout is supposed to be supported and nothing indicates otherwise.

Cheers.

I got my car back today

I woke up at 5pm Sunday evening, thinking alright, I’ll spend the night chatting with friends and playing games, no big deal. That night, I tried to go to sleep at 3am and just sort of lay there in my bed until 7am rolled around, when I decided I wasn’t going to sleep for one hour until my alarm went off. Insomnia is terrible. So I decided that since it was Monday, before going in to work at 9am, I’d go pick up my car, because it was ready to be picked up, so why not?

Well, let me tell you. What a day today has been.

A little background: I drive a 2014 Honda Civic, and I have absolutely loved the car since the day I picked it out and drove it off the lot. Fast forward two years, and someone hit my driver-side rear bumper on my way to work. The damage wasn’t too bad: the car still drove and functioned as it should, and it didn’t even leak when wet, but it still needed to be repaired, since there are payments left on it and there was an insurance claim. I took it in to my local Honda Collision Center and got set up with a 2017 Toyota Corolla rental.

The Corolla only had around 6,200 miles on it when they handed me the keys; my Civic had about 21,000 miles. There are several reasons I chose my Civic over other entry-level sedans like the Corolla. Most of them are preference, but a few things stood out immediately, even on the 2017 Corolla compared to my 2014 Civic:

  • The headlights. My Civic has halogen bulbs; the Corolla has LED bulbs. This was a huge difference I wasn’t immediately used to but came to enjoy as I drove the rental for a while. LED bulbs are actually pretty neat.
  • The engine, accelerator, and brakes. They’re just not the same; I don’t expect them to be, they’re two different cars after all. But accelerating from a stop at a red light up to 35 mph, the Corolla would easily rev to 3500 rpm, while my Civic only needs about 1500 rpm for the same task.
  • The rear-view back-up camera. My 2014 Civic’s camera has a HUD overlay that turns with the steering wheel, making it easy to maneuver in reverse. The Corolla’s camera does not. In addition, the Corolla’s camera was very foggy even after wiping and cleaning it, while my Civic’s has a crystal-clear picture.

With that out of the way: at 7am I got dressed for the day and headed to the collision center to pick up my car and drop off the rental. They had the car out to me within 5 minutes, pretty speedy, no issue at all. I immediately noticed only one problem: on the driver-side rear seatbelt, the plastic trim behind the seatbelt wasn’t clipped into the interior wall. They took it back, had it fixed within 5 minutes, and brought it back out to me. I gave it a good look over; I’ve even been described as conscientious before. No other problems that I could see. The interior smelled like fresh paint, of course. The paint job looked fresh, and I could tell they waxed it and put on a new clear coat. I signed the paperwork confirming that I had received the car. Cool, I have my car back! But wait, I still have to turn in the rental…

[This post was never fully drafted. It ends there. Sorry.]