Replacing your server isn’t always necessary after a boot failure

First, a little background context before I dive into how I recovered the virtual machine from boot failure…

  • The host has three physical disks mapped to the virtual machine which I’ll be recovering.
    • One disk is a 256 GB SSD that is unused/unallocated; the other two are 5 TB HDDs paired together via LVM to form a single 10 TB volume inside the guest.
  • The host runs Windows 7 64-bit with VirtualBox 5.1.10, and the guest runs Fedora 25 64-bit on Linux kernel 4.8.15-300.fc25.x86_64.
  • The guest uses systemd and is referred to as ‘seedbox’ in my logs and snapshots.

Earlier this morning, the host completely froze, without even so much as a BSOD. There is a hardware problem with the host’s RAM that I have yet to fully diagnose. I ended up pushing the reset button on the chassis, and all running guests were immediately aborted by the power reset.

The host system rebooted just fine. When I started all of the guests back up, the ‘seedbox’ guest dropped into systemd emergency mode during boot. Uh oh.
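When systemd drops to emergency mode, the journal usually says exactly which unit gave up. A general diagnostic sketch from the emergency shell (not my exact transcript):

```
# List the units that failed during this boot:
systemctl --failed

# Read the full log for the current boot, with extra explanatory text:
journalctl -xb
```

In a case like this one, the failed unit is the .mount unit generated from the fstab entry for the LVM logical volume.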

It turns out that when the host rebooted, either Windows or the host’s BIOS reassigned the physical drive numbers, and VirtualBox did not detect or track the change. The raw-disk mappings for the guest now pointed at the wrong disks: the SSD and one of the HDDs were no longer the correct physical drives. The LVM volume group was therefore incomplete, auto-mounting the LVM logical volume failed, and that was the root cause of the boot failure within the guest.

Okay, so we’ve now identified the cause of boot failure. What about recovery? Is all the data corrupt now?

Because the guest failed to mount the LVM logical volume, it never got the chance to write to those disks, which is what would have caused corruption. The SSD and HDDs were never mounted by Linux, not even read-only, so their contents were left untouched during the boot failure. That is a good indicator that the data should be completely intact if the mapping can be rebuilt.

While looking into correcting the physical-to-virtual disk mapping, I found that the reshuffled mapping was potentially catastrophic. One of the newly assigned physical disks was the host’s boot drive, which VirtualBox explicitly warns never to mount in a guest: “Most importantly, do not attempt to boot the partition with the currently running host operating system in a guest. This will lead to severe data corruption” [VirtualBox Manual 9.9.1]. I quickly commented out the auto-mount for that LVM logical volume, powered off the virtual machine, and then looked into correcting the mapping from the host.
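Commenting out the auto-mount is a one-line edit to /etc/fstab. Here is a sketch on a scratch copy of the file; the volume group, logical volume, and mountpoint names are placeholders, not my actual names:

```shell
# Demo on a scratch file; on the real guest you would edit /etc/fstab itself.
# vg_seedbox/lv_storage and /mnt/storage are placeholder names.
printf '/dev/vg_seedbox/lv_storage /mnt/storage ext4 defaults 0 2\n' > fstab.demo

# Prefix the entry with '#' so the mount is skipped at boot:
sed -i 's|^/dev/vg_seedbox/lv_storage|#&|' fstab.demo
cat fstab.demo
```

With the line commented out, systemd no longer generates a .mount unit for it, so the guest can finish booting without the volume.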

In VirtualBox, mapping physical disks to virtual ones requires running the VBoxManage utility as root/administrator and calling internals that are only sparsely documented in section 9.9.1 of the advanced topics chapter of the VirtualBox manual (link). This is non-trivial for anyone just getting started with virtual machines and disk management, since there is a high risk of corruption, as the manual indicates. If you map your host’s physical boot disk into a guest and then try to boot off of it, you will almost always cause severe corruption; that is exactly what the manual is warning about.
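For reference, the workflow on a Windows host looks roughly like this; drive indexes and paths here are placeholders, and you should triple-check which PhysicalDrive number is which, since pointing at the wrong drive is precisely the corruption scenario above:

```
:: In an administrator Command Prompt, list physical drives to find the right index:
wmic diskdrive get Index,Model,Size

:: Create a raw-disk VMDK that maps an entire physical drive into the guest:
VBoxManage internalcommands createrawvmdk -filename C:\VMs\hdd1.vmdk -rawdisk \\.\PhysicalDrive2
```

The .vmdk file produced here is just a small descriptor that records which physical drive to use; the fix for my situation amounts to regenerating these descriptors with the correct drive numbers.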

So off I went. I opened a Command Prompt in administrator mode on the host and changed to the directory containing the virtual disk files. In the VirtualBox window, I removed the disks from VirtualBox’s Media Manager, then switched back to the Command Prompt and replaced the .vmdk files with new copies that referenced the corrected physical drive IDs. Finally, I re-added the virtual disk files in VirtualBox, attached them to the seedbox guest, and booted it up.
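The re-attaching step can also be done from the command line instead of the GUI. A sketch, assuming a controller named “SATA” and a placeholder port and path:

```
:: Attach the corrected raw VMDK to the guest's storage controller:
VBoxManage storageattach "seedbox" --storagectl "SATA" --port 1 --device 0 --type hdd --medium C:\VMs\hdd1.vmdk
```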

The guest booted up successfully. Since the failing drive configuration was still commented out, it was able to boot normally, with only the services relying on that mountpoint failing to start. The core system was intact.

I then uncommented the auto-mount line for the LVM logical volume that I had commented out earlier, re-mounted all auto-mounts via mount -a, and listed the directory inside the LVM logical volume. Success!
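Verifying the recovery from inside the guest comes down to checking that LVM sees both physical volumes again and that the mount comes back. A general sketch with placeholder names, not my exact session:

```
# Confirm both physical volumes are visible and the volume group is complete:
pvs
vgs
lvs

# Re-mount everything listed in fstab and look inside the recovered volume:
mount -a
ls /mnt/storage
```

If vgs reports the volume group with no missing physical volumes, the mapping is back to what LVM expects.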

I then rebooted the virtual machine and it came up all on its own in a normal boot, with no more emergency mode or boot failure. It acted as if it had never fallen ill.

If this helped you out, or if you simply enjoyed the story, I encourage you to follow me on social media or this blog. See you next time!