Lesson Learned – Test Your Backups!

I like to think that I’ve come a long way since I first started noodling around with Plex back in … wow – 2014?!

What was first an install on my old desktop PC with way too many external hard drives tethered to it via USB has since grown to span several separate servers for NAS, downloading stuff, and Plex itself. Pretty much everything has disk redundancy via either RAID or Unraid, and although only a small portion of my media library itself is backed up, everything else is backed up in triplicate: full images of the VMs themselves, local backups on the NAS, and cloud backups to Backblaze.

So then why did it take me nearly a day to restore Plex when I had a random glitch earlier this week???

I’m still not really sure what happened. I came home to find that one of my two UPSes had powered down, which took out all of my network gear, but the servers themselves kept running since they’re connected to both UPSes via redundant power supplies.

Once I got the network back up, I found that Plex wasn’t responding – likely because it had lost its direct connection to the NAS – so I figured I’d reboot that VM once everything else was back up, except it wouldn’t respond to a shutdown either. After waiting a while and cleanly shutting down the other VMs, I finally had to kill the power to get ESXi to reboot, and when it came back up, Plex wouldn’t start because one file – Preferences.xml – was suddenly empty!

Of course, as you might groan along with me, that’s the file where Plex stores all of your server settings – port forwarding, the scanning-rule tweaks I’ve made over the years, all of it.

I didn’t really want to redo everything, so I figured I’d just restore it from the backup. I use Duplicati for all of my backups, and having done a few restores before, I knew it was usually super simple: just pick what you want to restore and where to drop the files, and you’re all set.

…except that in my previous restores, those datasets probably consisted of thousands of files whereas the backup for my Plex folder runs upwards of a million files.

😯

And apparently this is problematic because when Duplicati builds its list of files available to restore, it naturally has to traverse the entire dataset. My first two or three attempts failed miserably because it would just spin at each of the top-level folders until eventually another scheduled backup job would try to start, lock the database, and kill everything in the process!

Once I thought to turn the other backup task off, it was still A LONG WAIT between each folder level, but after maybe an hour and a half just to browse to the file, the actual restore literally took maybe two minutes!

If Duplicati had a way for me to restore by path instead of using the GUI, it would’ve been a lot simpler.

Or if Duplicati stored the backup files on Backblaze in a way that I could browse remotely, instead of as encrypted archive files, that would’ve worked, too.

In hindsight, I think what I need to do is write a simple script to tar up Plex’s Metadata folder before backing it up so that it’s one file instead of 600,000, and that might win me a few favors with the Duplicati gods.
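Something along these lines is what I have in mind – just a rough sketch, and the paths are placeholders for wherever Plex’s data and a backup staging folder actually live on your system:

```python
#!/usr/bin/env python3
"""Bundle Plex's Metadata folder into a single tar file ahead of the nightly backup.

The paths below are just examples -- point them at your actual Plex data
directory and whatever staging folder your backup job already picks up.
"""
import tarfile
from pathlib import Path

METADATA_DIR = Path(
    "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Metadata"
)
ARCHIVE_PATH = Path("/backups/staging/plex-metadata.tar")


def bundle_metadata() -> None:
    # One big uncompressed tar means Duplicati sees a single file to index
    # instead of hundreds of thousands of tiny ones.
    ARCHIVE_PATH.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(ARCHIVE_PATH, "w") as tar:
        tar.add(METADATA_DIR, arcname="Metadata")


if __name__ == "__main__":
    bundle_metadata()
```

Run it from cron (or whatever scheduler you like) right before the backup job kicks off, and exclude the original Metadata folder from the backup itself so you’re not storing everything twice.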

As they say, we live and we learn…
