A Change of Backup…

September 23, 2019 3:51pm

This past weekend I finally made the decision to switch backup providers from CrashPlan to Backblaze.

I think.

It seems like I’ve been using CrashPlan forever at this point – at least 5 years now – and it’s honestly something that I just set up a long time ago and left alone … like you’re supposed to with any good backup! 😉

The problem – and it’s one that I’ve admittedly been ignoring for a while now – is that over the last couple of years CrashPlan’s price has crept up while its feature set has crept down, so I honestly haven’t been getting the value out of it that I once was oh so long ago…

I believe the cost was $5/month/computer when I first started using CrashPlan, and I used it for both my laptop as well as critical files on my home server (which was cool because they had a Linux client that was really easy to use!). Then a few years later, they unexpectedly dropped home support, which was going to double the price in the long term … though in their defense, they offered a 50% discount off home pricing for one year to ease in the transition.

So basically my pricing went from $10 -> $5 -> $20 per month over a few years’ time!

The bigger hit was that this summer they added a special exclusion for Plex files, which was a big part of what I backed up off my server. I didn’t try to send them my entire library of dozens of TB, mind you, but it seemed reasonable to send them 20 GB of config and metadata so that I could restore Plex easily if the server bit the big one.

In total, I had something like 400 GB backed up with CrashPlan – roughly 200 GB of personal photos and writing and everything else from my laptop, and another 200 GB of Plex config data and some music and other hard to replace archived stuff from my server.

So anyways…

It’s been eating at me for a while that I needed to make a change.

I’ve actually followed Backblaze for a long time because I love how open they are about how they store massive amounts of data. I guess I always just assumed that their usage-based plan was too expensive for my needs, I didn’t want to sign up for another $5/month-per-computer plan, and their unlimited plan doesn’t support Linux anyways.

The funny thing is, apparently when you’re already spending $20/month on backups, that’s enough to store about 4 TB of data using Backblaze’s B2 system!

I think part of the problem has been that whenever I looked at their pricing in the past, I always equated it to backing up my entire data collection – including what’s now 60+ TB of TV shows and movies for Plex – which in turn ends up being something like $300/month and is completely unreasonable for a simple backup strategy!

Yet after now having endured a couple of hard drive failures across my collection, I’m starting to realize that there are certainly subsets of my data that are easier to replace than others. And so instead of B2 being this out of reach backup strategy for all of my data, it suddenly became a new opportunity to go from 400 GB backed up with CrashPlan to nearly 4 TB backed up with Backblaze for about the same monthly cost.

😯
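For the curious, the back-of-the-envelope math here works out pretty simply – B2 ran about half a cent per gigabyte per month at the time I looked (a rate worth double-checking against their current pricing before relying on it):

```python
# Back-of-the-envelope B2 storage cost, assuming the ~$0.005/GB/month
# rate B2 charged at the time (check current pricing before relying on this).
B2_RATE_PER_GB = 0.005  # dollars per GB per month

def monthly_cost(gb):
    """Storage-only cost; download and transaction fees are extra."""
    return gb * B2_RATE_PER_GB

print(f" 400 GB: ${monthly_cost(400):.2f}/month")   # my old CrashPlan footprint
print(f"4000 GB: ${monthly_cost(4000):.2f}/month")  # what $20/month buys on B2
```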

Maybe I’ll do a separate post that’s a little more technical when I finally pull the plug … CrashPlan renews again on 10/10, so I’ve got a couple of weeks to test the waters to make sure I’m truly happy with Backblaze before I cancel one account and fully commit to the other. But so far, I’m pretty satisfied.

I found this free, open source software called Duplicati to manage the backups themselves, and it was super easy to install on both macOS and CentOS. Within about 36 hours’ time this past weekend, I had 220 GB from three separate machines backed up to B2, which according to their calculator will run me about $1.10/month, so that’s cool! 🙂

I still need to do some testing on restores to see how that works, but it seems fairly straightforward via Duplicati.

I think in all of my years of using CrashPlan, I had to do one restore and it was 100 GB of music when a drive failed in my server. Their client made it just about seamless, so here’s to hoping for a similar experience with the new guard as well…

So to any sysadmins who do this kind of stuff on a daily basis, this is going to seem way obvious, but for somebody who doesn’t and has been struggling with this literally for months … let’s just say I’m pretty happy to finally have figured this out!

Also, this post is mostly for documentation’s sake so that I have a place to look back to when I need to do it again sometime many moons into the future…

It’s hard to believe that it’s been over a year already since I migrated my Plex server off of my old desktop hardware over to a proper rackmount server. Or at least Plex itself migrated, while the bevy of hard drives that 50+ TB of media lives on still resides in that aged and ever-waning PC.

Anyways, last June when I made the big leap to server-grade hardware, I only had a single hard drive to run VMs from for the new machine. For simplicity’s sake, I set it up as a RAID 0, single disk array, with the understanding that I could “easily” add more disks a few months later and re-configure that array into a more resilient RAID 5.

In fact, according to Amazon I did buy two more drives to use for said purpose in September 2018.

And just yesterday I finally got them working!

You see, it was probably too easy for me to set up that initial RAID 0 array via the new server’s BIOS. At the time, it seemed simple enough to add more drives to the pool and then reconfigure the array itself.

But one thing I’ve learned somewhat painfully since I first set this server up is that everything is more picky than that. Versions have to line up with the hardware, and older versions lack features supported by newer versions, even while they’re all being supported by the companies in parallel. This isn’t really news to me, but it’s certainly something that I never had to scrutinize to this extent.

With my old desktop server…

  1. Connect new hard drive.
  2. Find it in the CentOS Disks GUI, quick format it, and mount it.

With my new server…

  1. Connect new hard drive.
  2. Try to add it to my RAID pool via the RAID controller, but you can’t.
  3. Try to add it via ESXi, but you can’t.
  4. Try to connect via Dell OpenManage, but I didn’t install the server-side software in ESXi right because Dell’s support page for this server only goes up to ESXi 6.0 even though I’m running 6.5 … until I finally found the right software in a support doc via Google.
  5. Try to connect via Dell OpenManage, but they only make a Windows client so I have to find a laptop to do that.
  6. Try to connect via Dell OpenManage, but the server doesn’t have a certificate and the login failure doesn’t mention that this is a big deal, so I just guessed until I saw a checkbox about ignoring it and finally it worked!
  7. Add new disks to RAID pool and reconfigure from RAID 0 to RAID 5 … and wait a very long time.
  8. Worry the entire time that I didn’t make backups of my VMs, because I never could figure out exactly how to do it.
  9. Try to expand virtual disk via ESXi now that the extra space is available, but it still doesn’t see it.
  10. Confirm via Dell OpenManage that the reconfigure is definitely done now and showing the extra space as available.
  11. Wait until 1:30am when nobody is using Plex and just reboot the whole thing, just in case.
  12. Try to expand virtual disk via ESXi, and now it sees it!
  13. Allocate additional space to new VM and reboot that VM, but it does nothing.
  14. Spend an hour Googling for instructions about how to allocate the new space inside of the guest OS until I finally found this random support post that ends up working not unlike magic!
  15. Verify that the new disk space is finally ready to use in the VM, and then debate whether it’s going to be enough or if I should’ve bought yet another disk just in case…

I mean, looking back logically it does make sense – first add the physical drives, then add them to the RAID pool, then rebuild the RAID array, then add new space to the Virtual Disk, then allocate the new space to a specific VM, then update the VM to recognize its new resources … maybe I was just hoping it would be slightly more seamless, even if only in parts! 😛
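For posterity, the in-guest magic from step 14 usually boils down to a handful of commands – grow the partition, then the LVM physical volume, then the logical volume, then the filesystem. This is only a sketch: the device names (/dev/sda, partition 2, a centos/root logical volume) and the XFS filesystem are assumptions about a typical CentOS 7 layout, not necessarily my exact setup.

```python
# Hypothetical sketch of the guest-side resize steps after ESXi has grown
# the virtual disk. Device names and filesystem type are assumptions about
# a typical CentOS 7 layout -- adjust for your own setup before running.
import subprocess

RESIZE_STEPS = [
    ["growpart", "/dev/sda", "2"],                        # grow partition 2 to fill the disk
    ["pvresize", "/dev/sda2"],                            # tell LVM the physical volume grew
    ["lvextend", "-l", "+100%FREE", "/dev/centos/root"],  # grow the logical volume
    ["xfs_growfs", "/"],                                  # grow the XFS filesystem online
]

def run_resize(dry_run=True):
    """Print (or actually run) each resize step in order."""
    for cmd in RESIZE_STEPS:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_resize()  # dry run by default: just prints the commands
```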

If anything, I guess it should be a tad easier the next time around, and now that I’ve gotten the bugs worked out of OpenManage, that alone is one less headache to worry about.

That said, I don’t want to rely on my work laptop for managing this server (and others down the road) indefinitely, so it also means I need to put together some sort of Windows box to sit in the corner and collect dust until it’s needed once in a blue moon…

Still, my Plex environment … minus the media itself … now lives on a cushy, new RAID 5 array that could sustain a single disk failure without missing a beat, plus I’ve got some extra cushion for downloading new stuff to boot.

Not too shabby for only ten months worth of work!

Cleaning Day, iPhone Edition

January 28, 2019 5:34pm

How many apps do you have on your phone right now?

Apparently I have over 130 apps, and honestly when I started going through my list, there were some that I hadn’t used in years. And some that don’t even work with the version of iOS I’m on anymore.

And also a lot that I do still use, but not nearly enough to justify where they sat in my seven pages of apps.

So I decided to do a little digital housekeeping!

I did this with a few goals in mind…

  • To reprioritize my social media apps to make them less distracting.
  • To reorganize my apps so the ones I use more are closer to the front.
  • To finally delete stuff that I either never use or I can’t use anymore.

Step 1 – Inventory My Current Apps
The first thing I did was simply type up a list of every app I had on my phone. I did mine in a spreadsheet and grouped them by page so I could better visualize where I was and where I wanted to be. It was a little tedious to type everything out by hand, but the benefit was that afterwards I had a list that I could easily copy & paste from to shuffle apps around as I designed my new layout.

Step 2 – DELETE THE CLUTTER!!!
Now this is a pet peeve of mine because there are a handful of games and apps that I’ve downloaded over the ages where the developers either didn’t keep up with newer versions of iOS or just flat out went out of business altogether, so now I’m stuck with all of these games that I paid money for that are absolute garbage.

This was the time to finally let ’em go, along with other apps that I’ll never use. Case in point – the other day I switched my auto insurance, so no need to keep the old company’s app anymore.

Step 3 – What do I use the most???
This is how I determined my dock apps. You only get four, so make ’em good!

I ended up only making one change here – previously I had Phone, Mail, Safari, and Twitter, so in an effort to curtail my social media time I swapped Twitter out for Todoist.

Step 4 – What do I use on a regular basis?
This is where I determined my home page, or pages, really, because I had more than one page of apps that fell into this category.

My criteria for these pages was first and foremost general purpose apps – think Calendar, Contacts, Camera, Weather, Calculator. I also put my music apps here because they’re my go-to in the car. And then WordPress, Notes, and Analytics from a writing perspective.

My overflow home page then got other research apps like Wikipedia and Dictionary, the app and music stores, Settings, the My Disney Experience app for visiting the theme parks along with its companion shopping app, and lastly, a couple of apps I’ve been using for meditation.

Step 5 – Where does everything else fall?!
To be honest, by the time I had the first two pages done, I knew what I wanted to do with the rest anyways. First I created pages for each of the major categories of apps that I had, then I sorted them into those categories (a lot of my apps were already sorted this way, so this was about 50/50 clean-up vs reorganization), and then lastly I sorted the pages themselves based again on what types of apps I use the most.

I ended up with something like this…

  • Home Page
  • Overflow Home Page
  • Social Media
  • Kids Stuff
  • Banks & Restaurants
  • Quick Games
  • Long Games
  • Miscellaneous

I honestly don’t play games much on my phone, so those went at the back, second only to those seldom used apps that I didn’t want to delete like FlightAware for tracking incoming flights or Speedtest for testing Internet connections or my web host’s app that I use to remotely reboot this server if I see that it’s having problems when I’m away from a computer.

I made a new page of just apps that I have for my kids to use … in hopes that maybe they’ll stay out of the rest of my stuff! 😉

And then the others are pretty self explanatory.

BEFORE

AFTER

One sort-of tip that I can offer is that it is technically possible to move multiple apps from one page to another in a single gesture … the reason I hesitate to recommend it is that, at least for me, it was really touchy and sometimes more of a pain than it was worth. I’d find myself with a few apps selected, then drop them all trying to get the next one, or there would be a few that I couldn’t select at all. Still, you can give it a try and see how it works for you…

Cleaning up my phone has been something that I’ve been putting off for a long time, but honestly it took maybe an hour once I finally sat down and just did it. I don’t recommend skipping the list and just trying to reorganize right on the phone, namely because if you don’t plan out precisely how many pages you need, it’s easy to find yourself doing a lot of extra shuffling when you realize that you’re a page short in the very middle of your layout!

Here’s to hoping that this will make my phone a little less distracting and once again more of a useful tool in my effort to try and actually get things done this year!!!

oh poopies…

December 20, 2018 1:13am

So apparently I had a hard drive crash in my Plex server while we were on vacation last week.

Possibly two … still looking into that!

What’s really weird is that throughout the whole week, whenever I’d connect remotely (we typically tether Sara’s iPad to the TV so the kids can watch their shows on the road), the drive that holds most of the kids’ movies & TV shows was reporting as missing … however when I got home and finally had a chance to troubleshoot, it turned out that drive was fine and it was a completely different one that was clicking away like a hard drive on its last legs!

…one that oddly enough, we were able to connect to on vacation…

Now I’ve got that one unplugged until I can see if I’m able to copy the contents over to a new drive … need to address that sooner rather than later because it’s the drive where all of our new TV shows download to before they get copied to wherever they actually live.

As for the other impacted drive … I don’t know what the deal is there because it mounts ok, but the OS just doesn’t recognize it. Really hoping it’s something I can easily repair so that I don’t have to hassle with re-downloading everything…

I mean, the first one doesn’t really surprise me because it was one of the oldest drives in the server (almost 3 years old) and it honestly gets a lot more use than all of the others. If anything, I can’t help but think that if I had already been able to upgrade to my shiny, new rackmount NAS, the system would’ve recovered automatically and this would’ve barely counted as a blip on the radar!

That said, I’m happy that at least Plex itself is up again – it had been down for two days because the NAS part didn’t want to reboot thanks to the dying drive – so at least the kids can go back to watching their usual shows without having to “borrow” Grandma & Grandpa’s Netflix account.

It’s kind of amazing how much you end up relying on this stuff without realizing it. I definitely need to start exploring some backup options next year as this 50 TB media library continues to grow. 😛

I get a little antsy about my home Internet speed when I spend any amount of time planning out home server stuff, and considering my little purchase of 50 TB of hard drives the other day…

In a way, it seems only natural – my next steps are to migrate the storage part of my media server into a rackmount NAS to go alongside the other rackmount server I acquired earlier this year that now houses the rest of Plex and the tools that I use to download content.

I’ve already picked out some new Ubiquiti rackmount network gear that I want to replace the router from my ISP with…

…and today I was even looking into the option of running 10 Gbps connections between my servers because, well, the only thing cooler than moving files around at 125 MB/s is moving files around at 1.25 GB/s!!!
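For anyone double-checking that math, it’s just the bits-to-bytes conversion – divide the line rate by 8, and expect a little less in practice thanks to protocol overhead:

```python
# Line rate (megabits/s) to file-transfer speed (megabytes/s): divide by 8.
# Real-world throughput will come in a bit lower due to protocol overhead.
def mbps_to_mb_per_s(mbps):
    return mbps / 8

for speed in (200, 1000, 10000):
    print(f"{speed:>5} Mbps ~ {mbps_to_mb_per_s(speed):,.0f} MB/s")
```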

So yeah, when we’re talking about internal network speeds in excess of one gigabit, it’s hard not to glance at the weak link in the chain that is my Internet connection and wonder, “Why can’t you keep up, little guy?!”

And don’t get me wrong – I totally get that only 25% of the country currently even has access to fiber Internet and a lot of people are stuck with cable or even DSL … but that doesn’t make it any easier to swallow that the line currently running into my garage could be chugging along at a crisp and refreshing 1 Gbps, but instead here I am scraping by with a mere 200 Mbps like a chump out of the stone age…

Truth be told, I just moved up from 150 Mbps to 200 Mbps this fall, but before that I’d been sitting at 150 Mbps for almost 4 years. In fact, I upgraded just before Verizon sold FiOS in Florida off to Frontier because I was afraid they’d make it a lot harder to upgrade in the future…

Foreshadowing!

To be honest, I have kind of a love-hate relationship with Frontier because the FiOS network itself is wonderful … it’s just that Frontier isn’t a very smart company to be running it. Their customer service is typically awful, their pricing isn’t competitive, and lest we forget, this is the fiber company previously run by a CEO who thought that gigabit was a fad and consumers don’t really need it.

Sure, maybe not now, but what kind of a technology company doesn’t anticipate their customers’ needs well into the future?!

Anyways, I’ve been going back and forth with Frontier on various social media channels about how it isn’t fair that they only offer promotional pricing to new customers. They’ve actually argued back that it’s an industry standard and everyone does it … as if that makes it ok … and maybe it would, if only they didn’t charge half again as much for existing customers once those crazy promotions run out!

Seriously – I currently pay $75/month for a plan that a new subscriber can get for $50/month.

…and they can’t find any way to incentivize me sticking around for seven years now?!

I think what bugs me the most is the disparity for upgrading to the tiers above me because $10-20/month extra would be understandable, but that’s not what Frontier’s fee structure looks like…

  • 200 Mbps – $75/month
  • 300 Mbps – $125/month
  • 500 Mbps – $175/month
  • 1 Gbps – $225/month

Another fifty bucks for each leap is excessive, particularly when the likes of Verizon, AT&T, and Comcast are all selling gigabit access in their markets for around $100.
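Running the per-megabit math on those tiers makes the pricing feel even more arbitrary:

```python
# Dollars per Mbps across Frontier's tiers (prices as quoted above).
tiers = {200: 75, 300: 125, 500: 175, 1000: 225}

for mbps, price in sorted(tiers.items()):
    print(f"{mbps:>4} Mbps: ${price}/mo = ${price / mbps:.3f} per Mbps")
```

Funny enough, gigabit is actually the cheapest tier per megabit – it’s those $50 absolute jumps in the middle that sting.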

Even Spectrum, our local cable alternative, offers gigabit for $100, although the argument there is that they don’t support symmetrical speeds yet, so the upload is still way lower than the downstream … at least for now.

I told the account manager I was emailing with earlier today that I’d be happy to pay an extra twenty bucks to go up to 500 Mbps, or even $125/month for it … hell, I’d even do $150/month for gigabit, despite that being almost double what Verizon charges for the same service!

But when did we get to the point where $50 upgrades were the norm? Unless Frontier simply doesn’t want to sell these highest tiers and figures that if people want them badly enough, they’ll pay through the nose for them.

I suppose this way they can technically claim to offer gigabit service, just not at a price where it will ever get widely adopted, that’s for sure…

It just makes me wish that Verizon had never sold us off, or that Frontier would hurry up and go bankrupt already so that someone else could swoop in and buy all of the assets from them. It’s sad that broadband rollout hasn’t been far more aggressive in the United States, because it’s not like these companies don’t have the money to do it, and we’ve proven the value of high-speed Internet access in our daily lives a million times over.

I really don’t like this direction we’re heading where Verizon is convinced that wireless is what we need for broadband – mostly because of how they love to charge by the GB for it – and right now they’ve got their stooge heading the FCC that’s dedicated to gutting any and all regulations holding them back from maximizing Internet profits for shareholder benefit.

Amidst all of my frustrations this evening, I actually found myself pondering if it would be worthwhile to try load balancing between two ISPs … for the same $175/month that Frontier wants for 500 Mbps, I could keep the 200 Mbps line that I have with them and buy a second, gigabit connection from Spectrum to try them out as an ISP and enjoy the benefits of that extreme download speed!

The thing is, as much as Frontier insists that I’m a valued customer – even though they won’t offer me a dime to stick around, despite not having to pay the acquisition cost of winning me back as a new subscriber – you would think they’d be quick to stop an existing customer from testing the waters with the competition. An extra $75/month would still be far better than negative $75/month for a lost customer…

…but Frontier doesn’t think. That’s the problem!

I know that I’ll get gigabit Internet here at home eventually … hell, it has me wondering if we’ll see 10 Gbps home connections in my lifetime! But much like Veruca Salt, I want it now! 😉

Archiving Fun

July 3, 2018 10:00pm

Now that I’ve more or less got my server upgrades under control, the last couple of weeks I’ve been really enjoying making use of that new computing power and filling up my array of hard drives with all sorts of neat, random things that I’ve stumbled across online.

Stuff like PDFs of Interaction magazine – published by Sierra Online at the height of their rule of the adventure gaming genre, I used to read this thing from cover to cover and ordered a lot of my favorite games from the 3-for-1 sales that they’d feature.

Or old videos of Welcome Freshmen – this weird, sketch comedy about high school that Nickelodeon aired when I was like 12 years old that helped prepare me for all of the girl angst and bully encounters that my own high school experience would come to offer!

Or even very old videos of the very first season of Sesame Street from 1969 – did you know that not only did Oscar the Grouch start out being orange, but that the Muppet characters actually played a fairly small role in the initial episodes of the show???

The last couple of years I’ve found myself becoming more cognizant of the temporary nature of the Internet – simply put, knowing that a site or article or video you enjoyed six months ago could very well not be there if you wanted to go back and check it out again today. And that can be for any number of reasons…

  • the website went out of business
  • the person maintaining it passed away
  • the host got a DMCA notice and took it down
  • the creator changed their mind and took it down themselves

I’ve lost access to some great works over the years, and others I still have only because I had the foresight to save a copy for myself, so now that I’ve got servers sitting in my closet with disk space to spare, the thought has occurred to me that maybe it’s worth personally archiving some of my own favorite content so that it’s still around 20 years from now regardless of whatever happens to the originals on the Internet itself.

I’ve always really liked what the Internet Archive does, particularly with their Wayback Machine, just because it’s super cool to be able to look back at websites from when the Internet was still in its infancy … even sites that I put together myself! Right now they’re storing something like 30 petabytes of data covering everything from websites to books, TV shows, YouTube channels, software, photos – you name it!

And while I’ve got a long ways to go before hitting my first petabyte of storage, it’s also neat that the same tools that they use to archive things are available to me to run on a much smaller scale.
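In fact, for archiving a single site on a small scale, even plain old wget gets you most of the way there. The Archive’s own crawlers are heavier-duty tooling, so consider this just a sketch – the destination folder is a made-up example:

```python
# Small-scale site archiving sketch: build a wget command that produces a
# browsable local mirror of a site. (The Internet Archive's crawls use
# heavier tooling, but the idea is the same.)
import subprocess

def mirror_command(url, dest="archive"):
    return [
        "wget",
        "--mirror",            # recursive download with timestamping
        "--convert-links",     # rewrite links so the copy browses locally
        "--page-requisites",   # fetch CSS/images/JS needed to render pages
        "--adjust-extension",  # save HTML pages with an .html extension
        "--directory-prefix", dest,
        url,
    ]

if __name__ == "__main__":
    cmd = mirror_command("https://example.com")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run the mirror
```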

I remember always having sort of a love-hate relationship with my DVR once I finally got one because although I loved the idea of recording my own shows digitally and having them accessible whenever, I hated the limits of the small hard drive that they included and having to pick and choose what to keep and what to delete … because what if I do want to watch episode #68 of The Simpsons at 3am without fishing through a box of DVDs???

Mr. Plow, BTW! 😉

The On-Demand channels of digital cable were cool, but as content began to grow, channels themselves would have to pick and choose what to offer – here’s season 2 of Curb Your Enthusiasm, but if you want season 1, you’ll have to buy the DVDs…

And even streaming services like Netflix and Amazon Prime Video and iTunes today have their limits because they’re constantly negotiating licenses with all of the studios – there are entire blogs dedicated to what’s coming and going on Netflix in a given month.

Although I’ve never really hit the level of a hoarder in real life – though I do hate to throw away things that I think I might be nostalgic for later – I’m very much a digital hoarder, because hard drives are cheap, it’s a fun way to look back at the past, and it’s surprisingly convenient when entire Christmas tree boxes’ worth of DVDs and CDs sit on a few hard drives in my servers that can be accessed from any TV or device that I own, 24 hours a day.

I don’t need to wait for FX to run another The Simpsons marathon or wonder if my cable provider offers access to their On Demand thingy because I’ve got 638 episodes sitting on 340 GB of space in a server that *I* control to watch whenever I want.

And of course, that’s the crux of digital hoarding – just because I could doesn’t mean that I ever will, but still…

Ultimately it’s hard to tell what will be “of value” decades into the future – sure, people still probably won’t get much out of the random pictures that we take of our lunches, but it’s one of those things that we don’t really know until it’s too late unless we think ahead and preserve copies of our history just in case. Right now historians are poring through old books and VHS tapes for content from before the Internet ever existed – content that will essentially be lost in another twenty years if someone doesn’t take the time to digitize and archive that kind of stuff today.

The other day I stumbled upon this old post from the Internet Archive of a propaganda video created by the US government back in 1943, when they were rounding up Japanese Americans to send them to internment camps after the Japanese had bombed Pearl Harbor. It’s surreal to watch simply because of how positively the narrator talks about this horrific crime that our great grandparents committed in the name of national security, and it’s all the more relevant today as we see escalations around public perception and immigration. And yet, with that video predating even VHS tapes, if a historian hadn’t taken the time to archive it, it would’ve just been lost in the annals of time.

I’m not saying that old podcasts and sitcoms will have the same relevancy as historical films, but there are many facets to what holds historical value for a society.

I’ll be sure to post more as I collect more things and evolve my thoughts on this topic, as over time I think they might grow into a more formal effort, whether it’s working with the IA or who knows! 😉

Virtualization Fun

June 23, 2018 1:15am

So after a handful of learning curves over the last couple of weeks, my Plex server officially has a new home!

Although my hope for this summer was to be able to afford the new Synology NAS that I’ve been eyeballing for a couple of years now, I recently found myself in a position to instead upgrade pretty much everything else at a price far more affordable than that NAS, so here we are. 😉

Almost two weeks ago to this day, I discovered /r/homelabsales – a swell subreddit where fellow computer geeks are looking to offload old computers … particularly server-grade hardware that they themselves have acquired on the cheap to play around and learn on. Surprisingly enough, the same night I found the subreddit, I also found someone here in Tampa looking to get rid of a nice, little rackmount server – a Dell R10 with dual CPUs and a small amount of RAM that was still 6x what I was running in my old server!

The cost of $140 seemed pretty good at a glance, so the next day I met up with the guy and drove home with a new-to-me server whose box filled almost the entire trunk of my car… 😯

Since last week, I’ve given myself a bit of a crash course in virtualization – I’ve used plenty of VMs over the years, but I’ve never administered one, so I grabbed the free version of VMware’s ESXi and started tinkering with it. I definitely made a few mistakes along the way, mostly with regards to oversubscribing resources, but I think that’s mostly all behind me, and as I type this now, I’ve migrated the Plex application itself over to its own new VM on my new server and I’m working on moving the various download tools that I use to get my media into their own VM as well.

The plan is basically to turn my old server into a de facto NAS – because its only role going forward will be to house hard drives – which will hopefully help to extend its life a bit longer by offloading all of the downloading and transcoding onto the newer and more robust machine, at least until I’m able to pickup that fancy NAS and retire my old desktop hardware turned home server altogether.

It’s crazy to see how much that thing has grown in only a couple of years! When I first started using Plex back in the fall of 2014, I think I had about 1.5 TB of media almost immediately. Six months later I was up to 20 TB, though things admittedly slowed down a bit from there … at least temporarily. Now 3.5 years later, that storage array is up to about 42 TB across 8 disks – two of which are external USB drives because I physically ran out of SATA ports in the box, and the last time I messed with adding a new drive on an expansion card, it wiped out a 4 TB disk without a second glance, so I’m rightfully a little nervous to touch anything else inside until I’ve started migrating data to a better solution!!!

But really, what I’ve got now has been serving me great – the few TB I still have free should last me until I’m ready to make that move and at this point there really isn’t that much more for me to add … or at least not stuff that can’t wait until space isn’t an issue again, anyways.

As for the new server, it’s admittedly pretty neat to watch 16 cores handle more Plex transcoding than I have kids and friends combined right now! The other day I did a test run and started streams on every device I could find in the house – three TVs, my phone, Christopher’s iPad, and my PC – and even with a couple of them transcoding, there was still plenty of overhead to spare, so that makes me happy. I’ve actually been able to use some of the new horsepower to convert 4k videos into encodings that my TV can actually handle, so it’s been neat actually getting to watch some 4k content for a change, too!

If anything, it gives me something new to play with while I save my pennies for the next upgrade.

…and figure out where I’m going to fit a server rack in my bedroom closet…

Machine Learning for a Better Search

April 30, 2018 9:56pm
Tagged with:

I wanted to expand more on the comment I made earlier on my micro-blog about how to build a better search function because the more that I think about it, the more I believe that this addresses one of the Internet’s biggest problems right now.

We went from limited information before the digital age to endless information a few decades in, but now what we really need to focus on is putting the right information in front of people.

Or, as my micro example cited – it should be easier to find the source of a topic than it is to find commentary about that topic.

And as if grading your sources wasn’t difficult enough, I’m going to throw one more curveball into the mix – you can’t blacklist an article based on its publisher, with my thought process here being simply that sure, 95% of what places like Fox News and Breitbart post is absolute garbage, but…

  1. We want everyone to use and rely on this new search method and people aren’t as likely to jump onboard if their favorite sources, damned as they may be, are automatically excluded from the mix.
  2. But more importantly, even if 95% of what someone writes is pure drivel, we want to encourage that remaining 5% to rise above the rest because that’s how you change opinions.

Now most of this is well beyond my level of expertise, but I know that there are methods in use today to determine “the quality” of a body of text based on sentence structure, vocabulary, etc… The question is, how can we expand on that logic to categorize stories based both on quality and on what they bring to the table? Because hey, there’s a lot of opinion on the Internet and I certainly don’t want to discount that – I’m just saying that when somebody searches for a topic, they should be presented with facts first and editorial second.
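Since I’m daydreaming anyway, here’s a toy sketch in Python of the kind of surface-level signals those text-quality methods look at. To be clear, the features and weights below are completely made up for illustration – this isn’t any real ranking algorithm, just the general shape of one:

```python
import re

def quality_features(text):
    """Compute a few crude surface-level signals about a body of text."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        # Longer sentences tend to correlate (loosely!) with reporting
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Type/token ratio: how varied the vocabulary is
        "vocab_richness": len(set(words)) / max(len(words), 1),
        # Shouting per sentence – a cheap clickbait smell
        "exclamations_per_sentence": text.count("!") / max(len(sentences), 1),
    }

def crude_quality_score(text):
    """Toy score: reward longer sentences and richer vocabulary, punish shouting.
    The weights here are arbitrary placeholders, not tuned against anything."""
    f = quality_features(text)
    return (f["avg_sentence_len"] * f["vocab_richness"]
            - 5 * f["exclamations_per_sentence"])
```

A real system would obviously feed hundreds of features like these into a trained model rather than hand-picked weights, but it shows the basic idea of scoring prose before you ever get to ranking it.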

It gets even trickier when you don’t have a fairly clean example like the one I used – even with regard to the White House Correspondents’ Dinner, there were multiple videos that contained the full speeches from the dinner … some were censored, some were from different outlets … but what about when it’s not even that cut and dried?

A video of President Trump saying XYZ would be the most accurate source, but if instead you have news reports sharing what it was that he said – possibly some with more or less context or fact correction in their articles – then it becomes very subjective to decide which one did the best job of reporting XYZ and therefore deserves to be at the top of the search results.

I kind of have a love/hate relationship with Google these days because I know that they’re trying to filter through the literally billions of pages on the Internet, and they do say that they look at things like user experience and reblogging to help rank their results, but at the same time I still see those hideous, clickbait ads from Taboola and Outbrain on some of the biggest websites seemingly without penalty.

How does a search engine remain independent while trying to sort relevancy as well as fact from fiction, alongside people constantly working to game the system to get their garbage to float to the top to make the ad bucks???

Maybe it’s time to learn a thing or two about machine learning and get to work on this… 😉

Whatever happened to RSS readers???

April 23, 2018 9:19pm
Tagged with:

I guess they just went away with the rise of social media and apps and notifications, though for what it’s worth I always found that a bit silly because I don’t want a dedicated app on my phone for every single website that I visit!

…not to mention, what about the ones that don’t have apps … like mine? 😯

In continuing with my hiatus from social media, this has been somewhat of a challenge for me because there are definitely sites that I still want to keep up with, but I don’t necessarily want all of the chatter that comes with following them on social media, and not for nothing, but algorithmic sorting makes it harder and harder to see stuff that I actually want to see, anyways!

So I stumbled back across this feature built into WordPress.com for subscribing to blogs. It was originally designed specifically for blogs hosted on WP.com, but was eventually extended to all WordPress blogs via Jetpack, and now it looks like you can follow just about any site with an RSS feed because I’ve set up follows with blogs on Blogger and Typepad, too!

It’ll be interesting to see how well it scales once I add a couple dozen more sites so I can include news outlets and whatnot in addition to my writer friends and folks I’ve come to admire online, but for now it’s honestly just nice to get a list of posts in the order that they were actually published as opposed to the order in which an algorithm thinks I want to read them … with plenty of targeted ads interspersed, no less!

Oops – no HTTP/2 today…

April 23, 2018 4:08am
Tagged with:

Note to Self: DON’T MAKE SERVER CHANGES WITHOUT WRITING DOWN WHAT YOU’RE CHANGING FIRST!!!!!

So … about 11 hours ago, I thought that I’d try to upgrade my web server to use HTTP/2.

It sounded like a great idea after reading this article from Yoast, so I spun up EasyApache and found the mod_http2 option. It mentioned that I needed to switch from one MPM to another, but I didn’t really think much of it.

To make matters worse, I also took the opportunity to uncheck a few random Apache and PHP modules that I didn’t think I needed.

As soon as I restarted Apache, sites started looking hosed.

Some wouldn’t even render their CSS, others were missing random images. But I didn’t know enough about HTTP/2 yet to realize whether I had actually screwed something up or if I just needed to make some modifications to WordPress to get everything working correctly.

At one point I thought that maybe all I needed was this HTTP/2 Server Push plugin, as I started to understand that HTTP/2 handles requests a lot faster, so was it possible that the browser was just getting the CSS file and other images too late and didn’t know what to do with them?

No, not really.

I also dug deep into caching issues, which is always a mess because I run Varnish and some of my sites use W3 Total Cache, though it’s currently disabled on my multisite install due to weird config issues. I also cleared my own browser cache and tried other browsers, but no luck.

Eventually I started to dig into the whole “some images loading but others weren’t” thread, and even more peculiar – I run three WordPress installs on this server … two multisites and one standalone, and only my big multisite install had issues!

This got me thinking back to some of the permissions issues I’ve had with Apache and PHP while trying to get APC working (quick summary – APC is supposed to be wicked fast, but won’t run under the SuPHP handler, only DSO … which handles permissions for Apache processes differently than SuPHP does). What was weird was that images I had uploaded recently were missing, but the older images were fine … and note that all of the files were still present on the file system itself.

I gradually conceded that I needed to give up on HTTP/2 for now and roll back to what I had before, though this was a giant pain because I’d run EasyApache so many times that something got corrupted in the config, and I ended up making the biggest changes using yum via SSH.

I moved back from mod_mpm_event and mod_http2 to mod_mpm_prefork, though that alone didn’t seem to make a difference.

Then on a whim I reinstalled mod_ruid2 because of its helpful one-line description – “Run all httpd process under user’s access right.”

AND BAM – MY SITES ALL STARTED RELOADING PERFECTLY AGAIN LIKE MAGIC!!!

Well, almost like magic. I still had a lot of plugins to reactivate and other troubleshooting steps that I’d taken to reverse, but now … as far as I can tell … my WordPress network is back to the way it was 11 hours ago before I decided to try and set up HTTP/2 “on a whim!”
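For future-me’s benefit, the working combination – as best I can reconstruct it, since I didn’t write anything down – boils down to something like this in Apache’s config. The module paths and the account name here are placeholders for illustration, not my actual setup:

```apache
# Back on the prefork MPM instead of the event MPM that mod_http2 required
LoadModule mpm_prefork_module modules/mod_mpm_prefork.so

# The actual fix: mod_ruid2 runs each site's httpd processes under that
# user's own UID/GID, so recently-uploaded files stay readable again
LoadModule ruid2_module modules/mod_ruid2.so

<IfModule ruid2_module>
    RMode config
    RUidGid exampleuser examplegroup   # placeholder account, not my real one
</IfModule>
```

In practice cPanel/EasyApache manages most of this for you, which is exactly why I should have written down what it was managing before I started unchecking boxes.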

Clearly I need to do a lot more research into it, and also probably spin up a test site or something, before I start monkeying with that hassle all over again. 😛

© 1999 - 2019 Comedic-Genius Media, All Rights Reserved.