#ServerProgress

December 20, 2016 8:56pm

Update: Got the scottsevener.com network of sites moved back over to its new home, everything is resolving correctly (and not in 48 seconds per page load), and it’s being served through Varnish + Apache … woohoo!

Mind you, I’m not entirely sure that it’s configured correctly because speeds aren’t tons faster, but we’ll work on configuration tweaking another day … I’ve got so much catch-up writing to do now… 😛
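For what it’s worth, when I do get around to that tweaking, the first sanity check is just confirming that requests are actually getting cached – something roughly like this, assuming Varnish is leaving its default headers on (the grep patterns are nothing fancy):

```bash
# Look for Varnish's fingerprints on a response – an Age header greater than 0
# and an X-Varnish header with two transaction IDs generally indicate a cache hit
curl -sI http://scottsevener.com/ | grep -iE 'x-varnish|^age|via|x-cache'

# And on the server itself, watch the overall hit/miss counters climb
varnishstat -1 | grep -E 'cache_hit|cache_miss'
```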

Troubleshooting page load speed, part 45…

December 20, 2016 3:31pm

For the record, I’m painfully aware that the page load times on several of my sites have been, well, unbearable as of late!

Admittedly it’s a problem that I’ve been stalling on for some time now, despite the regular high load emails that I get from my server. A few days ago I got one saying it was as bad as a 5-minute load average of 68.94 … and this is on a VPS with three CPUs and 5 GB of RAM that honestly sees pretty light traffic right now, unfortunately…

I had been hoping that most of it could be chalked up to an OS issue, because I recently discovered that cPanel had stopped updating on me after reaching a version threshold where the latest releases would no longer run on 32-bit operating systems – which I was kind of surprised to learn I was still running – but then again, this VPS was set up back in 2012, so I suppose 32-bit could’ve still been the default four years ago.

The trouble is, there’s really no clean way to upgrade my server from 32- to 64-bit while leaving everything intact, so it requires spinning up a new machine and migrating everything over to the newer OS.

Plus, the way I migrated to the VPS four years ago from my plain, old shared hosting account of maybe eight years was with cPanel’s built-in account transfer features, which, although it made things incredibly easy (plus my host did most of the work!), means lord only knows how much random garbage has accumulated in all of those files over 8 + 4 years of shared history!

So I had planned on making the migration sort of a clean-up effort along the way and only copy over the guts of each WordPress install, leaving behind whatever caches and other nonsense have accumulated over the years.

And then terrible performance got even worse!!!

When it got to the point where it would literally take upwards of a minute to move from one page on my blog to another, and the admin pages would randomly get disconnected because they couldn’t touch base with the server when it was super overloaded, I knew that it was time to finally tackle this pig. So within a few hours’ time, I created a second VPS with my awesome web host and gradually let it go through all of the OS and app updates while I staged just one install – my multisite that contains my blog, Thing-a-Day, and about half a dozen other sites – and everything seemed to be going fine…

…until I switched my domain over to the new server…

…upon which usage started blowing up like crazy, again despite little traffic, and even though I started this new VPS a bit smaller than my main server (figuring I could upgrade once I’m ready to stop paying for the old one), it quickly became unusable just like the old machine had been.

From here I started doing some digging into WordPress itself because no longer could I point fingers at the 32-bit OS. I downloaded a couple of plugins – P3 Profiler and Query Monitor – and with the latter’s help, that’s when I noticed that apparently I had a plugin that was just GOING NUTS against MySQL day and night:

To walk you through this fun graph, the orange is your normal SELECT queries that happen when anyone hits a page and WordPress queries its database to build it; the purple, on the other hand, is somehow all INSERT queries, which should really only ever happen when I’m publishing a new post, with a few exceptions.

And those two big valleys in the purple that you see around the 18th and then again between the 19th and the 20th? The first is when I had temporarily pointed my domain over to the new server; the second is keeping the domain back on the old server, but turning off the plugin in question … which apparently solves just about everything!
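If you want to watch the same flood at the database level without a plugin, this is the sort of quick-and-dirty check I mean – just a sketch, with the credentials obviously being whatever yours are:

```bash
# Print MySQL's running INSERT vs. SELECT counters every 5 seconds, relative to the
# previous sample – a plugin hammering the database shows up as a huge Com_insert number
mysqladmin -u root -p -r -i 5 extended-status | grep -E 'Com_insert |Com_select '
```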

By the way, the last little purple sliver is me turning the plugin back on so that I can capture some logs to send over to the plugin’s developer to help him look for a fix…

because the thing is, I actually really like this plugin – it’s called WP Word Count and I’ve used it on just about all of my sites for years and years to summarize how much writing I’ve put into my projects. I love it, and if I can spare the time next year, I want to make use of its work to pull together a running total of word counts for all of my work combined so that I can ultimately put together a fun, little dashboard to help motivate me to keep putting words on the screen!

Luckily, after finding another multisite user with the same issue, I left a quick comment expressing the same and got a reply from the plugin’s developer later that evening, so it’s awesome that they’re actively interested in patching this bug, because I’ve evaluated a lot of other options and honestly never really found one that I liked better than theirs.

In the meantime it’ll have to stay off, though, as I continue with my fun server migration. During this whole effort, I’m also trying to really home in on The Perfect VPS WordPress Configuration, so I’m doing other things like tinkering with Varnish and considering adding Nginx in front of Apache, and then eventually I also want to fine-tune W3 Total Cache so that I have one reliable set of caching / compression / minifying rules to use for all of the different sites that I publish … because I figure if I’ve seriously been doing this publishing-on-the-web thing for over fifteen years now, my pages should be some of the fastest loading around!

Stay tuned for more as I struggle through more issues to come! Now if I can only get this stupid thing to post… 😛

I’m really frustrated with Verizon right now, which is tough because I’m absolutely a huge fan of my FiOS Internet service.

We’ve been customers since 2012 and without a doubt they provide the best Internet service available in the Tampa Bay area. I’ve done the research, I’ve priced out the competition, but between their pricing and the symmetrical download & upload speeds that are pretty much unheard of elsewhere, Verizon FiOS is the best.

So why have I spent the last couple of weeks feeling like an inferior customer compared to one that they could have sometime in the future???

I’ve talked a lot about upgrading my Internet speed lately – right now I’m at 75 Mbps, but I’ve really been eyeing their 150 Mbps package … it’s just that until recently, it was a bit out of my reach at an extra $50/month. So needless to say, I was really excited when I was browsing my upgrade options one day and saw that they had a new promotion where I could not only go from 75 to 150 Mbps for only an extra $20/month, but they’d also throw in the $200 router upgrade for free!!!

It sounded too good to be true, and apparently it was, because a couple of weeks ago when I was finally ready to pull the trigger, the 150 Mbps tier was mysteriously nowhere to be found…

[Screenshot from 2015-10-23: my upgrade options, with a hole where the 150 Mbps tier used to be]

My first instinct was charitable enough – there must just be something wrong with Verizon’s website – so I got on the phone and called to place the order manually instead, but the rep who answered my call saw the same thing and was pretty clueless as to why there was a hole in my tiers where the missing 150 Mbps option used to be! It was frustrating to hear her shrug it off without even giving me an option to escalate the issue for someone else to take another look.

It just wasn’t there, and she was ready to move on to her next call, but that’s not even where the story takes a dark turn.

So I hung up and instead tried reaching out via Twitter, where I got a slightly different, but equally misleading explanation…

This time they told me it was a “technical limitation” and that the tier must simply be “filled up,” so it was no longer available. Here I started to call bullshit because things really weren’t adding up … namely, they had the capacity to upgrade me to 4x or 6.5x my 75 Mbps speed, but not to only 2x my speed! 

And granted, I’m not a fiber technician, but I know a little about how math works – I even gave them the benefit of the doubt here and asked if it was really a technical limitation or if Verizon was artificially limiting availability of certain tiers to encourage higher sales, but from there the tech just doubled down on the claim that that speed is popular, so it fills up and isn’t available anymore.

That didn’t make any sense, but in between waiting for responses I did a little more research and found what I thought was the missing key that would finally make somebody say, “Crap – that’s not right! We need to look into that!!!”

Opening up a separate browser and going to getfios.com, I was able to bring up a brand new order – even at this same address – for a new bundle including 150 Mbps Internet service…

[Screenshot from 2015-10-23: a brand new getfios.com order at my same address, offering 150 Mbps service]

Huh???

So clearly there must be something wrong with their ordering system if a new order will offer me that tier, but when logged into my Verizon account it was nowhere to be found!

Well, after waiting a couple of days for a response from the social media team that never came, I decided to send an email to customer service to see what answer they’d be able to come up with for my issue. And at first it seemed promising because I was told that they needed to research it more before they could respond, but eventually they sent me this…

Thank you for choosing Verizon. I have received your email dated 10/29/2015 regarding that  want to know why a new customer would be able to get Fios Internet speeds of 100 and/or 150 Mbps while existing customers can not. I apologize for any frustration or inconvenience this has caused. My name is Karen, and I will be happy to assist you. I will also review the account to make sure you are getting the best value.

Thank ou [sic] for your interest in our products and service.

We apologize for the delay in our response and regret any inconvenience to you.

Unfortunately the connection speeds of 100 and 150 Mbps are not availble [sic] to you.

The decision to only offer the connection speeds of 100 and 150 Mbps was made at corporate management level. Unfortunately it has not been advised to us of why the decision was made to only offer the 100/150 Mbps to new customers and not to existing customers other than that there is technical limitation of upgrading the equipment for existing customers who already have Fios working at their location.

I’m very sorry for the inconvenience and frustration this will cause you and your family.

This after Verizon “added more versatility to its industry-leading service” by apparently adding a 100 Mbps tier in between 75 and 150 Mbps, according to this swell press release boasting about their latest promotions in my specific market, published a month before I found myself unable to order them!

According to this release, “Verizon is the only communications provider to offer a symmetrical speed tier of 100/100 Mbps, or any Internet services offering the same fast download and upload speeds, in the Florida market” … but only if you’re a brand new customer for them because if you’ve already got an account, your business isn’t worth the effort.

Seriously, how insulting is that?!

Here I am, a long-standing customer and very much a fan of the service, and I want to give Verizon more money, and if I had submitted my order two weeks earlier before this asinine decision was made, I could’ve! But now my extra $20/month isn’t good enough for Verizon. They’d be happy to sell me 300 Mbps service at an additional $110/month, but sorry, the next logical upgrade that makes sense for my account isn’t available because they’ve arbitrarily dog-eared that speed for new customers only.

What sense does that make? My next door neighbor could call and get 150 Mbps service installed tomorrow, or hell, my wife could call and apparently get it installed at our same address … as long as she sets up a new account because this account – the one that’s 3 years old and has earned Verizon upwards of $7,000 over the life of our service – isn’t eligible for an upgrade.

Sorry / not sorry.

You wouldn’t do that with HBO or Cinemax – “I’m sorry, I know that you’ve had an account for 3 years, but we’ve reserved those premium movie channels to entice potential sales from our new customers only. We regret any inconvenience that this causes you…”

Traditionally it’s a poor business practice when one of your loyal customers wants to give you more money and you arbitrarily refuse to take it, but apparently a fiber customer in the hand isn’t worth two in the bush when you’re Verizon.

But it’s not too hard to fix this! We schedule an appointment, you send out the technician who makes my dog bark for hours on end while he tinkers around outside, he installs a new ONT on the side of my house and gives me my sweet, new Quantum router, I start paying you an extra $20/month for the service I’ve quite literally been salivating for all summer long, and in the end we all win!

You get some extra money without having to sell me on the upgrade I already want and I get an even faster Internet speed to rub in the faces of everyone I know who isn’t lucky enough to live in a FiOS market … which admittedly is almost everybody I know.

Verizon, I love FiOS and I don’t want to fight with you. I just think it’s bullshit that you’re offering better deals to the new customers you don’t even have yet than you’ll give me, who’s been here this whole time. I’ve come to accept that your best promotional pricing is for new customers, and my bill jumped up a ways after my contract renewed, but this is about the service itself – to tell me that I can have one Internet speed but not another is just cruel.

We can get through this, you and me, but honey, right now you’re being kind of an asshole. Please call me when you’re ready to grow up.

Digging deeper into server issues…

October 29, 2015 11:49pm

I think I’m making progress, albeit in a number of avenues that I wasn’t necessarily expecting!

One thing that’s standing out as more of a problem than I would’ve thought is that, simply put, I’ve got a lot of WordPress installs on this server!!! Sorting through them all, I came up with something like 20 installs in total, which is amusing because I’ve only got like 10 domains currently registered … apparently I’ve built up a healthy list of test installs in addition to some really old ones that I never post to and thus don’t really think about.

Now I wouldn’t have thought this to be much of an issue until I was able to start digging into specific processes running slow along with their associated URLs and database queries, and it turns out that WordPress’s own cron function has been at least part of the source of my problems, for a couple of different reasons:

A) Across those 20 installs, a number of them weren’t up to date – some of them being so old that I had to update them manually (gasp!) – and more prominently, there were some outdated plugins that either also needed to be brought current or, in some cases, removed altogether where I’m not even using them anymore (i.e. I used to heavily use a backup-to-Dropbox plugin, but I’ve since come to rely more on system-wide backups and I don’t even have Dropbox installed on my laptop today).

B) Also, I still need to learn more about how WP-Cron actually functions, but I think there may have been some cases where many sites were all trying to run at the same time, which is just a general recipe for disaster! From what I’ve read so far, it sounds like WP-Cron actually triggers based on page views … which makes me wonder how my test sites were even triggering … but one option here might be to disable WP-Cron and instead tie them into real cron jobs at the OS level so that I can scatter them throughout each hour instead of triggering arbitrarily (roughly sketched below).
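For my own reference, here’s roughly what that would look like – just a sketch with placeholder domains and made-up timing, not something I’ve actually rolled out yet:

```bash
# 1) In each site's wp-config.php, stop WordPress from firing its own cron on page views:
#       define('DISABLE_WP_CRON', true);
#
# 2) Then trigger wp-cron.php from real (OS-level) cron instead, staggered across the hour
#    so the installs don't all pile on at once – example crontab entries:
*/15 * * * *        curl -s https://example-site-one.com/wp-cron.php?doing_wp_cron > /dev/null
5,20,35,50 * * * *  curl -s https://example-site-two.com/wp-cron.php?doing_wp_cron > /dev/null
```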

I’m not entirely sold on it just yet, but realizing that I have so many separate installs definitely reinforces my curiosity around WordPress multi-site, which I was playing around with earlier this summer and actually abandoned when I decided not to redesign some of my sites. But from a resource management perspective it still might make sense, even if I just try to pull some of the like-minded sites into one install, or possibly even a bunch of the test sites, just to help get the numbers down!

All in all it’s a work in progress, but so far my last high load notification was at 5:50 pm last night and I updated a bunch of my installs in the hours since, so hopefully I’m starting to work through the issues and load is at least going down a bit! Mind you, it doesn’t help that I don’t really know what’s a reasonable daily page view volume that my current server should be able to handle … and granted, now that I’ve started playing around with routing some of my traffic through Cloudflare, I’m sadly reminded about how much junk traffic a server gets that never even becomes legitimate pages served (like brute force attacks, scanning, etc…).
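Just to put a rough number on that junk traffic, by the way, counting the usual brute-force targets in the Apache access logs does the trick – something like this, where the log path is only a guess at wherever your particular setup keeps them (cPanel’s tend to live under /usr/local/apache/domlogs):

```bash
# Count hits against the usual WordPress brute-force targets in a domain's access log
grep -cE 'wp-login\.php|xmlrpc\.php' /usr/local/apache/domlogs/example.com
```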

One other tool that I’ve found helpful specifically in pinpointing the cron issues has been New Relic, which is actually a probe that I had to install under PHP on the server itself, but in turn it monitors all sorts of neat stats about processing times and whatnot. I’m just using the free Lite version they offer for now and it’s already been enlightening – definitely worth checking out!

[Screenshot from 2015-10-29: New Relic stats for the server]

A Look at Web Page Performance…

October 28, 2015 5:49pm

I’ve been experimenting with performance tuning on my web server the last couple of days to try and work out some bizarre high-usage issues, when (unfortunately) in reality none of my sites really garner enough traffic to warrant the spikes that I’ve been seeing.

Some of it is common sense stuff like troubleshooting slow-performing WordPress plugins – for example, apparently W3 Total Cache was dragging down my response times even though it wasn’t active at the time, which led me to reinstalling it and then actually setting it up correctly, because I think I disabled it a few months ago out of sheer frustration.

I also made some tweaks to my Apache/PHP build, resulting in my having to rebuild no fewer than a dozen times last night – each time I’d find a new option that I could only enable by rebuilding! So if for some reason you found one of my sites down last night, that would be the reason why… 😛

In the midst of all of this, I’ve also been trying a number of different web page speed tests to try and gauge my progress through the whole mess – Google PageSpeed Insights is usually my go-to for general tuning, but I also like WebPageTest.org because it gives me waterfall graphs of every single element that needs to be loaded, and I also recently discovered GTmetrix, which is cool because it will run several tests at once and gives you the option to see results for multiple sites side-by-side … something I normally have to do in separate windows.

Anywho, one of the views that I found interesting from WebPageTest.org is where they break down the actual requests and bytes coming from each individual domain, because obviously the lower the number for either of those stats, the faster your page will load. Below is what Just Laugh’s homepage looks like…

[Screenshot from 2015-10-28: WebPageTest.org domain breakdown for Just Laugh’s homepage]

What’s interesting here is that really only a select few of these domains are related to actual content – primarily justlaugh.com and then wp.com because our images use WordPress.com’s CDN via the Jetpack plugin.

All of the Facebook hits are for the single Like box in the footer, and the same with Twitter.

We also have a single ad widget for Amazon, along with a couple of Google Adsense units, and then we use both Google Analytics and WordPress Jetpack for analytics.

So really, the totals break down something like this…

  • Content – 75 requests for 509k
  • Social Media – 51 requests for 667k
  • Advertising – 51 requests for 634k
  • Analytics – 10 requests for 18k
  • GRAND TOTAL – 187 requests for 1,828k

Now don’t get me wrong – there’s certainly value that comes from each of those other three sources otherwise I wouldn’t use them in the first place, but it still says something interesting about web content in our times when social media & advertising tie-ins together make up more than double the actual real content that a website has to offer to drive those other things in the first place! And before you say that it’s really kind of my fault that the breakdown is like this because I designed the site, I would add that really, Just Laugh is extraordinarily conservative with regards to advertising compared to other media sites that still use pop-ups and wraparounds and those god-awful Taboola ads that currently infect 80% of the web today.

Of course, the real exercise here is simply how to improve on these numbers, which is tough because most of these requests are still measured in milliseconds and many are done in parallel. The whole page currently takes right around 10 seconds to render, which in some ways seems terrible but in comparison with sites like CNN and The Onion it’s actually about right in the middle.

Could I shave off a couple of seconds by eliminating the Facebook and Twitter widgets, or possibly even the Amazon widget???

Possibly, but would the savings really be worthwhile in the bigger picture when gaining Facebook and Twitter followers is also a worthwhile goal???

Clearly I have no idea, but it’s always fun to have real, actual data to wade through to consider things like that!

On the other hand, at least I can say for a fact that my caching is now working correctly because for repeat views, all of those 3rd party requests are pretty much the only things still getting reloaded… 😀

[Screenshot from 2015-10-28: WebPageTest.org repeat view results]

Speed Testing via the Linux Command Line

October 7, 2015 4:49pm

Last night I relocated a bunch of computer stuff – namely my home server and router – to our bedroom closet, which in a positive way got it more up and out of the way so that we only have to listen to fans spinning when we’re picking out clothes to wear, but in a not-so-positive way, it means that at least until I climb into the ceiling to run ethernet cables around the house, my rig in my office will be relying on wifi instead of a wired network connection for a while.

Now this didn’t really seem like much of a big deal until this morning, when I noticed that Verizon dropped its prices on the higher Internet tiers and now upgrading to 150 Mbps is only an extra $20 instead of $50!

And mind you, I don’t necessarily need most of that speed here at my desktop, but I am somewhat addicted to speed tests just to randomly remind me how awesome my Internet connection is these days, and not for nothing but speed tests over wifi kind of suck.

That said, my home server is still hard-wired because it’s literally sitting right next to the router in the closet now, so a bit of quick Googling found this nifty post that provides a great walkthrough of how to run speed tests directly from the command line on your friendly, neighborhood Linux box…

I already had Git installed, so it was maybe 30 seconds to pull down the speedtest-cli script and copy it into /usr/local/bin, and then I was off to the races! I’m pretty much a sworn user of Speedtest.net, so seeing that it interfaced directly with them was an easy win. The customization is neat, too – you can either run against the default (fastest) host or choose your own, in addition to getting the link for your results badge to wear so proudly.

My favorite feature, though, is how simple they made batch testing – now you can actually pick multiple locations around the world and kick them all off in rapid succession. Normally I default to my web host up in New Jersey, because I think testing with a local server here is stupid when we don’t really have a lot of data centers for major websites in the area anyways, but they were admittedly running a little slow this morning, so it was neat to be able to also throw in LA and Miami as two other corners of the country to help round my test results out!
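For my own future reference, the whole thing boils down to roughly this – the server IDs in the loop are placeholders, since the --list flag spits out the real ones near you:

```bash
# Grab the script and drop it somewhere in $PATH
# (the script's filename inside the repo has varied over time, e.g. speedtest_cli.py vs. speedtest.py)
git clone https://github.com/sivel/speedtest-cli.git
sudo install -m 755 speedtest-cli/speedtest_cli.py /usr/local/bin/speedtest-cli

# List nearby servers to get their IDs, then run against a handful in rapid succession
speedtest-cli --list | head -20
for server in 1234 5678 9012; do   # placeholder server IDs – substitute real ones from --list
    speedtest-cli --server "$server" --share
done
```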

[Screenshot from 2015-10-07: speedtest-cli results against multiple test servers]

Now to see if I can find that promo where they were giving away the free router to upgrade to 150/150… 😛

Envisioning My Automated Home…

August 30, 2015 1:21am

So I spent some time reviewing my home server/network setup as it stands so far and it got me thinking about what might be the next steps on down the road.

I’m fairly happy with my media server, and aside from squeezing in maybe one more hard drive to satiate demand, it’s pretty much as far as I can take it until I can drop a couple of grand into expanding to new rack-mount hardware and a separate, high-end NAS for storage.

Backups are good, too, as all of my most important files (writing, pictures, tax and finance stuff) are triple backed up between a local backup and two independent cloud destinations, and just tonight I’m finally looping my wife’s devices into the schema so that the bajillion photos of Christopher that she takes will be safe and secure, too! 😉

So what’s next???

At first I started thinking about trying to automate our Christmas light display outside, though I’m not sure what kind of costs are reasonable on that front. I’ve seen a few setups where people just set up controls to flip the individual strands on and off, though I’m not sure how safe that is for the standard, residential Christmas lights that one buys at Home Depot.

I also briefly researched the idea of going the landscape lighting route because it’s probably more durable for the task, and I found this custom LED system that looks really freaking sweet, but the fixtures alone are about $100 apiece … I’m kind of afraid to ask how much the controller that runs everything costs!

Maybe some day… 

Then there’s your more traditional automated home offerings – security system, cameras, thermostat, etc… – and although I really have no idea what I want at this point, maybe it would be something fun to tinker with until I both figure that out and hit the lottery to be able to fund it all!

I figure I’ve got a couple of years to get there, anyways, because I honestly see this house that we’re in right now as more of a test house, at least in this regard. Our goal in the next five years is to be able to build our dream house where we’ll ideally spend the rest of our days, so that’s where we’ll want to splurge on all of these kinds of bells and whistles. But just like our home server has been resurrected and grown so much this year, it’s still fun to experiment and play around with what I can get my hands on in the meantime, until we build up to that point of dropping thousands of dollars on network-connected fixtures and wiring the entire house to best fit our modern, connected lifestyles! 😀

In the meantime, I can still see a more immediate need to at least hardwire connectivity to the rooms where Plex gets used, and I’m thinking we might splurge and upgrade the FiOS to that 150 Mbps package they offer before long … because I just learned that apparently they’ve got a promo right now giving away the $200 router we’d need for the upgrade for free!

We’ll see – maybe come Christmastime I’ll start tinkering with a single network camera or controlling the star on the Christmas tree via computer … gotta start somewhere.

After adding a 4th hard drive to my home server today, bringing the total storage space up to an unexpected 20 TB, I’ve been thinking a lot about backups and what my dwindling options are as this beast continues to grow even larger and awesomer than I would’ve ever expected only six short months ago…

I’m definitely well past the “it’s just TV and movies” phase and am now finding myself much more in the “I love this thing, and it would be a huge pain in the ass to replace!” phase instead! The trouble I’m gauging, though, is how to effectively manage a backup that big without spending a small fortune or driving myself absolutely insane!

Ironically, my original plan when I bought the two 4 TB drives to start this project was that one was supposed to be a backup of the other, however by the time I started getting my hands wet, not only had I concocted a plan to fill both drives that I already had, but I swiftly had another 6 TB on the way to give me “some wiggle room,” too.

Well, now that said wiggle room has flown the coop and I coincidentally just added another backup-less drive to my existing server, I’m starting to think that I need to reassess my options … first and foremost because there’s literally not any space left in the case I have today for more drives to back up to anyways… 😛

Online backups are pretty much out, and note that I’m only talking about media server backups here – documents, photos, etc… are now triple backed up (something I need to write about one day) – so vital stuff is absolutely safe. It’s just my collection of Marvel movies and twenty-some-odd seasons of The Simpsons that I’m concerned about here today!

Anyways, online backups are out primarily for two reasons:

  • Cost – A real backup service like Amazon S3 would be at least $200/month for 20 TB of data, even at their cheapest Amazon Glacier prices, and I’d be hesitant to push the luck of any of the “unlimited plans” that folks like Crashplan offer with that volume of data.
  • Sensitivity – Let’s see, how do I put this gently??? I may have discovered while I was ripping my DVD collection that it was far quicker to just download copies of them off the Internet, and so even though I know that I have a huge crate of discs in my garage to justify such a huge library of technically illegal content, it’s not exactly something I’d entrust to a 3rd party who might be legally inclined to disagree… 😉

This pretty much limits me to local backups, which sucks because Florida gets hurricanes and whatnot occasionally, but there’s not much I can do.

*note: much…

So when looking at local backups, my latest plan up until this afternoon was going to be to essentially build a backup server that would be identical to my media server, except that its entire job would be to occasionally wake up, make copies of everything on the primary server, and then immediately go back to sleep.
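(For what it’s worth, the copying part of that plan is the easy bit – a nightly rsync pull over SSH from the backup box would cover it, with the hostname and paths below just being placeholders:)

```bash
# On the backup server: pull anything new or changed from the primary media server,
# then the box can go right back to sleep (run from cron, e.g. once a night)
rsync -aH --delete --partial mediaserver:/mnt/media/ /mnt/backup/media/
```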

This plan made sense until I started roughing out costs in my head for the next generation of my media server, because realistically my next expansion will need to put it into a proper rack-mounted case, and I’d also like to throw some beefier hardware at it, like a dual-chip motherboard, a multi-port gigabit LAN card with bonded channels, and a RAID setup for the drives for better throughput and redundancy.

That alone is going to be expensive because I’ve never done RAID before, and I’m realizing that I’ll need enough drives to build the entire array at once just to have someplace to migrate my data over to. Still, aside from being a healthy splurge, it’s not terrible … until I remember that I have to double everything if I want to run a backup server alongside it like I had originally been planning! 🙁

That all said – and the excessive flashy lights that a rack full of noisy hard drives would create aside – I had an interesting idea today that might change most of that for the better, and it’s this … why do I need a live backup at all for data that’s almost NEVER changing???

Again, these are movies and TV shows, not working documents or even photos that I’m editing, so once I’ve got a season of The Simpsons ripped from Blu-ray, that’s pretty much it until they re-release everything again on hypercube or holographic projection or whatever new-fangled media they come up with next to get me to buy 25 seasons of cartoons all over again!

So why not, I thought brilliantly while scrubbing myself clean in the shower, just take everything that’s static – pretty much every movie, and all of the older TV seasons – and just stick them on hard drives, and then put the hard drives in a waterproof case to throw in the closet or wherever?

It would save on moving parts because they’re literally only going to get written to a handful of times, it sort of covers that whole protection-from-the-elements thing if they’re kept someplace safe, and … I think it kind of makes sense as a lower-cost solution without all of the bells and whistles that frankly are kind of frivolous anyways…

I’d still need a system to keep track of what’s been offloaded to the drives for when it comes time to fill a new one, but I’d guess that software probably already exists for tape backups that could be repurposed. I guess I’d have to test them every couple of years just to make sure that they’re still alive, but that could be part of adding new data to the collection, which I’m sure would be a manual/annual effort or something anyways.
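Honestly, it might not even need dedicated software – a checksum manifest written onto each archive drive when it gets filled would cover both the “what’s on this one” question and the every-couple-of-years health check. Something like this, with the mount point being a placeholder:

```bash
# When filling an archive drive: record every file with a checksum (skipping the manifest itself)
find /mnt/archive1 -type f ! -name 'manifest.sha256' -exec sha256sum {} + > /mnt/archive1/manifest.sha256

# Years later, when testing that the drive is still alive: re-verify everything against it
sha256sum -c /mnt/archive1/manifest.sha256 --quiet
```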

I realized, as I was re-completing some seasons of shows that I’d had a long time ago but lost to a hard drive failure, that once they’re completed, they’re really not going anywhere at that point. So maybe I’ve been overthinking this whole massive backup situation when instead I can just drop the one-time cost for another set of drives and a hefty, padded case to store them in and be done with it!

Problem solved – now can I go back to playing in my data? 😉

So close! Then oops…

February 6, 2015 3:44pm

I thought I’d take a quick break right now to do what should’ve been some simple maintenance on my new media server. The latest hard drive I added was still set up as an external USB drive, so now that I’ve got the cables that I needed, I figured I’d simply power everything down, move the drive into the case, and that’d be that!

Oops.

When I booted back up, instead of the OS seeing a 4 TB disk that’s about half full, somehow it managed to see a 500 GB partition that it thought was my old drive and a 3.5 TB partition that was completely foreign to it. 😛

A quick Googling suggested that the USB adapter that I have probably has something in the controller that serves as a boot manager, so whereas I thought I was being all smart and formatted it to ext4 a week ago in preparation for today’s task, in effect using the USB adapter to do so kind of hosed up that little plan something fierce!
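In hindsight, the mismatch probably would’ve shown up if I’d compared what the OS sees over USB versus straight SATA before formatting – some USB adapters remap sector sizes on big drives, which can make the same partition table read completely differently. A couple of quick checks along these lines (the device name is just an example):

```bash
# Show each block device's size plus its logical/physical sector sizes as the kernel sees them
lsblk -o NAME,SIZE,PHY-SEC,LOG-SEC

# Show the partition table the kernel can actually read on the drive in question
sudo parted /dev/sdb print   # replace /dev/sdb with the real device
```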

So here we are now…

[Photo: USB adapter plugged in while the hard drive is still mounted inside the case!]

Thankfully I’ve still got just barely enough free space between my other three drives to split up all of the data that’s on the “external” one, so now I get to wait another umpteen hours to copy everything over to those drives, only to swap out the cables one more time as soon as it’s done, reformat, and copy everything back again!!!

[Screenshot]

😐

A Different Kind of Cloud

January 21, 2015 1:45am

I just saw this commercial and now I’m wondering – did we change the definition of what The Cloud really is, or have I just not known what it was all along???

The product is basically an external hard drive that’s accessible over the Internet – neat idea for the everyday user, but is anything connected to the Internet considered part of “The Cloud” nowadays?

I always pictured cloud storage to be online, distributed storage … redundant, in big data centers … in general, more reliable than just a consumer hard drive sitting underneath my desk. Backing up my family photos and important documents to the cloud means that if I have a fire, or a power surge, or somebody breaks in and steals my computer, all of those files are still safe.

Google Drive, Microsoft OneDrive, Dropbox – those are cloud services.

Maybe if it did something neat like auto-syncing to a real data center for free online backups – that would be the cloud, but this … this is just a network-shared hard drive where the Internet happens to be your network. 😛
