
Who cares about backup?

Chris “Gonzo” Gondek

Realistically? Not many folks. Backup is almost always seen as an afterthought, as an insurance policy that you only concern yourself with when it is time to make a claim, or groan when the premium (software license) renewal comes around. Makes me think of Chris Rock’s stand-up routine, where he says it shouldn’t even be called insurance, it should be called “in case sh*t” as in, in case sh*t happens, I get the money. But he also goes on to say, “now if sh*t don’t happen, shouldn’t I get my money back?” We all know that’s not how insurance really works, but we have the same sentiments towards backup.

Why do most of us think of backup this way? It’s not an exciting topic, I know; it’s more of a necessary evil in most people’s view. That’s because backup is difficult, costly, time-consuming, risky, and lately it only seems to be getting more complicated.

What we do care about is recovery. This is the claim part of the insurance, and just like with a real-life accident, you want the claims experience to be as easy and painless as possible. I have been a data protection professional for over 20 years and I still hear the term “backup service level.” There is no such thing as a backup service level; there are only recovery service levels, based on recovery point objectives (RPO) and recovery time objectives (RTO). These are not to be confused with disaster recovery or high availability, which are measured differently and are separate data discipline conversations that all roll up into a business continuity strategy. We’ll talk about those in another blog.
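To make that distinction concrete, here is a minimal Python sketch of checking a single recovery event against its objectives. The one-hour RPO and four-hour RTO are illustrative assumptions, not recommendations; real objectives come from business requirements.

```python
from datetime import datetime, timedelta

# Illustrative values only, not a recommendation.
RPO = timedelta(hours=1)   # maximum tolerable data loss
RTO = timedelta(hours=4)   # maximum tolerable downtime

def meets_recovery_slas(last_backup: datetime,
                        failure: datetime,
                        restored: datetime) -> dict:
    """Check one recovery event against its RPO and RTO."""
    data_lost = failure - last_backup   # work done since the last recovery point
    downtime = restored - failure       # time the service was unavailable
    return {
        "rpo_met": data_lost <= RPO,
        "rto_met": downtime <= RTO,
        "data_lost": data_lost,
        "downtime": downtime,
    }

print(meets_recovery_slas(
    last_backup=datetime(2022, 3, 31, 9, 0),
    failure=datetime(2022, 3, 31, 9, 45),
    restored=datetime(2022, 3, 31, 12, 0),
))
```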

However, recovery depends on the backup being done properly in the first place, so we all need to care about it. But why? In fact, some would argue that the question shouldn’t be who cares about backup, but WHY are we still backing up? Wasn’t going to the cloud and moving to Software-as-a-Service solutions supposed to stop that pain once and for all? Well, no. If anything, it has just given us more dynamic workloads and data in more environments and ecosystems to protect, exacerbating the problems of backup. The fact is that as long as we have data, we will always need to protect it with a backup. Just like with our homes: I know more people who have home and contents insurance than security cameras, bars on their windows, and back-to-base alarm systems. More of us prepare for when, not if, we experience some sort of loss, and it’s the same in business when it comes to prevention (perimeter security technologies) vs. recovery (home and contents insurance).

So, why do we still back up? There are (really) only two reasons:

1. Because we want to recover from data loss scenarios

If this were the only reason to back up data, we would only ever keep our most recent backups as recovery points. Data is mission critical to all business functions these days as more and more operations are digitised, especially on the financial and communications side for most businesses globally. Data loss scenarios come in many shapes and forms and will constantly evolve. (To learn more about this, see my most recent blog, The Last Real Threat to Data.) We back up because we want to provide some assurance that data can be recovered.

2. Because we have to

One could argue that this should be the first reason, but realistically, not all workloads are mandated to be protected. Many now are, though, ever since various countries mandated that data be retained for compliance. This became especially common in the early 2000s, after some high-profile scandals; most businesses globally must now demonstrate that they can produce (in that case, financial) data from long periods of time. This becomes a conversation about retention. It is also, usually and historically, the higher-spend portion of backup: the longer you need to retain data, the more cost, risk, lock-in, and maintenance you face, and the more data you end up keeping.

OK, so we know we still have to back up, even in the cloud and even with SaaS solutions. As I have said in the past, the cloud won’t do our mess for less, and there is a shared responsibility model we must all adhere to.

So then, what is a backup anyway? Most cynical folks will tell you it’s a “glorified copy” of data, and that is effectively true: we make a copy so that we can restore from that copy. Simple, right? The problem is that we have an unstoppable, insatiable need for more data in more places than ever before. This brings the laws of physics into the equation: as data gets bigger, it becomes increasingly difficult to make that glorified copy in a reasonable amount of time (we call this a backup window, or schedule). In fact, it became impossible to do with “streaming” technologies, and thus the “snapshot” was born. A snapshot creates a “point-in-time” copy of data within a storage system. Snapshots can be taken very quickly and, in most instances, very efficiently, providing a very fast, if not instant, “recovery point”.
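Here is a toy copy-on-write sketch in Python that shows why snapshots are near-instant: taking one records block references rather than copying the data itself. This is conceptual only, not how NetApp or any particular storage system implements snapshots.

```python
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data
        self.snapshots = []

    def snapshot(self):
        # "Instant": we copy the block *map*, not the blocks themselves.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        # New data lands in a new block; existing snapshots still reference
        # the old block, preserving the point-in-time view.
        self.blocks[block_no] = data

vol = Volume(["alpha", "bravo", "charlie"])
snap = vol.snapshot()          # near-instant recovery point
vol.write(1, "BRAVO-v2")       # the live volume diverges
print(vol.blocks[1])           # BRAVO-v2
print(snap[1])                 # bravo -- the point-in-time copy survives
```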

So why not just take snapshots forever and keep those as recovery points? In an ideal world, this could satisfy all our backup needs. The problem is that not all data is created equal. These recovery points are ideal for reason 1 (recovering from a data loss scenario), because we want our most recent data back fast. As data ages, however, it loses this value, so it needs to be “tiered off” to a lower-cost, and typically lower-performance, retention tier.

This is where reason 2 (keeping backup data because you have to) comes into play. In backup terminology, we usually refer to longer-retention copies as secondary and tertiary copies. Most of this data is rarely recovered for data loss reasons; typically it is retrieved for audit purposes or for niche use cases like research and analytics. Ideally you want the best of both worlds: super-fast backups taken quickly, efficiently, and easily, with some of them then moved or copied to another destination where they can be kept for future access. These days, object storage is increasingly the popular target for the longer-term retention copy, displacing tape and offering greater data durability.
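As a rough illustration of what “tiering off” might look like, here is a hedged Python sketch using boto3 (the AWS SDK). The bucket name, backup directory, and 30-day cutoff are all hypothetical assumptions for the example, not recommendations.

```python
import boto3
from datetime import datetime, timedelta
from pathlib import Path

BUCKET = "example-long-term-backups"   # hypothetical bucket name
CUTOFF = timedelta(days=30)            # assumed "aged" threshold

s3 = boto3.client("s3")

def tier_off(backup_file: Path) -> None:
    """Move an aged backup copy to a cheaper, more durable retention tier."""
    age = datetime.now() - datetime.fromtimestamp(backup_file.stat().st_mtime)
    if age < CUTOFF:
        return  # still "hot": leave it on the fast tier
    s3.upload_file(
        str(backup_file), BUCKET, f"retention/{backup_file.name}",
        ExtraArgs={"StorageClass": "GLACIER"},  # low-cost archival class
    )

for f in Path("/backups").glob("*.bak"):   # hypothetical backup directory
    tier_off(f)
```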

This approach not only addresses the two reasons we need to back up, it also satisfies a popular backup rule known as the 3-2-1 rule (3 copies of the data, on 2 different media types, with 1 copy offsite), which further helps to ensure data recoverability.
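Because the rule is just counting, it is easy to sanity-check. Here is a minimal Python sketch; the copy records are illustrative examples.

```python
def satisfies_3_2_1(copies) -> bool:
    """At least 3 copies, on at least 2 media types, with at least 1 offsite."""
    media = {c["media"] for c in copies}
    offsite = [c for c in copies if c["offsite"]]
    return len(copies) >= 3 and len(media) >= 2 and len(offsite) >= 1

copies = [
    {"media": "primary-disk", "offsite": False},   # production data
    {"media": "snapshot",     "offsite": False},   # local recovery point
    {"media": "object-store", "offsite": True},    # long-term retention copy
]
print(satisfies_3_2_1(copies))  # True
```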

It seems like a snapshot and a bucket could really deliver everything we need. If only there were an extremely efficient way to move the necessary data from the snapshot to the bucket, and then wrap a nice user experience around it all: one that is easy to set up, automates the process and schedule, and intuitively lets us navigate and recover our data ourselves…

We here at NetApp have been delivering this capability for many years, and our industry-leading storage, snapshot, and replication technologies have made us consistently pivotal in platinum recovery SLA conversations. We have a strong alliance community that leverages these technologies. To bring the best of these capabilities to both on-premises and cloud-based workloads, and to further enhance the user experience of data services across the hybrid multicloud, we built the NetApp Cloud Backup Service. Cloud Backup Service is “activated” in our Cloud Manager and immediately begins to take fast, efficient snapshots of data; then, using sophisticated NetApp snapshot technologies we call “SnapDifferential” and “SnapMirror to cloud,” it moves only the changed block data to a cloud storage destination or an on-premises object storage destination.
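To give a feel for why moving only changed blocks is so efficient, here is a conceptual Python sketch of block-level change detection. It is illustrative only and in no way represents NetApp’s actual SnapDifferential or SnapMirror implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for the example

def fingerprints(data: bytes) -> dict:
    """Hash each fixed-size block so changes can be detected cheaply."""
    return {
        i: hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    }

def changed_blocks(previous: bytes, current: bytes) -> list:
    """Return offsets of blocks that differ between two recovery points."""
    old, new = fingerprints(previous), fingerprints(current)
    return [i for i in new if old.get(i) != new[i]]

prev = b"a" * 8192 + b"b" * 4096
curr = b"a" * 8192 + b"X" * 4096      # only the last block changed
print(changed_blocks(prev, curr))     # [8192] -- one block to send, not three
```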

This World Backup Day (March 31st), I encourage you to rethink your data protection strategy and take a new, fresh perspective on backup. It can be easier, faster, more cost effective, and less of a necessary evil.

To understand more of what I’ve been talking about, check out my “Introducing NetApp Cloud Backup Service in 60 seconds or less” video below.

Chris “Gonzo” Gondek

My mission is to enable data champions everywhere. I have always been very passionate about technology, with a career spanning over two decades specializing in data and information management, storage, high availability, and disaster recovery solutions, including virtualization and cloud computing.

I have a long history with data solutions, having gained global experience in the United Kingdom and Australia, where I was involved in creating technology and business solutions for some of Europe’s and APAC’s largest and most complex IT environments.

An industry thought leader and passionate technology evangelist, I blog frequently about all things data and am active in the technology community, speaking at high-profile events such as Gartner Symposium, IDC events, AWS Summits, and Microsoft Ignite, to name a few. I translate business value from technology and demystify complex concepts into solutions that are easy to consume and procure. A proven, highly skilled, and internationally experienced sales engineer and enterprise architect, I have worked for leading technology vendors and developed skills across almost all enterprise platforms, operating systems, databases, and applications.
