Currently, the preferred method of data protection among cloud giants such as Google is to replicate data across different locations (i.e., data centers) rather than to perform a true backup. They do this because a true backup is logistically impractical given the sheer volume of data they store. These companies have turned to replication because the assumed risk of all replicas failing simultaneously is extremely slim. That risk assumption may be accurate, but it does not account for unintentional data destruction.

From firsthand experience, we know that unintentional data destruction is real and may become one of the most prevalent issues with the cloud. There is no accounting for human error: a developer writes a bad query or deletes some data, and the mistake replicates across every location that stores your data. For example, say you wanted to delete all users whose name equals "John Basso" but accidentally wrote a command to delete all users greater than "John Basso". In the one second it takes to realize what you've done, the mistake has replicated across all of your systems. Without a backup, there is no way to restore the data; there is no undo button for system operations.
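To make that typo concrete, here is a minimal sketch in Python using an in-memory SQLite database. The table, column, and names are hypothetical; the point is the one-character difference between `=` and `>` in the WHERE clause.

```python
import sqlite3

# Hypothetical users table with a handful of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    [("Alice Adams",), ("John Basso",), ("Mary Smith",), ("Zoe Young",)],
)

# Intended command: delete exactly one user.
#   DELETE FROM users WHERE name = 'John Basso'   -- affects 1 row

# The typo: '>' instead of '='. Strings compare lexicographically,
# so this deletes every user whose name sorts after 'John Basso'.
cur = conn.execute("DELETE FROM users WHERE name > 'John Basso'")
print(cur.rowcount, "rows deleted")  # 2 rows: 'Mary Smith', 'Zoe Young'
```

In a replicated system, that DELETE is just another write, so it propagates to every copy of the data as faithfully as any legitimate change.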

In our (sometimes hasty) move to the cloud, we've assumed that data redundancy is the same as data backup. Redundancy protects against physical failures, but it doesn't protect against people. Because of the Death of IT, many system admin tasks are being passed to developers who, by trade, by culture, and by experience, aren't as familiar with the rigor and nuances involved in backing up data.

What now? You can either confirm that you have backups of your critical data, or acknowledge that you don't and accept that you're putting yourself at risk.
