Organisations must always be prepared for a disaster scenario. Whether it’s a cyber-attack, a natural disaster or a system failure, companies of all sizes need to formulate disaster recovery plans that will allow them to continue to operate when something goes wrong.
The traditional approach to disaster recovery, particularly for large enterprises, has been to build and maintain a secondary datacenter at a safe distance from the primary site, connected to it by expensive high-speed WAN links. Doubling infrastructure costs to keep a fully operable secondary datacenter on standby just for DR has never been viable for companies with smaller IT budgets.
A more economical choice is a backup solution that ensures data is not lost. Most enterprises ship incremental daily backups to off-site storage, an approach that leads to recovery time and recovery point objectives measured in hours or days.
But backup solutions, as we know them, are built on an archaic principle of scheduled backup windows and lengthy, productivity-halting restore sessions. Waiting hours or even days for availability is simply unacceptable by modern enterprise standards.
The problem with the backup and restore approach is that it is error-prone, slow, and tends to fail when needed the most. The most common comment from companies that have been hit by a major data loss incident, such as a ransomware attack, is that they realised only in retrospect that their backups were not as good as they thought. This caught them off-guard with a recovery process far more complicated and time-consuming than they had imagined.
The takeaway from this is that in a world of rampant cyber-crime, ransomware, and state-sponsored attacks, legacy backup solutions are woefully inadequate at keeping data secure. Organisations need to shift their emphasis from backup and restore to intelligent storage solutions that are self-defending. A new security strategy is needed, one based on three major innovations in data storage: continuous data protection, instant disaster recovery, and proactive disaster prevention.
Continuous data protection
Periodic backups based on backup windows are being phased out in favour of modern storage solutions that provide continuous data protection in the cloud. Continuous data protection is a technique that captures every change to data in real time and streams those changes to a cloud-based, immutable repository. The moment a file in use is changed, that change is captured and stored. This is in contrast to traditional backup techniques, which typically capture data only once or twice per day. Continuous data protection allows recovery from data loss far more quickly and easily, and it allows recovery to any point in time. For example, if an organisation's data is damaged by ransomware, it can simply revert to a very recent previous version, minimising the amount of lost work.
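As a minimal sketch of the capture-every-change idea, the `ContinuousProtector` class below (a hypothetical name, not a real product API) snapshots a file into a write-once, timestamped version store whenever its contents change, and can restore the file as it existed at any point in time. In a real deployment the repository would live in the cloud and captures would be driven by filesystem events; here `capture()` is simply called whenever a change may have occurred:

```python
import hashlib
import time
from pathlib import Path


class ContinuousProtector:
    """Capture every change to a file as an immutable, timestamped version."""

    def __init__(self, source: Path, repo: Path):
        self.source = source
        self.repo = repo
        self.repo.mkdir(parents=True, exist_ok=True)
        self._last_digest = None

    def capture(self):
        """Snapshot the file if its contents changed since the last capture."""
        data = self.source.read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        if digest == self._last_digest:
            return None  # nothing changed, nothing to record
        # Each version is named by capture time and is never rewritten,
        # which is what makes the repository effectively immutable.
        version = self.repo / f"{time.time_ns()}_{digest[:12]}"
        version.write_bytes(data)
        self._last_digest = digest
        return version

    def restore(self, point_ns: int) -> bytes:
        """Return the newest version captured at or before the given time."""
        versions = sorted(self.repo.iterdir())
        eligible = [v for v in versions if int(v.name.split("_")[0]) <= point_ns]
        if not eligible:
            raise FileNotFoundError("no version exists at or before that point")
        return eligible[-1].read_bytes()
```

Because every version is retained, reverting after a ransomware incident is just a `restore()` to a timestamp before the encryption began.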
Instant disaster recovery
Instant disaster recovery solutions replace lengthy restore sessions with the ability to immediately roll back any quantity of affected data, at file-level granularity, to any previous point in time. This removes the need to take systems offline for long periods in order to restore them. How can terabytes or petabytes of data be recovered instantly from the cloud? The trick is to perform the recovery process in the background, while users already have full access to the data. When a user accesses an already-recovered portion of the data, it is served immediately; when a user accesses a not-yet-recovered portion, it is fetched from the cloud in real time. The process can be completely transparent to users, who notice no downtime.
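The background-restore-with-on-demand-fetch pattern can be sketched as follows. This is an illustrative model, not any vendor's implementation: `cloud_fetch` stands in for a hypothetical call that retrieves one block from the cloud repository, a background thread restores all blocks sequentially, and `read()` always answers immediately, either from the recovered cache or by fetching the cold block on demand:

```python
import threading


class InstantRecovery:
    """Serve data immediately while a full restore runs in the background."""

    def __init__(self, block_ids, cloud_fetch):
        self.cloud_fetch = cloud_fetch
        self.recovered = {}          # block_id -> restored data
        self.lock = threading.Lock()
        self.pending = list(block_ids)
        # Users get access as soon as this thread starts, not when it ends.
        threading.Thread(target=self._restore_all, daemon=True).start()

    def _restore_all(self):
        for block_id in self.pending:
            self._fetch(block_id)

    def _fetch(self, block_id):
        with self.lock:
            if block_id in self.recovered:
                return self.recovered[block_id]
        # Fetch outside the lock; a block may occasionally be fetched twice
        # (foreground and background racing), but setdefault keeps one copy.
        data = self.cloud_fetch(block_id)
        with self.lock:
            self.recovered.setdefault(block_id, data)
            return self.recovered[block_id]

    def read(self, block_id):
        """Always returns data: cached if recovered, fetched on demand if not."""
        return self._fetch(block_id)
```

The user-visible effect is that a `read()` never waits for the overall restore to finish, which is what makes the downtime invisible.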
Proactive disaster prevention
The best way to protect data is to prevent disasters from happening in the first place. Next-generation storage solutions are beginning to use artificial intelligence and data-driven insights, combined with enterprise-grade antivirus and threat protection, to monitor data and identify potential threats. Data is then protected before a disaster strikes, the scope of lost or damaged data is easy to identify, and the recovery process becomes quick and easy.
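One simple, concrete form of such monitoring is watching the rate of file modifications: ransomware typically rewrites thousands of files in minutes, a pattern normal users rarely produce. The sliding-window detector below is a minimal sketch of that one signal (the thresholds are illustrative, not recommendations, and real products combine many signals):

```python
import time
from collections import deque


class ChangeRateMonitor:
    """Flag a burst of file modifications, a common ransomware signature."""

    def __init__(self, max_changes=100, window_seconds=10.0):
        self.max_changes = max_changes
        self.window = window_seconds
        self.events = deque()  # timestamps of recent modifications

    def record_change(self, now=None):
        """Record one modification; return True if the rate looks malicious."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        # Discard events that have slid out of the observation window.
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events) > self.max_changes
```

When the monitor fires, a self-defending storage system can snapshot the current state, quarantine the offending client, and mark the alert time as the recovery point, which is exactly what makes the later recovery quick and easy.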
Additionally, a zero-trust architecture, in which every user, device, or endpoint that attempts to connect to the network must be authenticated before gaining access, is a beneficial layer to add to the proactive disaster prevention toolkit. Over the past two years, traditional office-based work models have given way to a workforce distributed across home offices and remote offices. At the same time, sophisticated malicious actors are taking advantage of weakly secured edge locations. This distributed topology of work makes traditional solutions that rely on location to determine security posture mostly irrelevant. That is why zero-trust architectures, based on the principle of “never trust, always verify”, are now absolutely essential.
Multiple layers of security should be implemented to prevent not only intrusions but lateral movement of attackers within the corporate perimeter.
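To make “never trust, always verify” concrete, the toy sketch below checks a signed token on every single request, with no notion of a trusted network location. It is a deliberately simplified illustration using Python's standard `hmac` module; real zero-trust deployments use short-lived credentials issued by an identity provider and a managed key service rather than a hard-coded secret:

```python
import hashlib
import hmac

SECRET = b"demo-key"  # illustrative only; never hard-code keys in practice


def issue_token(user: str, device: str) -> str:
    """Bind a credential to a specific user *and* device identity."""
    mac = hmac.new(SECRET, f"{user}|{device}".encode(), hashlib.sha256)
    return f"{user}|{device}|{mac.hexdigest()}"


def verify_request(token: str) -> bool:
    """Verify every request, regardless of where on the network it came from."""
    try:
        user, device, mac = token.split("|")
    except ValueError:
        return False  # malformed token: deny by default
    expected = hmac.new(SECRET, f"{user}|{device}".encode(), hashlib.sha256)
    return hmac.compare_digest(mac, expected.hexdigest())
```

Because verification depends only on the credential and not on the source network, a request from a home office and a request from headquarters are treated identically, and a stolen token bound to one device cannot be replayed as another identity.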
A new era of intelligent data protection
It is the end of backup as we know it, and the beginning of a new era of intelligent data protection. As an industry, we should rise to the challenge, and deliver storage solutions that meet the needs of the modern enterprise. Storage solutions that proactively protect their precious contents from ransomware and keep their businesses running in the face of disaster. That is the future of data storage, and that is what enterprises need and deserve.