How Cloud Storage Helps Companies Move From Hot to Cold

Low-cost cloud backup solutions are transforming disaster recovery (DR) and data protection, opening the door for small businesses to adopt secondary DR capabilities that were previously prohibitively expensive.

As well as providing businesses with flexibility over stored data, the cloud offers a quick way to recover information after a disaster, with the added benefit that it can be recovered to any location of choice. Because cloud storage is more dynamic, automated and accessible from any internet-connected location, organisations that use it are believed to be up to twice as likely to recover data within four hours of a disaster as those with non-cloud solutions.

The majority of cloud providers now offer both block storage and object storage; Amazon, for example, offers Amazon Elastic Block Store (EBS) and Amazon Simple Storage Service (S3). Block storage remains the traditional option for enterprise workloads that require persistent storage. Applications consume block storage in the same way they would consume storage from a storage area network (SAN) device: files are split into uniformly sized blocks of data, with no metadata coupled to them.

Object storage, by contrast, stores data and files as distinct units without any hierarchy. Each object contains the data itself, an address and any metadata useful to the application using it. Object storage is highly scalable as well as durable, which makes it greatly advantageous: most object storage systems have mechanisms to replicate data, giving up to 11 nines of durability.
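As a minimal sketch of that model, the Python snippet below stores and retrieves an object in Amazon S3 with application metadata attached; the bucket name, key and metadata fields are hypothetical, and credentials are assumed to be configured in the environment.

```python
import boto3

# S3 client; credentials are assumed to come from the environment
s3 = boto3.client("s3")

# Each object is data plus an address (bucket/key) plus metadata
s3.put_object(
    Bucket="example-backup-bucket",      # hypothetical bucket name
    Key="archives/2015/q1-report.pdf",   # the object's address
    Body=open("q1-report.pdf", "rb"),
    Metadata={"department": "finance", "retention-years": "7"},
)

# Retrieval returns both the data and its metadata
obj = s3.get_object(
    Bucket="example-backup-bucket",
    Key="archives/2015/q1-report.pdf",
)
print(obj["Metadata"])   # {'department': 'finance', 'retention-years': '7'}
data = obj["Body"].read()
```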

Object storage can be effectively used for DR, archiving and cost-efficient cloud backup, and data stored in it is easily and quickly accessible. The main limitation and challenge of using cloud storage for backup and recovery is the initial seeding of data into the cloud. A typical data centre has many terabytes of data on-site, and migrating this to the cloud can be a time-consuming and costly task; even with a high transfer rate wide area network (WAN) pipeline, it can take days to move that much data. And even if you have the budget and patience to get your data into the cloud initially, WAN bandwidth can be a limiting factor when restoring from the cloud, greatly lengthening recovery time objectives (RTOs). As a result, many cloud services offer seeding programmes that allow enterprises to ship physical disks or tapes to the cloud facility so the initial set of data can be loaded directly. From that point on, any changed or new data can be transferred incrementally over the WAN. This incremental eternal backup strategy requires far less data to be sent over narrow and costly transfer pipes.
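As a rough illustration of the incremental approach (not any provider's official tooling), the sketch below uploads only files whose content hash has changed since the previous run; the bucket name, directory layout and manifest format are all assumptions.

```python
import boto3
import hashlib
import json
from pathlib import Path

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"         # hypothetical bucket
MANIFEST = Path("backup-manifest.json")  # local record of the last run

def sha256(path: Path) -> str:
    """Hash a file in 1MB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Load the hashes recorded by the previous backup run, if any
seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}

for path in Path("data").rglob("*"):
    if not path.is_file():
        continue
    digest = sha256(path)
    key = str(path)
    if seen.get(key) == digest:
        continue                         # unchanged since last run: skip
    s3.upload_file(str(path), BUCKET, key)
    seen[key] = digest

MANIFEST.write_text(json.dumps(seen))
```

After the initial seed, each run sends only the changed blocks of the data set over the WAN, which is what makes the strategy viable over narrow pipes.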

Cold Storage

Most data in a data centre cools over time. It starts out “hot” and is accessed frequently, but over time it is needed less often and cools to the point where it is cold: accessed very infrequently and kept predominantly as archives for data retention or compliance needs. In many enterprises, cold data must be retained for seven to 10 years for regulatory reasons, even though it is expected to be accessed extremely rarely. Historically, this data has been saved to tape and stored off-site. While individual tapes are inexpensive, however, tape is a much-maligned technology in IT: it often requires expensive hardware systems that must be maintained, and the process of storing data for long-term retention on tape is labour-intensive and potentially error-prone. Worst of all, as tapes age and bit rot occurs, data restoration can fail entirely.

As a result of this data pattern, a new form of cloud storage is heating up. In 2012, Amazon introduced Amazon Glacier, which provides durable storage for data archiving and online backup for as little as one cent per GB per month. Glacier was designed specifically for data that is accessed infrequently and that can tolerate a retrieval time measured in hours, typically three to five, as opposed to the seconds or milliseconds typical of object storage.
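One common way to use Glacier is through S3 lifecycle rules. The sketch below (assuming the data already lives in S3, with hypothetical bucket and prefix names) moves cooling objects into Glacier automatically, then issues a restore request to make an archived object temporarily readable again:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the archive/ prefix to Glacier after 90 days
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical
    LifecycleConfiguration={
        "Rules": [{
            "ID": "cool-to-glacier",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)

# Later: request a temporary copy of an archived object.
# Standard retrievals complete in hours, not seconds.
s3.restore_object(
    Bucket="example-backup-bucket",
    Key="archive/2015/q1-report.pdf",
    RestoreRequest={"Days": 7},  # keep the restored copy readable for 7 days
)
```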

In March 2015, Google entered the cold storage market with a potentially disruptive solution of its own. Like Amazon Glacier, Google Nearline Storage is intended for infrequently accessed data and costs one cent per GB, less than half of Google's standard storage price. However, unlike Amazon Glacier, which delivers your data within a service level agreement (SLA) of three to five hours, Google promises a “time to first byte” of three seconds or less.
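A minimal sketch of using Nearline with the google-cloud-storage Python client, assuming a project and credentials are already configured and the bucket name is hypothetical:

```python
from google.cloud import storage

client = storage.Client()  # project/credentials assumed to be configured

# Create a bucket whose default storage class is Nearline
bucket = client.bucket("example-nearline-archive")  # hypothetical name
bucket.storage_class = "NEARLINE"
client.create_bucket(bucket, location="EU")

# Objects written here are billed at Nearline rates but are read
# through the same API as standard Cloud Storage objects
blob = bucket.blob("archives/2015/q1-report.pdf")
blob.upload_from_filename("q1-report.pdf")
```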

Cold storage can be a significantly more flexible and cost-effective tape replacement for many businesses that no longer want to use rotational media for archival and retention. At first glance, Google Nearline appears to beat Amazon Glacier, offering faster retrieval times at a much lower cost than other cloud-based object storage options. However, there is a catch: Google Nearline is still designed for cold storage. Google states that you should expect 4MB per second of throughput per TB of data stored, although this throughput scales linearly as storage consumption grows; 3TB of data, for example, would guarantee 12MB per second of throughput. So while Google Nearline is faster to first byte than Glacier, if you have a lot of data, Glacier may be the faster option overall. Both Glacier and Nearline can also incur additional costs as you access your data, depending on how frequently and how much of it you need to retrieve.
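To make that trade-off concrete, here is a rough back-of-the-envelope comparison using the figures quoted above; the 100MB-per-second WAN bandwidth assumed for the Glacier download is purely illustrative:

```python
# Rough comparison of full-restore times, using the figures quoted above.
# Assumes Nearline throughput of 4 MB/s per TB stored, and a Glacier job
# that takes ~4 hours to start, then runs at an assumed 100 MB/s WAN rate.

stored_tb = 3
data_mb = stored_tb * 1_000_000          # ~3 TB expressed in MB

nearline_mb_s = 4 * stored_tb            # 12 MB/s for 3 TB stored
nearline_hours = data_mb / nearline_mb_s / 3600

glacier_wait_hours = 4                   # typical 3-5 hour job latency
wan_mb_s = 100                           # assumption for illustration
glacier_hours = glacier_wait_hours + data_mb / wan_mb_s / 3600

print(f"Nearline full restore: ~{nearline_hours:.0f} hours")  # ~69 hours
print(f"Glacier full restore:  ~{glacier_hours:.0f} hours")   # ~12 hours
```

Under these assumptions, Glacier's hours-long job latency is quickly amortised on a multi-terabyte restore, which is why it can be the faster option overall for large data sets.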

Cold storage and object storage are both good options for backups and archives, and many hyper-scale cloud companies, including Amazon, Google, IBM and Microsoft, now offer these services. With such a multitude of options, organisations should consider their own needs before committing to a particular provider: each system's initial data-seeding requirements, cost per GB, time to first byte on retrieval, ease of use, frequency of access to the data and overall throughput expectations. The best approach for an individual business is to seek a continuity and backup provider that supports the widest range of options, giving it flexibility and a cost-efficient way to protect its data in this changing market.
