Cloud computing remains a hot trend in the IT world, having transformed the way businesses think about their IT infrastructure. The cloud often offers reduced costs and business benefits, including increased agility across the board.

For many organisations, it is important to provide reliable network and application services at all times, even outside business hours. However, the perception is that the cloud is not reliable and that downtime, upgrades and maintenance windows are the norm. The reality is that, by using the right tools and processes, businesses can make cloud uptime truly work for them.


Livin’ it uptime

IT organisations will be held accountable for maintaining high uptime levels for the many applications being moved to the cloud. When properly planned, a workload running in a cloud environment can be more resilient, and meet higher availability requirements, than an on-site alternative.


While some people might look at clouds gathering in the summer sky with pessimism, the same is true of the cloud itself. A certain scepticism remains around performance and uptime in the cloud, stemming from historical outages. No system is perfect; the cloud can fail too. Part of the problem is that when Amazon or another major cloud provider has a small outage, it is big news. The trade press, however, does not pick up the hundreds of outages happening every day in corporate data centres.

When cloud computing was still relatively new, there were significant performance concerns, particularly around database systems (the heart of most applications), as well as data transfer speeds and a lack of maturity in how to architect applications to scale on the cloud.

The industry has since gone to great lengths to advance and guarantee performance.

The leading cloud service providers offer high-powered, workload-specific, high-speed storage with guaranteed input/output operations per second (IOPS), and have made auto-scaling easy, among other improvements. These advancements make the cloud far more reliable than the UK’s summer weather.

The formula for achieving uptime in the cloud starts with understanding the expected and guaranteed uptime of each piece of infrastructure from your Cloud Service Provider (CSP). Service Level Agreements (SLAs) outline what organisations can expect from their cloud provider from a legal perspective, and they need to be analysed carefully to understand the details and the conditions under which an SLA claim is triggered.
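To see why those SLA details matter, it helps to translate a headline uptime percentage into a concrete downtime budget. The sketch below is illustrative only (the function name and the ~730-hour billing month are my own assumptions, not taken from any particular CSP’s SLA):

```python
def max_downtime_minutes(sla_percent, period_hours=730):
    """Allowed downtime, in minutes, implied by an uptime SLA
    over a billing period (default: a ~730-hour month)."""
    return period_hours * 60 * (1 - sla_percent / 100.0)

# A 99.9% monthly SLA still permits roughly 43.8 minutes of downtime,
# while 99.99% tightens that budget to about 4.4 minutes.
```

Running the numbers this way makes it clear that a “three nines” guarantee is far less demanding than it sounds — which is exactly why the fine print deserves scrutiny.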


Don’t believe me just watch

Making the cloud work for your business requires understanding the resources and options each CSP offers, along with their architectural recommendations and best practices.

For new applications, cloud architecture best practice suggests building distributed applications that can be deployed in clusters of disposable servers created from a ‘cookie cutter’ image. This model allows quick scaling to handle increased load, and reduces troubleshooting to terminating a misbehaving server and creating a new one from the original image.
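That ‘cookie cutter’ pattern can be sketched in a few lines. This is a toy model under stated assumptions — `launch_from_image` stands in for whatever API a given CSP actually exposes, and the image name is hypothetical — but it shows how troubleshooting collapses into replacing unhealthy servers with fresh copies of the image:

```python
import uuid

GOLDEN_IMAGE = "app-server-v1"  # hypothetical 'cookie cutter' image name

def launch_from_image(image):
    # Stand-in for a real cloud API call that boots a fresh server.
    return {"id": str(uuid.uuid4()), "image": image, "healthy": True}

def heal_fleet(fleet):
    """Replace every unhealthy server with an identical one launched
    from the golden image, keeping the cluster size constant."""
    return [s if s["healthy"] else launch_from_image(GOLDEN_IMAGE)
            for s in fleet]
```

Because every server is interchangeable, nothing on any individual machine is worth debugging in place — a design choice that is the real source of the resilience described above.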

The cloud provides IT with the opportunity to easily set up mirrors of production systems, in active-active or active-passive configurations. Some CSPs provide greater peace of mind with data replicas in different data centres or continents. Setting up database replication can be as simple as checking a box.


It’s a good idea to replicate essential data, whether to a private site or with a different cloud provider. With a well-planned deployment and good infrastructure, companies can efficiently load-balance their IT environment across multiple active, cloud-based sites: CSPs provide load balancers and tools to replicate entire workloads to a different region in a few clicks. Dynamic DNS can eliminate the load balancer itself as a single point of failure, so if one site goes down, users are seamlessly transferred to the next available connection.
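The failover decision at the heart of that setup is simple to sketch. In a real deployment the chosen address would be written into a dynamic DNS record by the provider’s tooling; in this toy version (the site names and the health-check callable are illustrative assumptions), only the core selection logic is shown:

```python
def pick_active_site(sites, is_healthy):
    """Return the first healthy site from an ordered preference list,
    or None if every site is down. A dynamic DNS update would then
    point users at the returned address."""
    for site in sites:
        if is_healthy(site):
            return site
    return None

# If the primary region fails its health check, traffic is steered
# to the next site in the preference list.
```

For example, with `sites = ["eu-west.example.com", "us-east.example.com"]` and the first site failing its check, users would be routed to the second.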

The cloud can also provide replication solutions for storage with efficient cloud-based backup. Organisations can have a dedicated link to a cloud-based data centre, where engineers are able to safely back up and restore from the cloud at a low cost per GB. For organisations bound by data retention policies, the cloud provides a retrievable data solution which can be reviewed as needed.

Cloud services make it relatively simple to create a versatile environment that helps organisations optimise uptime and resilience, offering greater recoverability and business continuity as the technology continues to grow and develop. With cloud, organisations can adopt a flexible growth plan capable of scaling with IT infrastructure data demands. With cloud, that ‘uptime funk’ can give it to you in the form of a reliable IT infrastructure solution with minimal downtime.

Gerardo Dada, Vice President of Product Marketing, SolarWinds

Gerardo Dada is Vice President of Product Marketing for SolarWinds’ systems, database and security business globally. Gerardo is a technologist who has been at the center of the Web, mobile, social and cloud revolutions at companies like Rackspace, Microsoft, Motorola, Vignette and Bazaarvoice. He has been involved with database technologies from dBase and BTrieve to SQL Server, NoSQL and DBaaS in the cloud.