The public cloud promises businesses of all sizes a great deal: speed, flexibility, scalability, cost reduction, easier maintenance and improved security.

However, getting it right is anything but trivial.

Not only does the starting point differ from one IT department to the next in terms of what is virtualised and what it is running on, but so does the potential endgame in terms of which cloud service provider (CSP), or combination thereof, is the best fit. Even within a single CSP, making a selection that matches demand with the best available supply is difficult, given the complexity of its catalogue.

As cloud costs rise in a dramatic and seemingly unmanageable fashion for most companies, we hear far too often that the main cloud service providers were easy to get into, yet difficult to get out of. And then there is the “what stays on-prem and what goes into the cloud?” discussion to be had, followed by “which catalogue selection best matches the resource requirements, ensuring the best performance at the lowest cost?”

The reality is that IT never truly becomes simpler, just different. A callous outsider might suggest that this is how everyone is kept in a job, but the truth is simply that demands and requirements change as the available options evolve. As the public cloud reshapes how much of technology is delivered, the way the IT department is structured, the skills it needs and the technologies and tools it requires all have to change in step.

For organisations that have moved all or part of their business to the cloud, it is all too easy to assume the cloud will both simplify day-to-day administration and deliver cost savings. But this would be a dangerous assumption to make. Because cloud service providers offer such a vast array of options, selecting the right resources for applications can be complex, time-consuming and costly.

Given that cost reduction and simplicity are among the main drivers for moving to the public cloud in the first place, this can be self-defeating. More to the point, it is an ongoing rather than a one-time challenge, as application demand evolves and the available options change. Addressing it requires continuous adjustment and optimisation at a scale beyond the reasonable capabilities of cloud ops teams armed with spreadsheets and basic tooling.

There are, of course, tools available to assist with basic differentiation between CSP offerings, and some that make basic recommendations. However, these are ultimately advisory and lack the deep analytics that such a complex problem requires. With the rapid growth in public cloud adoption, a new breed of vendors is addressing the limitations of current tools with technologies that automate the selection and management of cloud resources using analytics and machine learning. Organisations such as Densify can now analyse the precise requirements of every compute resource against the available cloud-based options to make the correct selection every time. In this way, under- and over-provisioning are avoided, delivering better-performing applications at the lowest possible cost.

Additionally, many organisations are increasingly using Infrastructure as Code (IaC) tools such as HashiCorp's Terraform to simplify and automate the provisioning and management of infrastructure. These tools provide a simple yet powerful way to request cloud infrastructure at the code level, but they require developers to hard-code resource requirements or instance selections. This is problematic for a number of reasons. First, developers generally have to guess at what resources to code in, as they simply do not know the appropriate values to use. Second, this often leads to increased cost, as there is a tendency to over-specify resource allocations to compensate for the lack of confidence in what an application will actually need. Third, it can lead to performance issues when the wrong instance type or family is specified. Finally, hard-coding the wrong resource requirements is restrictive: efforts by ops teams to correct the selection are thwarted, as the application reverts to the hard-coded values the next time the code is run.
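By way of illustration, a minimal Terraform sketch of the hard-coding problem might look like this (the AMI ID and instance type are hypothetical placeholders, not recommendations):

```hcl
# A hard-coded selection: the instance type is a guess made at write time,
# and every "terraform apply" reasserts it, undoing any correction made
# outside the code (for example, a resize by the ops team).
resource "aws_instance" "app_server" {
  ami           = "ami-0example1234567890" # hypothetical AMI ID
  instance_type = "m5.2xlarge"             # over-specified "to be safe"

  tags = {
    Name = "app-server"
  }
}
```

Any manual right-sizing performed in the cloud console is reverted the next time this configuration is applied.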

A better approach is to extend the Continuous Integration/Continuous Delivery (CI/CD) framework with Continuous Optimisation, replacing these hard-coded entries with infrastructure selections based on actual application demand, on a continuous basis. Optimisation-as-Code (OaC), which goes hand-in-hand with IaC, fully automates the selection, placement and ongoing optimisation of workloads in the cloud.
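A minimal sketch of what this pattern could look like in Terraform, assuming a hypothetical recommendation map that an external optimisation engine keeps updated (the variable and workload names are illustrative, not any specific vendor's API):

```hcl
# Optimisation-as-Code pattern: the instance type is looked up from data
# maintained by an optimisation engine, so re-running the code picks up
# the latest analysed selection rather than reverting to a fixed value.
variable "instance_recommendations" {
  description = "Per-workload instance types, kept current by the optimisation engine"
  type        = map(string)
  default = {
    app_server = "m5.large" # placeholder; updated by the engine, not by hand
  }
}

resource "aws_instance" "app_server" {
  ami           = "ami-0example1234567890" # hypothetical AMI ID
  instance_type = var.instance_recommendations["app_server"]

  tags = {
    Name = "app-server"
  }
}
```

The developer's code no longer encodes a guess; the selection lives in data that can be refreshed continuously as application demand changes.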

For some businesses, the goal is to run purely on public cloud. Many organisations, however, want or need to adopt a hybrid model to meet their business needs. Cloud management and automation tools must therefore fully support a hybrid environment, enabling users to make appropriate decisions about what to run where and driving automation and optimisation across multiple platforms.

Machine learning-driven cloud automation, migration and management technologies can truly unlock the potential of the cloud by delivering fundamental benefits: helping IT achieve the right application performance, reliability and agility, and rewarding the CFO with cost savings.
