Discover The Final Way to Cut Cloud Costs

Remember when they told you that you needed this shiny new thing called ‘cloud’? That it was silly to pay for more computing resources than you needed most of the time, just to cope with traffic and demand peaks (like Christmas) a couple of times a year?

In fairness, this was true: cloud is perfect for scaling up and scaling down, and it was (and is) a better way to even out your computing resources.

But they didn’t tell you that this was only one sort of scaling: scaling the application up and down to serve varying demand. If you want more people connecting to your service, doing more transactions and creating more data, you need a bigger database at the back end to support them.

‘Scale’ doesn’t necessarily mean petabytes of data; very few CIOs have applications requiring that much data at run time. What we’re talking about is the number of concurrent active users rather than storage.

If hundreds of thousands of people connect at the same time, you need a database capable of managing that—it’s a processing issue, not a storage one.

Ten years ago, and often still now to be honest, we worked around this by buying lots of little application servers to run those connections. But to make it work, they all connected back to one big Oracle- or DB2-style box somewhere in a data centre. Back then, that Oracle database was bigger than it needed to be because of Black Friday, or Singles Day in China. And it still is.

In other words, the problem we thought we’d cracked didn’t go away. Monolithic databases are still being bodged to make cloud business work.

Why do I say ‘bodged’? Because these engines were just not built for the cloud and the unique way it works with data. You have kept paying for proprietary software, running on specialist hardware.

Scale your database capability up and down as you scale your applications

This set-up is limiting the positive impact of cloud on your overall budget. It’s a cost you’ve kept having to carry, and it also means you’re not exploiting all the capability and innovation the cloud offers.

Cloud has been fantastic for one part of the scaling challenge, application scaling, but that has really only been possible by putting the biggest single database you can power up behind it.

The uncomfortable truth about using cloud to scale a business process is that you’ve always had to use traditional, monolithic SQL databases to make it fulfil that promise of smoothing out your IT peaks and troughs. This was pricey enough before; now it’s getting out of hand.

How can you solve this challenge and make business savings? One option is to adopt a cloud-native database that allows you to scale your database capability up and down in the same way you scale your applications. Instead of a single, giant machine and expensive proprietary licences, you could have three smaller ones, perhaps in different geographies. Then, if one suffers an outage, you fail over to the other two and keep up and running without any business (or customer) impact. It’s also cheaper: running a big transactional app in the cloud might mean renting a 32-, 48- or even 64-core server, but two 8-core virtual machines typically cost less than one 16-core machine, and the same logic applies as you scale up.

In fact, even if you don’t need to scale up immediately, it quickly becomes cheaper to have a number of small machines cooperating to carry your workload, rather than one big machine.
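
To make that concrete, here is a minimal sketch of what a multi-node set-up can look like from the application’s side, assuming a three-node, PostgreSQL-compatible cluster. The host names, port, database and credentials are placeholders, not a prescription:

```python
# Minimal sketch: connecting to a three-node, PostgreSQL-compatible cluster.
# Host names, port and credentials are placeholders. libpq (used by psycopg2)
# tries each listed host in turn, so the application keeps working if one
# node is down.
import psycopg2

conn = psycopg2.connect(
    host="db-node-1,db-node-2,db-node-3",  # e.g. one node per geography
    port="5433,5433,5433",                 # 5433 for YugabyteDB YSQL; 5432 for vanilla PostgreSQL
    dbname="shop",
    user="app",
    password="secret",
    target_session_attrs="any",            # accept whichever node answers first
    connect_timeout=3,
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
```

Because the driver is handed every node, losing one machine does not take the application down with it.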

Why isn’t this the norm, though?

Because we left the cloud revolution unfinished. We have done so much: moving capital expenditure into operating expense, lifting and shifting workloads (then finally redesigning and optimising them). Today, the application layer of most organisations is much healthier than it was, as we rapidly moved from virtual machines to Docker containers and now, in many instances, Kubernetes pods. All this has made building applications and distributing them resiliently around the world more straightforward and cost effective.

However, database evolution has lagged behind and is the final step we need to take to complete the cloud revolution. It’s a step we might call enabling horizontal, rather than vertical, scaling.

Transactional consistency and the ability to horizontally ‘scale out’

A good example of this is a major US retailer we worked with. Though a very successful bricks-and-mortar brand, the company was moving aggressively into e-retail even before Covid accelerated the importance of a strong online store. It was doing all the right things with microservices and modern application development techniques and technologies, but the database side posed a problem.

Even the biggest database they could build, on the biggest virtual machine they could rent in the cloud, could not handle their load.

To work around this, its engineers had to resort to messy fixes like sharding, which quickly got complicated and raised the spectre of transactional inconsistency. That meant one shopper might be offered an item the system had just placed in another customer’s shopping basket elsewhere.

It got very messy. Their solution was to adopt a cloud-native database that gave them both transactional consistency and the ability to scale out horizontally. And it worked! The company now benefits from sub-10-millisecond performance and greater operational simplicity and efficiency, and, crucially, it can easily meet massive e-commerce peaks.

To complete the cloud revolution and achieve the horizontal scaling you want, it’s worth considering distributing your business data and sharing the load among lots of machines rather than pouring all your money and hopes into one database Leviathan. The answer is a scalable, resilient, open-source, PostgreSQL-compatible distributed database (such as YugabyteDB).
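
To show what transactional consistency buys you in practice, here is a small, illustrative sketch of the shopping-basket scenario above, assuming a PostgreSQL-compatible database; the connection details and the stock table are hypothetical, made up for this example:

```python
# Illustrative only: a transactionally consistent "reserve one item" step,
# assuming a PostgreSQL-compatible database and a hypothetical stock table
# (table, column and connection names are placeholders).
import psycopg2

conn = psycopg2.connect(host="db-node-1", port=5433,
                        dbname="shop", user="app", password="secret")

def reserve_item(item_id: int) -> bool:
    """Atomically claim one unit of stock; returns False if none is left."""
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        cur.execute(
            "UPDATE stock SET quantity = quantity - 1 "
            "WHERE item_id = %s AND quantity > 0",
            (item_id,),
        )
        return cur.rowcount == 1  # 0 rows updated means the item was already gone

print("reserved" if reserve_item(42) else "out of stock")
```

In a database that keeps transactions consistent across nodes, the availability check and the decrement happen atomically, so two shoppers can never both claim the last item, whichever node they happen to be connected to.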

This kind of technology is available now and easy to install. It allows you to finally achieve the time and cost savings promised by the cloud all those years ago.


Martin Gaffney, Vice President, EMEA, Yugabyte, the leading PostgreSQL-compatible distributed database company, was appointed in early 2021 to lead and grow the EMEA operation. Previously, Gaffney had been involved in building successful EMEA operations at high-growth companies across the technology sector. He founded the EMEA operation of ThoughtSpot Inc in 2015 and helped grow the company to a circa $2 billion (USD) valuation during his tenure.

More recently he was Regional Sales Director, EMEA, at H2O.ai and, earlier in his career, was an executive at the EMEA operations of Sequent Computer Systems, Tivoli Systems and Netezza Corporation, all three of which were acquired by IBM. Martin was also co-founder of Volantis Systems, now part of Pegasystems. During his career, he was runner-up in the Ernst & Young Entrepreneur of the Year Awards.
