How To Avoid The Expensive Route To Becoming A Cloud-Native Enterprise

When big business first moved to the cloud, the original impetus was modest: reduce costs. Slowly but surely, cloud is now seen more and more at board level as a way to increase competitiveness and business effectiveness, becoming a platform for doing more: not just doing the same things more cheaply, but doing things you couldn't do before. And, of course, that also means learning new ways of working to exploit the opportunities in front of you.

That has sparked the creation of what many CIOs now know as the data layer: the work that needs to be done to ensure that data, which worked fine on big monolithic SQL databases in the old datacentre, makes a successful transition to this new promised land. That transition typically starts with a move to virtualisation, then containers, and eventually adoption of a microservices deployment style.

NoSQL was turned to, but unfortunately proved not up to the task

In my experience, retail organisations tended to be at the forefront of all this. Why? Because they're the sort of people who need to do lots of different things for lots and lots of customers in a very competitive industry. As a result, many embraced a cloud-native development approach very early, investing big in it because they tended to be, of course, organisations with huge datacentres and therefore the most to gain from moving to the cloud and digital ways of working.

But what they found is that the problem is not the applications using data in the cloud, but the database supplying it. While the world of application development and delivery has come on leaps and bounds, the world of the database has been held back, because the monolithic relational databases that were running the whole business just aren't very good at working in the cloud (to make something of an understatement!).

Hence the emergence of the idea of a data layer that can deal with distributed transactions, and that can also support other kinds of workloads. I have worked with two separate organisations that had faced just these problems, but for (interestingly) different reasons, and who both found a distributed SQL database to be the best way out of them. (Both happen to be large North American retailers, too, but with very different business models and approaches to the market.)

The first example is where NoSQL had been turned to, but unfortunately proved not up to the task in hand, resulting in a lot of complexity, corruption of data, and dissatisfaction among line-of-business users. This is a brand that operates a huge e-commerce site featuring thousands of sellers and millions of products, all backed under the hood by a wide array of applications for handling large volumes of data and managing how products are listed and presented, giving the user a nicely personalised, seamless buying experience.

That ‘wide array of applications’ had been instantiated as a clutch of microservices powered by a cloud-native technology stack exploiting several well-known open-source data and NoSQL systems, all deployed in a multi-datacentre topology using a multi-cloud deployment strategy. So far, so good: this was an excellent internal team with the skills, experience and technology resources to deliver cloud-native applications rapidly. But a major pain point was hampering all its good work here: inconsistency in the product catalogue.

What does that mean? Essentially, a lack of confidence that what happened at one end of a sale would be properly reflected at the other: a big problem when you have high volumes of updates arriving, all of which need to be reflected consistently, with both sides of the ledger always balancing! The cost of inconsistency was considerable and soon, in the eyes of the business, unacceptable, in terms of a) the danger of user dissatisfaction, b) the risk of mismanaged orders, and c) the mounting cost of fixing all this while live (a potential issue for end customers and partner sellers alike, remember). The ultimate cause of all this was all too clear: the NoSQL database being used was just not robust enough for the enterprise level of transaction consistency needed at this scale.

Instead, the team identified a need for a totally different database approach: one that could deliver strong consistency at scale, offer high availability and strong resilience, and support its multi-datacentre/multi-cloud approach. I'm delighted to say that my company's technology was the eventual winner here, but it's more important to see why than for us to do any point-scoring: distributed SQL was the only way the developers could see to eradicate the inconsistency issue plaguing their carefully built cloud business platform. (Not to get too technical, but this was determined by two proofs of concept: one to make sure a unique product catalogue identifier approach worked, and another for a mapping service that properly supported lookups across various key identifiers in different services and systems.)

As a result, that consistency bugbear has been totally quashed, and this user is now enjoying very high-volume ACID (atomicity, consistency, isolation, durability) transactional consistency on multi-table transactions across a multi-region deployment, as well as the elimination of product catalogue inconsistencies and the end of any remediation work to restore consistency.
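Not to labour the point, but the all-or-nothing behaviour at the heart of this fix is easy to sketch. YugabyteDB speaks the PostgreSQL protocol, but for a self-contained illustration this sketch uses Python's built-in sqlite3, and the table and column names (`catalogue`, `listings`) are hypothetical, not the retailer's actual schema:

```python
import sqlite3

# In-memory database standing in for the product-catalogue store; the
# commit-or-rollback semantics, not the engine, are the point here.
conn = sqlite3.connect(":memory:")
with conn:
    conn.execute("CREATE TABLE catalogue (product_id TEXT PRIMARY KEY, title TEXT)")
    conn.execute("CREATE TABLE listings (product_id TEXT, seller TEXT, price REAL, "
                 "PRIMARY KEY (product_id, seller))")

def update_product(conn, product_id, title, seller, price):
    """Touch both tables in one ACID transaction: either both rows land, or neither."""
    try:
        with conn:  # begins a transaction; commits on success, rolls back on error
            conn.execute("INSERT OR REPLACE INTO catalogue VALUES (?, ?)",
                         (product_id, title))
            conn.execute("INSERT INTO listings VALUES (?, ?, ?)",
                         (product_id, seller, price))
    except sqlite3.Error:
        pass  # rolled back: neither table sees a partial update

update_product(conn, "sku-123", "Blue Widget", "acme", 9.99)
```

If a second statement in that transaction fails, everything done before it is rolled back too, which is exactly the "both sides of the ledger" guarantee the team had been missing.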

Sub-10 millisecond performance, and no need for any kind of expensive database transaction management

I mentioned above that there were two examples of customers I’ve seen faced with the cloud business database problem: let’s now talk about the second. In this instance, despite a lot of investment into cloud-native digital business infrastructure, the team couldn’t escape single-site data layer implementations without a lot of sharding or replication that introduced complexity, cost and what the team leader expressed with some frustration to me in an email as “brittleness”.

The context here is that, again, the company had made the commitment to move to a microservices architecture to make it easier to deliver incremental improvements without risk of disruption to its existing heritage systems and processes. But it felt it couldn't fully pull the trigger and move to its new model until it was sure it had the right database ready to plug in: one capable of (again) supporting distributed ACID transactions, scalability, multi-region active-active deployments, multi-API support, and automatic data sharding. I'm delighted to say that our software met these requirements. As a result, the company has turned on its cloud-native back end and, even better, is seeing sub-10 millisecond performance, with no need at all for any kind of expensive and fiddly database transaction management.
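One requirement on that list, automatic data sharding, is worth unpacking: the database hashes each row's key to choose a shard, so the application never carries placement logic of its own. Here is a minimal conceptual sketch of hash partitioning, with a made-up shard count, and emphatically not the product's actual algorithm:

```python
import hashlib

NUM_SHARDS = 16  # hypothetical shard count for the sketch

def shard_for(key: str) -> int:
    """Hash-partition a row key onto a shard, deterministically, so every
    node in the cluster agrees on where the row lives."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:2], "big") % NUM_SHARDS

# Rows spread across shards with no manual sharding code in the application:
placement = {k: shard_for(k) for k in ("order-001", "order-002", "order-003")}
```

Contrast this with the hand-rolled sharding and replication the second retailer was trying to escape, where the application itself carried this logic and every topology change meant brittle rework.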

What's the takeaway from these two different, but in many ways parallel, data layer problem experiences? Both were big, superbly competent retailers with big datacentres and all the rest of it, but both recognised in their separate ways that with the advent of cloud, not only would new ways of working with data be needed, so would very careful evaluation of what these new technologies could do for them in the cold light of the transactional, data layer ‘day’. But perhaps more critically: you just can't run any kind of risk of losing database transactional consistency as part of a move to cloud. Unfortunately, the first generation of cloud data products just couldn't deliver that; luckily, all that has finally started to change.

Martin Gaffney, Vice President, EMEA, Yugabyte, the leader in open source distributed SQL databases, was appointed in early 2021 to lead and grow Yugabyte's EMEA operation. Previously, Gaffney had been involved in building successful EMEA operations at high-growth companies across the technology sector. He founded the EMEA operation of ThoughtSpot Inc in 2015 and helped grow the company to a circa $2 billion (USD) valuation during his tenure.

More recently he was Regional Sales Director, EMEA, at a leading AI technology company, and earlier in his career was an executive at the EMEA operations of Sequent Computer Systems, Tivoli Systems and Netezza Corporation, all three of which were acquired by IBM. Martin was also co-founder of Volantis Systems, now part of Pegasystems. During his career, he was awarded runner-up in the Ernst & Young Entrepreneur of the Year Awards.
