Benchmarking Cloud Server Performance and Comparing IaaS Providers

By Robert Jenkins, Chief Executive Officer of CloudSigma

It’s 2013 and there are plenty of IaaS cloud server / cloud hosting providers to choose from in the market. The logical approach to provider selection is to evaluate potential suppliers on performance, service and price.

Price is probably the easiest metric to compare providers on, but how do you know you’re really comparing apples with apples? In terms of each provider’s capabilities and the service aspects they subscribe to, you can get an excellent high-level picture from comparison tools such as the one offered by ComparetheCloud.net.

So once you’ve found a selection of providers that provide the type of product and the level of service you need, how do you then determine which of these is offering the best value in terms of price and performance?

Essentially you now have two avenues available to dig deeper: you can either sign up for trials with your short-list of providers and test the performance of your own applications on each cloud yourself, or you can refer to benchmarking or performance reports generated by a third party. (Or indeed, you can do a combination of both.)

At this point, let me deliver you an early concluding statement to this blog:

There is absolutely no substitute for testing a service yourself with your own applications against your own performance criteria!

Whilst testing numerous clouds for your best fit may be the acid test, in the real world it’s also very time- and resource-intensive. So if there is a way to narrow down your search to just two or three providers (who you can then test with), it makes sense to explore that option first. And that’s where third-party benchmarking and performance reports can play their part.


So what is benchmarking good for, and what do you need to be wary of?

Benchmarking

Benchmarking is primarily undertaken to establish a common, independent reference point between multiple options for fair comparison. Benchmarking methodology and granularity have improved dramatically in the last few years, but challenges remain. These include many built-in biases, from selection bias to location bias and more. It really is a challenge.

Benchmarking is as much about determining variability in performance levels as it is about absolute performance levels.

Benchmarking should factor in price. You want the maximum actual computing performance possible for a given amount of money (or vice versa), not just simplistic quantity ‘indicators’ such as the amount of resources your subscription level provides. 3GHz of CPU from provider A is not the same as 3GHz of CPU from provider B in delivered workload terms.
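
As a minimal illustration of price-adjusted comparison (the provider names, scores and prices below are invented, not real measurements), dividing a measured benchmark score by the hourly price puts every provider on the same performance-per-unit-of-money footing:

```python
# Hypothetical benchmark scores and hourly prices -- illustrative only,
# not real measurements of any provider.
providers = {
    "Provider A": {"benchmark_score": 4200, "price_per_hour": 0.12},
    "Provider B": {"benchmark_score": 3800, "price_per_hour": 0.08},
}

# Price-adjusted performance: score delivered per unit of money spent.
for name, data in providers.items():
    value = data["benchmark_score"] / data["price_per_hour"]
    print(f"{name}: {value:,.0f} benchmark points per dollar/hour")

# Provider B wins here despite the lower raw score, because it delivers
# more measured performance for each dollar spent.
```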

The best benchmarking will take the server instance through a full life cycle (“the whole process”), including creation of compute and storage each time, making each test iteration an independent sample; performance is about consistency as much as it is about actual level. It’s a bit like ordering a taxi and not knowing what will show up on any given day – a Lada or an S-Class Mercedes. Full life cycle testing also ensures your cloud provider can’t “cheat the system”.
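
A minimal sketch of what one full life-cycle iteration might look like, assuming hypothetical create_server, run_benchmark and destroy_server helpers that you would implement against each provider’s own API or SDK:

```python
def full_lifecycle_sample(provider_api):
    """One independent benchmark sample: create, test, destroy.

    provider_api is assumed to expose create_server(), run_benchmark()
    and destroy_server() -- hypothetical helpers written against the
    provider's actual API; the resource sizes are placeholders.
    """
    server = provider_api.create_server(cpu_ghz=3.0, ram_gb=4, disk_gb=50)
    try:
        score = provider_api.run_benchmark(server)  # e.g. a CPU or disk test
    finally:
        provider_api.destroy_server(server)         # always clean up
    return score

# Each iteration lands on (potentially) different physical hardware, so
# repeating this over days yields independent samples rather than
# re-testing one lucky (or unlucky) placement.
```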

Benchmarking needs to be done over time in a repeated fashion. Why? Because a sample of one isn’t representative! You need to keep sampling over time, particularly when you are talking about public clouds, where your location within the infrastructure will change with each sample test.
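
To illustrate why repeated sampling matters, here is a short sketch (the sample scores are invented) summarising a series of measurements by their mean and coefficient of variation – the latter captures the consistency that a single sample can never reveal:

```python
import statistics

# Invented benchmark scores from repeated full life-cycle runs over a week.
samples = [4100, 4350, 3200, 4280, 4190, 3950, 4400]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
cv = stdev / mean  # coefficient of variation: lower means more consistent

print(f"mean score: {mean:.0f}")
print(f"coefficient of variation: {cv:.1%}")
# A provider with a slightly lower mean but a much lower CV may be the
# better choice for latency-sensitive or predictability-critical workloads.
```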

Furthermore, benchmarking needs to assess compute, storage AND network performance to get the full picture, not just one of them in isolation.
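
As a rough sketch of covering all three dimensions in one pass – assuming the widely used sysbench, fio and iperf3 tools are installed on the test server, and that you have your own iperf3 endpoint to test against (the flags shown are typical but may need adjusting for your versions):

```python
import subprocess

# Placeholder network test target -- replace with your own iperf3 server.
IPERF_SERVER = "iperf.example.com"

tests = {
    # Compute: prime-number benchmark built into sysbench.
    "compute": ["sysbench", "cpu", "--cpu-max-prime=20000", "run"],
    # Storage: 4k random reads against a 1 GiB test file via fio.
    "storage": ["fio", "--name=randread", "--rw=randread", "--bs=4k",
                "--size=1G", "--runtime=60", "--time_based"],
    # Network: throughput to a remote iperf3 server for 30 seconds.
    "network": ["iperf3", "-c", IPERF_SERVER, "-t", "30"],
}

for name, cmd in tests.items():
    print(f"--- {name} ---")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
```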

Other specific use-case tests are important too: performance for different operating systems, CPU options, database workloads, web servers, video encoding frame rates etc. Look for what’s relevant to you.

Also important is where and how the benchmark tests are conducted – from inside the data centre, at the network level, or at the end-user browser level. This can greatly affect results for things like external networking throughput and responsiveness.

The impact of geography in skewing network performance results should not be underestimated.

Beware of the nonsensical reports that get produced, such as ‘global views’ of the best-performing clouds, which only make sense if you serve the globe from a single location!

A good example of a company leading the revolution in benchmarking around cloud providers is Cloud Spectator. By implementing a strict testing region and combining this with specific use case based tests, the results they are producing are extremely relevant and insightful for customers. On the networking side, Cedexis have built an impressive data gathering network that can show real performance levels of public clouds as measured by billions of data points on load speed, error rates and more taken from within end user browsers. Such results are directly relevant to anyone running public facing web based services.

A sample report from Cloud Spectator on CloudSigma versus selected IaaS competitors

So what does benchmarking itself not do so well? A good example is that it doesn’t test ultimate capacity or scalability – particularly burst scalability in bandwidth utilisation, or the provider’s ability to rapidly scale up storage. Some things are too hard to test in simulated scenarios with the required degree of reliability and consistency.

Benchmarking doesn’t take into account the price differences between providers, which are the other side of the coin. A true comparison needs to look at price-adjusted performance levels. Benchmarking also struggles with providers that allow computing resources to be scaled independently, because the tests need an exact server profile match between all providers. If one provider only offers resource bundles, then each measured provider has to match the same bundle specification, whether all that resource is needed or not. This masks the over-provisioning that more restrictive platforms impose on their customers.
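
A small worked example (all figures invented) of how matching a fixed bundle specification can hide over-provisioning in a price comparison:

```python
# Invented figures -- illustrative only.
# Your workload actually needs 2.0 GHz of CPU and 8 GB of RAM.
needed = {"cpu_ghz": 2.0, "ram_gb": 8}

# A bundle-only provider: the smallest bundle covering 8 GB of RAM
# also includes 4.0 GHz of CPU, at a fixed hourly price.
bundle = {"cpu_ghz": 4.0, "ram_gb": 8, "price_per_hour": 0.20}

# A flexible provider pricing resources independently (hypothetical unit prices).
flexible_price = needed["cpu_ghz"] * 0.03 + needed["ram_gb"] * 0.01  # $/hour

print(f"bundle provider:   ${bundle['price_per_hour']:.2f}/hour "
      f"(includes {bundle['cpu_ghz'] - needed['cpu_ghz']} GHz you do not need)")
print(f"flexible provider: ${flexible_price:.2f}/hour for exactly what you need")
# A benchmark that forces both providers onto the 4 GHz / 8 GB bundle spec
# hides this difference entirely.
```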

Conclusion

The right type of benchmarking report or service can provide valuable insight into general performance levels. You should focus on the performance criteria and metrics that are important and relevant to you and your application. Benchmarking reports should never replace actual testing, but they represent an opportunity to scientifically filter down your list of providers to find those which are in the right ballpark for what you need. Then it’s a case of testing each provider in real-world conditions – test, test and test again!
