The Network Risks for Online Gaming

By Frank Puranik, Product Director for iTrinegy

I’ve spent decades looking at networks and their impact on application delivery across many industries, and none more so than online gaming. The global casino/gaming industry was reported to be worth over $125 billion last year, and with online gaming taking an increasingly significant piece of the pie, operators must keep their users happy if they are to continue to grow revenue.

Networks are an essential part of the online gaming world, and with all the different hosting offerings available (e.g. cloud and virtualised data centre choices) it can be tricky to deliver a good online game experience to users. Doing so requires three things:

1) Software that’s nice to use, efficient and looks good
2) Servers that have enough capacity for peak demand, without going slow
3) Responsive network delivery to the users’ preferred device

Technically we’d put all of these factors together and call them the user’s ‘Quality of Experience’ (QoE), with the servers and networks providing the essential delivery of the game, which we’d measure as Quality of Service (QoS). The server infrastructure can be strictly controlled: CPU power provisioning, virtualisation level, users per virtual server and so on. The network, on the other hand, poses a real challenge because we cannot control it: in online gaming we simply have to take what’s out there (a mix of home, corporate and mobile networks), and you know from personal experience how variable and poor these networks can be at times.

For understandable commercial reasons, the gaming industry hasn’t helped itself technically: data centres have been set up in offshore locations for tax purposes, and those locations have often had poor network access. This is changing thanks to new (UK) legislation that taxes bets where the consumer is located, rather than where the servers or the gaming corporations are based. That opens up the prospect of moving data centres to ‘better’ locations, but the definition of ‘better’ may itself be cost-driven, freeing gaming companies to look at sites such as Iceland – with its abundant geothermal energy and therefore low energy prices – while once again not being primarily focused on network access.

The reason for this neglect is the constant reporting that bandwidth is cheap and getting cheaper, which leads the non-savvy to assume that the network can fundamentally be ignored because, as with the server infrastructure, if you run out you just add more… This is absolutely not true! Bandwidth is not abundant to all locations, it is not inexpensive to all locations and, worse still, it may come with large amounts of latency, packet loss and other network-related issues, which to the player appear as the dreaded LAG!

LAG is not to be trifled with. Apart from an uninspiring, poorly crafted game, LAG is the single most prevalent reason that a user will leave a game (either temporarily or permanently). Even if you have purchased and control the best MPLS/private network into your data centre, there are still issues surrounding the final delivery to the end user – the last mile (ADSL, the corporate network and its QoS policies, mobile cellular networks, poor Wi-Fi, etc.) – and, in addition, the actual distance to the data centre (e.g. Iceland). All of these create delay and network latency, which translates directly into LAG.
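
To make this concrete, here is a minimal sketch (not an iTrinegy product) of how the raw ingredients of LAG – round-trip latency and packet loss – can be measured from a player’s machine. The host, port and the assumption of a UDP echo service on the server side are all illustrative:

```python
# Minimal sketch: a UDP round-trip probe that estimates latency and
# packet loss to a game server -- the raw ingredients of perceived LAG.
# Assumes a hypothetical UDP echo service at the host/port below.
import socket
import time

def probe(host: str, port: int, count: int = 50, timeout: float = 1.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts, lost = [], 0
    for seq in range(count):
        start = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), (host, port))
        try:
            sock.recvfrom(64)                           # wait for the echo
            rtts.append((time.monotonic() - start) * 1000.0)
        except socket.timeout:
            lost += 1                                   # no reply: count as loss
    sock.close()
    if rtts:
        print(f"avg RTT {sum(rtts)/len(rtts):.1f} ms, "
              f"worst {max(rtts):.1f} ms, loss {100*lost/count:.0f}%")

probe("game.example.com", 9999)  # placeholder echo endpoint
```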

So what can we do?

Better Game Design, Development and Testing

Firstly, when building the game, the design must tolerate these (unpredictable) real-world networks and cope with the vagaries of the last mile. This means we ideally need to have those networks available to us throughout the development and testing process of the game.

The problem here is that developers often have access only to fixed and typically excellent networks (great WANs or LANs), because the development environment is highly controlled, and so the game is never written to tolerate poor networks. The test environment is usually quite similar, perhaps with a few different network scenarios available.

Network Emulation has the answer here, providing the ability to create different networks on demand for all the different last-mile network types (such as mobile, Wi-Fi or ADSL, with high latency, packet loss and other characteristics) as well as being able to simulate the MPLS/WAN into the data centre(s).
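
As a rough illustration of the idea (dedicated emulators such as iTrinegy’s offer far more control), the standard tc/netem facility on a Linux test gateway can impose last-mile characteristics on a network interface. The interface name and the profile figures below are assumptions for illustration only:

```python
# Minimal sketch, not iTrinegy's emulator: use Linux tc/netem on a test
# gateway to impose last-mile delay, jitter and loss on an interface, so
# the game is tested against realistic network conditions. Needs root.
import subprocess

PROFILES = {
    # name: (delay, jitter, loss) -- rough, illustrative figures
    "3g_mobile": ("300ms", "100ms", "2%"),
    "adsl":      ("40ms",  "10ms",  "0.5%"),
    "poor_wifi": ("20ms",  "30ms",  "5%"),
}

def apply_profile(iface: str, name: str) -> None:
    delay, jitter, loss = PROFILES[name]
    # Clear any existing qdisc (ignore the error if none is set).
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"],
                   stderr=subprocess.DEVNULL)
    # Add a netem discipline with the chosen delay, jitter and loss.
    subprocess.run(["tc", "qdisc", "add", "dev", iface, "root", "netem",
                    "delay", delay, jitter, "loss", loss], check=True)

apply_profile("eth0", "3g_mobile")  # testers now play across an emulated 3G link
```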

Our company, iTrinegy, has a proven role to play here. In one recent example we emulated real-world network conditions for a graphics-intensive multi-player online game: whilst all the testers were physically sitting next to each other, in the virtual world we created network conditions that put them all over the globe, connected through all sorts of networks and networking technologies.

Monitoring and Measuring the Network Experience

Secondly, we must, where possible, measure and control those networks to ensure they do not become overloaded, as running out of bandwidth invariably creates additional latency and LAG. Major events need special consideration, as we have to cope with peak load, not just normal load.

To do this you need to be measuring game performance across the network on a constant, ongoing basis, looking for increasing delays, loss of game data (causing communications to be repeated) and bandwidth exhaustion (which leads to queuing). All of these ultimately translate into game LAG.
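
A minimal sketch of that “constant, ongoing” idea follows. It keeps a rolling window of latency samples and flags when the recent average drifts well above a long-term baseline; sample_rtt() is a hypothetical stand-in for whatever probe or profiling feed is actually in place:

```python
# Minimal sketch: rolling-window latency monitoring with a simple alert
# when the recent average drifts well above the baseline -- an early
# signature of developing LAG. sample_rtt() is a placeholder probe.
import random
import statistics
import time
from collections import deque

def sample_rtt() -> float:
    # Placeholder: substitute a real probe (e.g. the UDP example above).
    return 45.0 + random.gauss(0, 5)

window = deque(maxlen=60)        # last 60 samples (e.g. one per second)
baseline = None

while True:                      # runs indefinitely, as monitoring should
    window.append(sample_rtt())
    if len(window) == window.maxlen:
        avg = statistics.fmean(window)
        if baseline is None:
            baseline = avg       # first full window sets the baseline
        elif avg > 1.5 * baseline:
            print(f"ALERT: avg RTT {avg:.1f} ms vs baseline {baseline:.1f} ms")
    time.sleep(1.0)
```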

We also need to be looking at all of these for future capacity and ongoing game performance over the networks – good old-fashioned capacity planning, but this time with a strong network bias. In particular, we need to watch the ‘headroom’ available for the big event, as well as trends in increasing network usage (likely caused by more players). The data for this is made available by an Application Aware Network Profiling tool.
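
As a toy example of that network-biased capacity planning (the traffic figures are invented, and Python 3.10+ is assumed for statistics.linear_regression), a simple trend fit shows how quickly the headroom is being consumed:

```python
# Minimal sketch: fit a linear trend to daily peak bandwidth and estimate
# when headroom against the link capacity runs out. Figures are invented.
import statistics

daily_peak_mbps = [310, 322, 335, 341, 358, 366, 380]   # e.g. last 7 days
link_capacity_mbps = 500

xs = range(len(daily_peak_mbps))
slope = statistics.linear_regression(xs, daily_peak_mbps).slope
headroom = link_capacity_mbps - daily_peak_mbps[-1]

print(f"headroom today: {headroom} Mbps "
      f"({100 * headroom / link_capacity_mbps:.0f}% of capacity)")
if slope > 0:
    print(f"at ~{slope:.1f} Mbps/day growth, capacity is reached in "
          f"~{headroom / slope:.0f} days -- upgrade before the next big event")
```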

Data Centre and Cloud Moves (Transformation)

Lastly, if moving a data centre is on the radar, then we really need to re-test (and re-certify) the game to ensure that it will play as well as, or better than, before from the new data centre, over the new networks to that location. The game will have to cope with a network that may have more latency, either because of the data centre’s physical location relative to users or simply because the networks to that location are not quite as good. This risk can be removed by emulating the new network in advance and testing gameplay over the emulated network to ensure that the game still performs at least as well as in the original infrastructure. (It should be noted that this is the standard unwritten data centre SLA: the application should perform at least as well after any transformation as it did before. The big joke is that most don’t know how it performed before the transformation – hence, measure first!) Either way, successful measuring and emulating will de-risk any kind of data centre or cloud move.
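
A minimal sketch of that “measure first” SLA check follows: response times profiled in the current infrastructure compared against times measured through an emulation of the new location’s network. The sample figures are invented; in practice both data sets would come from a network profiling tool:

```python
# Minimal sketch: compare response-time distributions before and after a
# (emulated) data centre move to check the unwritten SLA. Data invented.
import statistics

before_ms = [48, 52, 50, 47, 55, 49, 51, 53]   # current data centre
after_ms  = [61, 66, 59, 63, 70, 64, 62, 68]   # emulated new location

# Crude p95 for a small sample: value at the 95th-percentile rank.
b_p95 = sorted(before_ms)[int(0.95 * len(before_ms))]
a_p95 = sorted(after_ms)[int(0.95 * len(after_ms))]

print(f"median: {statistics.median(before_ms)} -> {statistics.median(after_ms)} ms")
print(f"p95:    {b_p95} -> {a_p95} ms")
if statistics.median(after_ms) > statistics.median(before_ms):
    print("new location is slower -- the unwritten SLA would be breached")
```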

In summary this implies the following process:

1) Measurement of the game’s performance today (through Network Profiling)

2) Accurate prediction of performance in the new environment (via Network Emulation), and

3) Network Profiling in the new network to verify that all is as expected.

Don’t think these points don’t apply if you are outsourcing or moving to a cloud provider. Outsourcing your infrastructure and network does not outsource your responsibility to provide a good gaming experience – in the end the buck stops with you, not the outsourcer.

All of the technical points mentioned above are fundamentally focused on providing a good quality of service, which translates into winning and keeping more customers by giving them the best possible experience. Failing to do so risks your customer retention and therefore your bottom line.

Frank Puranik is Product Director for iTrinegy, a company specialising in the performance of applications across networks, with products and services around Network Profiling, Network Emulation and Application Performance Management for all types of networks.
