It’s no wonder that an ever-growing number of organisations are investing in virtualised, automated data centres and private clouds, given the greater agility and stronger security they offer compared to traditional client-server architectures or public clouds.

While the storage and server elements of the private cloud are now largely automated, the network is still likely to be provisioned and configured manually. To take full advantage of the benefits a private cloud can offer, organisations need to prioritise investing in scalable, automated network control, so that deployments are not delayed by legacy processes.


There is a series of phases that any IT department will go through as its private cloud infrastructure matures. The first of these is the pilot stage, in which the IT team uses non-critical applications and workloads to test the cloud’s infrastructure and design. This allows the department to gain experience and make any necessary changes before moving on to business-critical workloads.

Once confident in the cloud’s design, the team can move to the second phase, “production”, when one or a small number of business-critical workloads are migrated onto the cloud. This gives the IT department a further opportunity to make any necessary changes to the private cloud before fully rolling out the initiative.

The final phase sees the large-scale roll-out of the private cloud. The transition will involve moving to geographically dispersed private cloud environments in multiple data centres, and may also involve multi-vendor cloud platforms.

Stumbling blocks

No matter what the size or scope of a project, if the deployment of the private cloud is not in sync throughout the entire process, the operation can prove too risky for a business.


The disparate groups that deal with the private cloud present a major challenge to any project. We commonly see the server team handling the virtualisation component, while all network aspects are tackled by a completely different team. One possible result is a lack of visibility for the network team into virtual machine (VM) resources as they are created and destroyed, making it difficult to track and manage sudden spikes in demand.

There is little point in networking teams even trying to comply with audit and security policies if they don’t have this visibility, as they won’t have access to any accurate information regarding which DNS records and IP addresses are assigned to which VMs at any given time. There are numerous factors, including applications, locations and users, which need to be tracked for VMs as well as networks, DNS zones, and IP addresses.

Server admins may well have access to this information, but it’s likely the networking teams will not. And by still using manual methods to react to the creation and deletion of VMs, their responses will often be slow. 
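The visibility gap described above comes down to a missing, shared record of which VM holds which address at any moment. A minimal sketch of such an inventory might look like the following; the record fields, names and domain are illustrative assumptions, not any particular DDI product’s schema:

```python
from dataclasses import dataclass

# Hypothetical shared inventory giving the network team the visibility
# described above: each record ties a VM to its IP address, DNS name and
# the business metadata (application, location) that needs tracking, so
# an audit can answer "which VM holds this address right now?" without
# going through the server team.
@dataclass
class VmRecord:
    vm: str
    ip: str
    fqdn: str
    application: str
    location: str

inventory = [
    VmRecord("db-01", "10.20.0.11", "db-01.dc1.internal", "billing", "dc1"),
    VmRecord("web-01", "10.20.0.12", "web-01.dc1.internal", "portal", "dc2"),
]

def who_owns(ip):
    """Audit helper: return the VM record currently holding an IP, if any."""
    return next((r for r in inventory if r.ip == ip), None)
```

For this to stay accurate, the hypervisor would need to update the inventory automatically on every VM create and delete, rather than relying on the manual methods described above.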

Time is money

Speed is an important factor to consider, as a private cloud is only as fast as its slowest component. Due consideration must be given to core network services when building a private cloud, such as assigning IP addresses and DNS records, so that VMs can be commissioned and decommissioned in a matter of moments.
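Commissioning in moments implies that address assignment and DNS registration happen in one automated step. The sketch below shows the idea with an in-memory allocator; a real deployment would call a DDI product’s API, and the subnet and domain names here are illustrative assumptions:

```python
import ipaddress

# Hypothetical in-memory IPAM: allocate the next free address in a subnet
# and register a matching DNS A record when a VM is created, then release
# both in a single step when it is destroyed.
subnet = ipaddress.ip_network("10.20.0.0/24")
allocated = {}      # ip -> vm name
dns_records = {}    # fqdn -> ip

def provision_vm(vm_name, domain="cloud.example.internal"):
    """Assign the next free IP and create an A record; returns (ip, fqdn)."""
    for host in subnet.hosts():
        ip = str(host)
        if ip not in allocated:
            allocated[ip] = vm_name
            fqdn = f"{vm_name}.{domain}"
            dns_records[fqdn] = ip
            return ip, fqdn
    raise RuntimeError("subnet exhausted")

def decommission_vm(vm_name):
    """Release the VM's IP and remove its DNS record together."""
    for ip, owner in list(allocated.items()):
        if owner == vm_name:
            del allocated[ip]
    for fqdn in list(dns_records):
        if fqdn.startswith(vm_name + "."):
            del dns_records[fqdn]

ip, fqdn = provision_vm("web-01")
```

The design point is that commissioning and decommissioning each touch both systems atomically, so addresses and records cannot drift apart.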


Rapid delivery is a conditional promise of private cloud. Manually provisioning DNS records and IP addresses in a virtual environment can delay delivery by hours or even days. The process is error-prone and inefficient, and may leave behind a virtual wasteland of unused IP addresses and DNS records. Moreover, just a few keystroke errors can lead to IP address conflicts, resulting in significant downtime in the private cloud environment.

Unreliable DDI (DNS, DHCP and IP address management services) can pose a significant threat to any organisation with the risk of a potentially costly network outage. And the risks extend beyond the network itself. If IP addresses of VMs are being used for billing internal “customers,” then the manual processes may even lead to inaccurate charges, resulting in either lost revenue or disgruntled customers. 

The importance of highly available DDI services in providing scalability and resilience cannot be overstated for private clouds running critical workloads, or those spanning several geographical locations. And, looking forward, any limits to the scalability of the network could prevent the deployment of additional tenants and VMs needed to meet the demands of an organisation’s growth.

Don’t take your chances – automate


Success in deploying a private cloud is largely dependent on an organisation both understanding and giving urgent consideration to all critical factors, such as those mentioned above. Hinging its approach on principles of automation, visibility and integration can help an organisation take effective control of its private cloud deployment. 

In the most successful private cloud deployments, we frequently see the management of storage and compute heavily automated, supporting the agile delivery of low-cost services to lines of business and delivering tangible benefits across the organisation.

Arya Barirani, VP, Product Marketing at Infoblox 

Arya Barirani is Vice President of Product Marketing at Infoblox, where he is responsible for global go-to-market and marketing strategy as well as product and company positioning and messaging. 

An 18-year veteran of business-to-business technology marketing, Barirani’s programs have a proven track record of success in driving growth and capturing market share.  His work spans the entire spectrum of marketing activity – from content and brand to demand generation. 

Barirani has held marketing leadership positions at Symantec, Hewlett Packard, Mercury Interactive (acquired by HP), Veritas Software (acquired by Symantec), and Computer Associates International.

Barirani lives in Northern California with his wife and two children, and holds a Bachelor’s degree in Computer Science from the University of Iowa.

Tweet Arya at: @abarirani