Top Five Considerations for Migrating to Colocation

All organisations grow and evolve, and at some point every one of them hits the moment where it must tackle the 'buy or build' dilemma for the physical assets the company runs on.

Their critical systems and infrastructure run the show, so it's worth considering how best to deploy them for the long term. Businesses have to weigh the pros and cons of building and maintaining data centres themselves against outsourcing them to a partner, which means placing a great deal of trust externally.

When it comes to data centres, this means either building and maintaining facilities on your own premises or renting space from a specialist colocation provider. One key advantage of moving away from a proprietary data centre with a limited geographic footprint is that a colocation provider can often offer access to multiple, geographically diverse centres. This can improve backup and disaster-recovery preparedness through the provision of primary and secondary locations. Where an organisation provides online services, this can be thought of as insurance that customers will never see a loss of service due to a single point of failure.

How to define needs, how to identify the right provider and how to negotiate the minutiae of actually migrating to the new space should all be critically assessed. Once the contract is signed, it will be time-consuming, complex, expensive and disruptive to move again.

Here are five data centre infrastructure management (DCIM) considerations when migrating from on-premises facilities to colocation.

LOCATION (PHYSICAL AND STAFFING)

As with scoping out any real estate, location is the top consideration. For a colocation provider, this means both the physical facility and the location of the support staff.

Location has an inordinately large impact on the security and well-being of data centre assets. Weather patterns, seismic history and accessibility to critical infrastructure such as roads and airports all need consideration. Organisations in industries with stringent regulatory compliance requirements may be prohibited from storing customer data across borders.

The same principle applies to support staff. Whether the facility is staffed by your own people or by the colocation provider's personnel, you need to know how the assets will be looked after. Check whether the provider outsources any of its support.

POWER SUPPLY

On a macro level, consider the robustness of the regional power grid infrastructure and its redundancy capabilities. Look at the location of power stations, substations and feeds to the facility, as well as redundancy throughout the delivery system. Ensure no constraints will hamper operation in the area. Research recent local outages and the time-to-repair track record to prepare contingency plans.

On a micro level, consider power monitoring within the space. Do they have metered power to precisely quantify and bill on usage, with the agility to let you grow or reduce your power draw? Do they have a way to detect, monitor and mitigate power abnormalities? What are their backup and disaster-recovery plans when power disruptions occur within the colocation facility?

COOLING

After power, proper cooling is indispensable in the colocation space. The facility's power usage effectiveness (PUE) rating is crucial in optimising cooling costs and effectiveness: PUE is the ratio of total facility power to the power delivered to the IT equipment, so it shows how much overhead is associated with delivering power to the rack.

Ideally, tenants should pay only for the power they consume, multiplied by a PUE factor to account for the additional power used for cooling. Look for a provider with hybrid cooling technologies (i.e. those that utilise natural cooling such as free outside air) and ample cooling redundancy.
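To illustrate the arithmetic, here is a minimal sketch in Python of that "IT energy multiplied by PUE" billing model. The function name, figures and tariff are purely illustrative assumptions, not any provider's actual billing formula.

# A minimal sketch, assuming a simple "IT energy x PUE x tariff" model;
# the function name and all figures below are illustrative only.

def estimate_monthly_power_cost(it_load_kw: float,
                                pue: float,
                                rate_per_kwh: float,
                                hours: float = 730.0) -> float:
    """Estimate a monthly power charge from the IT load, the facility PUE
    and the electricity tariff."""
    it_energy_kwh = it_load_kw * hours        # energy drawn by the racks
    total_energy_kwh = it_energy_kwh * pue    # cooling and other overhead added via PUE
    return total_energy_kwh * rate_per_kwh

# Example: a 20 kW deployment at a PUE of 1.4 versus a less efficient 1.8.
for pue in (1.4, 1.8):
    cost = estimate_monthly_power_cost(it_load_kw=20, pue=pue, rate_per_kwh=0.15)
    print(f"PUE {pue}: ~{cost:,.0f} per month")

At a 20 kW load, the gap between a PUE of 1.4 and 1.8 amounts to roughly 5,800 kWh of extra overhead per month, which is exactly the kind of difference that metered, PUE-based billing makes visible.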


DCIM LITERACY

Because data centres have historically been purpose-built facilities full of complex technology, managing those technologies has been problematic. Often, individual devices come with their own management software, but the separate systems may not be compatible or integrated. Ask providers about their DCIM competencies. Are all their systems connected? Are all sensors connected to and monitored by software? Can they generate dashboards and reports on the fly and zoom down to floor, cabinet and rack level? Do they have end-to-end asset management capabilities to let you manage your assets from 'dock to decom'? Do they integrate with other ITSM systems so you can tap into the capabilities you need?

WORKLOAD AND WORKFLOW MANAGEMENT

After the physical elements, focus on how the workload is delivered and managed. There are several key considerations around the type of data or applications an organisation is trying to deliver. Cloud and big data will continue to change how organisations distribute data, especially among multiple locations.

The IT landscape is continuously shaped by shifts such as 'data on demand', Bring Your Own Device and the Internet of Things, so ensure providers are in the know and capable of keeping up.

Balancing workload, continuity and disaster recovery is important for sustainability. The distance the data has to travel and the amount of bandwidth provided can mean the difference between a great user experience and a failed deployment. Workflow management systems prioritise the delivery of certain data and infrastructure components. They also help determine which workloads carry the higher uptime requirements so that, in times of bottleneck, organisations can access the most important information first.
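As a rough illustration of that prioritisation logic, the Python sketch below serves the highest-priority workloads first until the bandwidth available during a bottleneck is exhausted. The workload names, priorities and figures are hypothetical, and this is not a description of any specific workflow management product.

# A minimal sketch of bottleneck prioritisation: most critical workloads
# are delivered first within the available bandwidth. All names and
# numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int        # 1 = most critical, higher numbers = less critical
    bandwidth_mbps: int  # bandwidth needed to deliver it

def schedule(workloads: list[Workload], available_mbps: int) -> list[str]:
    served, remaining = [], available_mbps
    for w in sorted(workloads, key=lambda w: w.priority):
        if w.bandwidth_mbps <= remaining:
            served.append(w.name)
            remaining -= w.bandwidth_mbps
    return served

demand = [
    Workload("payments API", priority=1, bandwidth_mbps=300),
    Workload("customer portal", priority=2, bandwidth_mbps=400),
    Workload("nightly analytics", priority=3, bandwidth_mbps=600),
]
print(schedule(demand, available_mbps=800))  # -> ['payments API', 'customer portal']

In this toy scenario the analytics job simply waits, which is the point: agreeing up front which workloads can wait is as much a contractual question for the provider as a technical one.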

AND FINALLY

These steps are the start of a journey. Issues such as physical security are another matter entirely, and an array of security certifications such as PCI DSS 2.0, SSAE 16 and ISO 27002 needs to be understood in the context of the service required.

Of crucial importance, but worth an article or two in its own right, is an investigation into the Service Level Agreement (SLA). SLAs are the cornerstone of a good relationship and avert conflict. When selecting the right colocation provider, drawing up a good SLA and establishing clear lines of demarcation are crucial.

Migrating data centres into colocation is mission critical. It's key to consider the major DCIM factors outlined above, as not all colocation providers are the same.


Biography: As a co-founder, Robert co-developed Nlyte, the industry-leading data centre infrastructure management (DCIM) suite, to address the need for solutions that help data centres make informed decisions for the planning and effective management of their assets and infrastructure. As CTO, Robert is responsible for developing new technology and for setting and driving product strategy.

Prior to Nlyte, Robert was Data Center Manager for UBS Investment Bank and worked for Equitrac UK, developing cost charge-back solutions for legal firms.

Nlyte Software was formed in 2004 by data centre managers driven to find a better way to manage the complexity of data centre resources, assets and staff in order to reduce costs and mitigate risk. It helps companies manage their physical infrastructure in their own data centres or colocation facilities. With over 300 customers, Nlyte counts among them three of the top five of the world's most valuable brands, six of the top ten technology companies, and many of the largest US and global banks, healthcare companies and government agencies.
