Do the metrics of a data centre impact upon its agility?

IT operations managers need to understand how their data centre is performing in order to make better decisions about how and where to invest in their IT infrastructure. Key to those right-sized investment decisions is looking at the various ways to measure performance; here are five that should be considered top priorities.

[easy-tweet tweet=”IT operations managers need to understand how their #datacentre is performing” user=”atchisonfrazer @comparethecloud”]

How can you merge heuristics with data centre metrics in a way that delivers the best use of space and cooling power?

It used to be that heuristics were applied only to identifying patterns in network security traffic. The same techniques can now be applied to infrastructure performance, so potential issues, including the signs of a hack, can be identified and acted on quickly.

To get better information, it is best to cross-reference NetFlow metrics with the different types of KPIs held in your more conventional silos. Doing this helps you identify contention issues and lays the foundations for a more intelligent investment plan, which increases productivity and makes your system far more efficient.
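
As a rough illustration of that cross-referencing, the sketch below joins per-host NetFlow throughput with CPU-utilisation KPIs from a conventional monitoring silo and flags hosts where heavy traffic coincides with high CPU, a crude contention signal. The column names, figures and thresholds are all assumptions, and pandas is used purely for convenience.

```python
# Hypothetical sketch: cross-reference NetFlow throughput with CPU KPIs
# to surface possible contention. Columns, values and thresholds are invented.
import pandas as pd

# Per-host NetFlow summary (e.g. exported from a flow collector)
netflow = pd.DataFrame({
    "host": ["web01", "web02", "db01"],
    "mbps_in": [850.0, 120.0, 640.0],
})

# Per-host CPU KPI from a conventional monitoring silo
kpis = pd.DataFrame({
    "host": ["web01", "web02", "db01"],
    "cpu_util_pct": [92.0, 35.0, 55.0],
})

merged = netflow.merge(kpis, on="host")

# Flag hosts where heavy traffic coincides with high CPU utilisation
contended = merged[(merged["mbps_in"] > 500) & (merged["cpu_util_pct"] > 85)]
print(contended)
```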

How can heuristic-based data centre metrics help our operations?

As the modern data centre has become more and more complex (think conversational flows, replatforming, security, mobility, cloud compatibility), heuristics have become more and more important. They give us the capability to perform calculations that were once back-of-envelope work and to take the risk out of human intervention; the end product is the ideal: a machine-learned knowledge base.
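
By way of example only, a very small heuristic of this kind might flag utilisation readings that deviate sharply from an exponentially weighted baseline learned from recent samples. The smoothing factor, threshold and data below are invented for illustration.

```python
# Illustrative heuristic: flag samples that deviate sharply from an
# exponentially weighted moving average of recent utilisation readings.
# Alpha and the deviation threshold are arbitrary assumptions.
def flag_anomalies(samples, alpha=0.3, threshold=0.25):
    baseline = samples[0]
    anomalies = []
    for i, value in enumerate(samples[1:], start=1):
        # Relative deviation from the learned baseline
        deviation = abs(value - baseline) / max(baseline, 1e-9)
        if deviation > threshold:
            anomalies.append((i, value))
        # Update the baseline so the heuristic keeps learning
        baseline = alpha * value + (1 - alpha) * baseline
    return anomalies

cpu_utilisation = [0.42, 0.44, 0.41, 0.43, 0.78, 0.45, 0.44]
print(flag_anomalies(cpu_utilisation))  # [(4, 0.78)]
```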

Is it possible to properly model costs as well as operational metrics?

When it comes to managing and upgrading our system, most of us have to make do with a fixed budget. This means that we are susceptible to ‘over-provisioning’ hardware and infrastructure resources. The main cause of this is that we can’t properly see the complexities that come as part and parcel of a contemporary data centre.

[easy-tweet tweet=”We over-provision because we can’t properly see the complexities that come with a contemporary #datacentre”]

What you need is an effective infrastructure performance management tool. This will help you properly calculate your capacity and make a better-informed investment decision, which means you won't overspend guarding against an overload you can't actually measure.
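
A back-of-envelope version of that capacity calculation might look like the sketch below, which estimates how many hosts a workload actually needs once a headroom allowance for peaks is applied. Every figure is a placeholder.

```python
# Rough capacity sizing sketch: how many hosts does the workload really need
# once a headroom allowance for peaks is included? All numbers are placeholders.
def hosts_required(peak_demand_vcpus, vcpus_per_host, headroom=0.2):
    # Keep a fraction of each host free for bursts and failover
    usable_per_host = vcpus_per_host * (1 - headroom)
    hosts = -(-peak_demand_vcpus // usable_per_host)  # ceiling division
    return int(hosts)

current_estate = 40                      # hosts bought "to be safe"
needed = hosts_required(peak_demand_vcpus=900, vcpus_per_host=64)
print(f"Hosts needed: {needed}, over-provisioned by: {current_estate - needed}")
```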

Can financial metrics-based modelling benefit data centres?

[easy-tweet tweet=”Financial metrics allow #datacentre managers to deliver concise, easy to read metrics to people outside of #IT”]

Using financial metrics, data centre managers can deliver core IT operational metrics to business-side managers and other stakeholders in a language that makes sense. That means continuously illustrating a realistic view of available capacity against overall efficiency, acceptable performance thresholds between running hot and wasted “headroom”, and a greater degree of granularity in the ROI benefits of virtualisation over maintaining legacy infrastructure. Framing performance as return on investment also makes it far easier to communicate your results to peers outside of IT.

You will find below some of the best metrics with which to demonstrate ROI analysis (a worked sketch follows the list):

  • TB Storage Reduction: Variable Cost / GB / Month $
  • Server Reductions: Annual OPEX / Server $
  • VM Reductions: Variable Cost / VM / Month $
  • Applications Reduction / Year $K
  • Database Reductions: Variable Cost / Database / Year $
  • Consulting / Contractor: Reduction $K
  • Revenue Improvement / Year $K
  • Blended/Gross Margin %
  • Facilities & Power / Year $K
  • Ancillary OPEX reductions / Year $K
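
To show how line items like these roll up into a single ROI figure, here is an illustrative calculation. Every value is invented and the categories used are just a subset of the list above.

```python
# Illustrative ROI roll-up using a few of the categories above.
# Every figure is invented; substitute your own measured savings and costs.
annual_savings = {
    "storage_reduction":  120 * 0.02 * 1024 * 12,  # 120 TB at $0.02/GB/month
    "server_reductions":  15 * 3000,               # 15 servers at $3,000 OPEX/year
    "vm_reductions":      200 * 25 * 12,           # 200 VMs at $25/month
    "facilities_power":   40_000,                  # facilities & power per year
}

project_cost = 150_000  # one-off cost of the IPM tooling and rollout

total_savings = sum(annual_savings.values())
roi_pct = (total_savings - project_cost) / project_cost * 100
print(f"Annual savings: ${total_savings:,.0f}, first-year ROI: {roi_pct:.0f}%")
```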

Is it possible for data centre managers to provide a holistic operational metrics solution?

One of the best ways to visualise performance is through a high-fidelity, streaming, multi-threaded dashboard. These real-time operational dashboards provide easy-to-understand intelligence made up of data points and their key interdependencies, covering endpoint devices, physical infrastructure and virtualised applications.
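
The interdependency part of that picture can be thought of as a small graph of components with the latest metric attached to each. The sketch below is purely illustrative; the component names, values and threshold are all made up.

```python
# Illustrative dependency map: which physical and virtual components a service
# relies on, each carrying its latest metric. All names and values are invented.
dependencies = {
    "checkout-app":  ["vm-web-07", "vm-db-02"],
    "vm-web-07":     ["host-esx-03"],
    "vm-db-02":      ["host-esx-04", "san-array-01"],
}

latest_metric = {            # e.g. utilisation as a fraction of capacity
    "vm-web-07": 0.64, "vm-db-02": 0.91,
    "host-esx-03": 0.55, "host-esx-04": 0.88, "san-array-01": 0.72,
}

def at_risk(component, threshold=0.85):
    """Walk the dependency graph and report any hot component underneath."""
    hot = []
    for dep in dependencies.get(component, []):
        if latest_metric.get(dep, 0) > threshold:
            hot.append(dep)
        hot.extend(at_risk(dep, threshold))
    return hot

print(at_risk("checkout-app"))  # ['vm-db-02', 'host-esx-04']
```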

The best way to ensure that you minimise the negative impacts of a service outage is to automate your system

We would recommend integrating with an IT operations management platform such as ServiceNow, which helps increase agility and responsiveness. However, none of this is possible without good data and visualisation: to predict the future, you need to understand what is happening right now.
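
As one hedged example of that kind of integration, the sketch below raises a ServiceNow incident through its REST Table API when a monitored metric breaches a threshold. The instance URL, credentials, threshold and field values are placeholders, and a production integration would more likely go through ServiceNow's event-management tooling than raw incident creation.

```python
# Hedged sketch: raise a ServiceNow incident via the REST Table API when a
# monitored metric breaches its threshold. Instance and credentials are
# placeholders; adapt field values to your own incident process.
import requests

INSTANCE = "https://your-instance.service-now.com"   # placeholder
AUTH = ("integration_user", "integration_password")  # placeholder credentials

def raise_incident(metric_name, value, threshold):
    if value <= threshold:
        return None
    payload = {
        "short_description": f"{metric_name} at {value:.0%}, above {threshold:.0%}",
        "urgency": "2",
    }
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]

print(raise_incident("cluster CPU utilisation", 0.93, 0.85))
```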


Atchison is a versatile, insights-driven tech-sector marketing pro with a strong track record driving global sales enablement and profit discovery through inspiring cross-platform campaigns and strategies.
