How DCIM Improves Performance and Reduces Carbon Footprint

By Margaret Ranken, Telecommunications Analyst

As data centres have become increasingly complex, juggling efficiency and availability to meet the ever-growing demand for computing power, the spreadsheets that most data centre managers currently use are no longer adequate.

Not only are they a poor means of tracking the increasingly large numbers of assets in the data centre, but they are incapable of meeting the challenges created by virtualization, which pushes servers to their limits and creates “hotspots”. DCIM (data centre infrastructure management) tools have been developed to fill the gap, and many commentators have forecast rapid adoption: in 2012 Gartner predicted that DCIM could penetrate as many as 60 percent of U.S. data centres by 2015, and the 451 Group’s recent study predicts that DCIM sales will grow at a 44% CAGR to reach $1.8bn in aggregate revenue in 2016.

However, adoption so far has been slow, despite the fact that leading vendor CA Technologies claims energy savings of 30% and an 11-month payback. A Heavy Reading report recently sounded a note of caution about implementation challenges in the larger data centres that host cloud infrastructure, where IT and building systems are often managed by separate teams.

DCIM is important because it can both increase the reliability of the infrastructure and help to reduce carbon emissions. Managing the building infrastructure separately from the IT systems and processes creates problems that have become more critical as virtualization has created a dynamic computing environment within a static building environment. Inefficient allocation of virtualized applications to servers can cause rapid changes in computing load that increase power consumption and create “hotspots”. If unanticipated, these can overwhelm the data centre’s air-conditioning systems, reducing efficiency and, in turn, availability as servers overload and fail.

The DCIM approach integrates the management of the IT systems and the building systems into one seamless whole, so that the load on the servers is managed in tandem with the building systems. DCIM also helps to enforce consistent, repeatable processes and reduce the operator errors which can account for as much as 80% of system outages.

The detailed real-time monitoring that DCIM tools support improves visibility of both IT usage and physical infrastructure. Right from the design stage, DCIM uses power, cooling and network data to determine the optimum placement of new servers. During operation, equipment that is consuming large amounts of energy is identified and, where hotspots are developing, fan speeds are increased and server loadings re-configured to pre-empt problems. Integrated asset management tools mean that managers know exactly what the servers are doing and when resources such as space, power, cooling and network connectivity are likely to run out. They can analyse “what if” scenarios for server refreshes, the impact of virtualization, and moves, adds and changes and predict the effects of any faults.
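As a rough illustration of the kind of pre-emptive check described above, the sketch below flags racks whose inlet temperature or power draw exceeds a threshold, so that fan speeds can be raised or workloads migrated before a hotspot becomes an outage. The rack names, sensor feed and limit values are illustrative assumptions, not features of any particular DCIM product.

```python
# Illustrative sketch: flag racks that need attention before a hotspot
# develops. Real DCIM tools run checks like this continuously against
# live sensor feeds; the thresholds and readings here are assumed.

INLET_TEMP_LIMIT_C = 27.0   # assumed upper inlet-temperature threshold
POWER_LIMIT_KW = 8.0        # assumed per-rack power budget

def flag_hotspots(rack_readings):
    """Return rack IDs that need attention (raise fans / migrate load).

    rack_readings: dict of rack_id -> (inlet_temp_c, power_kw)
    """
    flagged = []
    for rack_id, (temp_c, power_kw) in rack_readings.items():
        if temp_c > INLET_TEMP_LIMIT_C or power_kw > POWER_LIMIT_KW:
            flagged.append(rack_id)
    return flagged

# Hypothetical snapshot of monitored racks
readings = {
    "rack-01": (24.5, 6.2),   # within limits
    "rack-02": (28.1, 7.9),   # running hot: hotspot candidate
    "rack-03": (25.0, 8.4),   # over its power budget
}
print(flag_hotspots(readings))  # ['rack-02', 'rack-03']
```

In practice the same telemetry also feeds the “what if” analysis: a proposed server move is checked against these limits before it happens rather than after.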

DCIM brings the additional benefit of making it simple to monitor and reduce the data centre’s carbon footprint. Using the wealth of data gathered, it becomes possible to set energy-saving goals, make plans to reach them, implement those plans and document the progress. DCIM tools provide the accurate energy-consumption analysis and verification needed to demonstrate compliance with environmental regulations and contribute to meeting corporate social responsibility targets.

When choosing a cloud service or managed service provider, look for one that is either already using DCIM or plans to introduce it soon if you want to be sure that your mission-critical IT systems are being operated to the highest possible standards.
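To make the energy-consumption analysis above concrete, the sketch below derives two of the headline figures DCIM reporting typically produces: PUE (power usage effectiveness, the ratio of total facility energy to IT energy) and an estimated carbon footprint. The metered values and the grid carbon-intensity factor are illustrative assumptions, not real data.

```python
# Illustrative sketch: headline efficiency and carbon figures from
# metered energy data. The grid intensity below is an assumed value;
# a real report would use the local grid's published factor.

GRID_INTENSITY_KG_CO2_PER_KWH = 0.4  # assumed grid carbon intensity

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy.

    1.0 is the theoretical ideal; lower is better.
    """
    return total_facility_kwh / it_equipment_kwh

def carbon_kg(total_facility_kwh):
    """Estimated CO2 emissions for the metered energy consumption."""
    return total_facility_kwh * GRID_INTENSITY_KG_CO2_PER_KWH

# Hypothetical monthly meter readings
monthly_total_kwh = 150_000   # whole facility, including cooling
monthly_it_kwh = 100_000      # IT equipment only

print(round(pue(monthly_total_kwh, monthly_it_kwh), 2))  # 1.5
print(carbon_kg(monthly_total_kwh))                      # 60000.0
```

Tracked month on month, these two numbers are enough to document progress against an energy-saving goal and to support the compliance reporting mentioned above.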

About the Author: Margaret Ranken has spent over fifteen years analysing the telecoms industry and providing strategic advice to some of its largest players. Her reports have covered a variety of industry topics including M2M, Enterprise Mobile Data, Unified Communications, Fixed Mobile Convergence, Quad Play and High-Speed Broadband Networks. Her forecasts for Business Data Services were highly regarded in the industry. Before that, she spent a decade working on European Commission research projects in advanced communications, managing projects and writing reports on their results. She started her career as an engineering trainee with what is now British Telecom. She has an MSc in Telecommunications and Information Systems from the University of Essex and an MA in Engineering from the University of Cambridge.

