Data center infrastructure management

Data center infrastructure management (DCIM) is a category of solutions created to extend traditional data center management to cover all of the physical assets and resources found in the facilities and IT domains. Over time, DCIM deployments integrate information technology (IT) and facility management disciplines to centralize monitoring, management, and intelligent capacity planning of a data center's critical systems. Because DCIM is a broad term covering a wide range of data center management functions, each deployment typically includes only the subset of DCIM capabilities needed and expected over time.

Full DCIM deployments involve specialized software, hardware, and sensors. With more than 75 vendors in 2014 identifying their offerings as part of the DCIM market segment, the rapid evolution of the category has led to the creation of many associated data center performance management and measurement metrics. These include industry-standard metrics such as PUE, CUE, and DCeP (Data Center Energy Productivity), as well as vendor-driven metrics such as PAR4 (server power usage) and DCPM (Data Center Predictive Modeling), all intended to provide increasingly cost-effective planning and operations support for the data center and its contained devices.

Since its identification as a missing component of optimized data center management, the broad DCIM category has been flooded with a wide range of point solutions and hardware-vendor offerings intended to address this void. The analyst firm Gartner Research has introduced a set of terms to segment this population of DCIM vendors. DCIM suite vendors, numbering around a dozen in 2012, offer comprehensive, integrated software that addresses lifecycle asset management and touches on both IT and facilities. A second term, DCIM specialists, describes the remaining DCIM vendors. In general, these specialists can be viewed as enhancements to the DCIM suite offerings, and in most cases can also serve as viable stand-alone solutions for a specific set of data center management needs.

The large framework providers are re-tooling their own wares and forming DCIM alliances and partnerships with other DCIM vendors to complete their management picture. The inefficiencies previously caused by limited visibility and control at the physical layer of the data center are simply too costly for end users and vendors alike in an energy-conscious world. These large framework providers include Hewlett-Packard, BMC, CA, and IBM/Tivoli; all have promised that DCIM will become part of their overall management structure and are pursuing this through in-house development and partnership efforts.

While the physical layer of the data center has historically been viewed as a hardware exercise, a number of DCIM suite and DCIM specialist software vendors offer varied DCIM capabilities, including one or more of the following: capacity planning, high-fidelity visualization, real-time monitoring, cable/connectivity management, environmental/energy sensors, business analytics (including financial modeling), process/change management, and integration with various types of external management systems and data sources.

In 2011, some analysts predicted that data center management domains would converge across the logical and physical layers. Such a converged management environment would allow enterprises to use fewer resources, eliminate stranded capacity, and coordinate the operations of these otherwise independent components.

Driving factors
According to a Gartner IT analyst presenting in December 2013, "By 2017, DCIM tools will be significantly deployed in over 60% of larger data centers (> 3,000 sq ft) in North America." DCIM can therefore be viewed as a high-growth category, since less than 10% of the same market had adopted anything in this category by 2012. Several trends are driving the adoption of DCIM. These drivers include:
 * Capacity planning
 * Asset lifecycle management
 * Uptime and availability
 * More efficient power and support for high heat-density cooling
 * Data center consolidation and technology refresh
 * Virtualization and cloud computing
 * Increased reliance on critical IT systems
 * Energy efficiency and green IT initiatives

Features
At a high level, DCIM serves many purposes. It can support data center availability and reliability requirements; identify and eliminate sources of risk to increase the availability of critical IT systems; reveal interdependencies between facility and IT infrastructures so that facility managers are alerted to gaps in system redundancy; and help model the cost structures of building and maintaining, over long periods of time, the large accumulation of assets that form the data center.

Some segmentation of the market is beginning to occur (as of 2013). In many public forums, the roster of DCIM suppliers is being grouped into at least two buckets, or segments, to reduce customer confusion when researching DCIM solutions. The first bucket comprises the integrated software suites, in which a comprehensive set of lifecycle asset management features are brought together and share a common view of the data center. Integrated repositories, reporting, and connectivity are all expected within these suites. Suites share a common look and feel, leverage the underlying asset knowledge where appropriate, and maintain a single source of truth across the entire suite for any given attribute.

The second group of DCIM suppliers includes the remaining 100+ vendors. These vendors enhance the DCIM suites and can also exist as stand-alone solutions; they are sometimes referred to as 'specialists' or 'DCIM-ready' components, and include sensor systems, power management solutions, analytics packages, and monitoring tools. One or more of these enhancement solutions will likely be deployed alongside, or coupled with, a single selected DCIM suite. Further segmentation is expected as vendors align their offerings to customer needs.

One popular initiative that certain DCIM solutions address is the reduction of energy usage and the improvement of energy efficiency. In these cases, DCIM solutions enable data center managers to measure energy use, which in turn enables safe operation at higher densities. According to Gartner Research, DCIM can lead to energy savings that reduce a data center's total operating expenses by up to 20 percent. In addition to measuring energy use, other DCIM components such as computational fluid dynamics (CFD) modeling can be used to optimize airflow and reclaim stranded resources such as space, further driving down infrastructure costs.

DCIM software is used to benchmark current power consumption through real-time feeds and equipment ratings, then model the effects of "green" initiatives on the data center's power usage effectiveness (PUE) and data center infrastructure efficiency (DCIE) before committing resources to an implementation.
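The PUE and DCIE metrics mentioned above have simple, widely published definitions: PUE is total facility power divided by IT equipment power, and DCIE is its inverse expressed as a percentage. A minimal sketch, assuming kilowatt readings from a facility meter and an IT-load meter:

```python
# Minimal sketch of the PUE/DCIE calculations described above.
# Readings are assumed to come from facility and IT power meters (in kW).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT load."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data center infrastructure efficiency: inverse of PUE, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Example: 1,500 kW drawn at the utility meter, 1,000 kW reaching IT loads.
print(pue(1500, 1000))   # 1.5
print(dcie(1500, 1000))  # ~66.7
```

A DCIM tool applies the same arithmetic continuously over metered feeds, so a proposed "green" change can be modeled by adjusting the facility-side term and recomputing.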

On the IT side of DCIM, certain vendor implementations of DCIM suites allow optimal server placement with regard to power, cooling, and space requirements; U.S. Patent 7,765,286 discusses this type of intelligent placement based upon one or more existing data center conditions.
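The core idea behind condition-based placement can be illustrated with a small, hypothetical sketch (not the patented method or any vendor's API): filter racks that can absorb a new server's power, cooling, and space needs, then rank the survivors by remaining headroom. All names and fields here are illustrative.

```python
# Hypothetical sketch of condition-based server placement: keep only racks
# that satisfy power, cooling and space constraints, then rank by headroom.
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    spare_kw: float          # unused power budget
    spare_cooling_kw: float  # unused cooling capacity
    spare_u: int             # free rack units

def candidate_racks(racks, needed_kw, needed_u):
    """Return racks that satisfy all three constraints, best headroom first."""
    fits = [r for r in racks
            if r.spare_kw >= needed_kw
            and r.spare_cooling_kw >= needed_kw
            and r.spare_u >= needed_u]
    # Prefer the rack with the most remaining power headroom after placement.
    return sorted(fits, key=lambda r: r.spare_kw - needed_kw, reverse=True)

racks = [Rack("A1", 2.0, 1.5, 4), Rack("B3", 5.0, 6.0, 10), Rack("C2", 0.5, 2.0, 12)]
print([r.name for r in candidate_racks(racks, needed_kw=1.2, needed_u=2)])  # ['B3', 'A1']
```

A real DCIM suite would draw these headroom figures from live sensor and asset data rather than static records, but the filter-then-rank pattern is the same.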

Evolution of tools
Traditional approaches to resource provisioning and service requests have proven ill-suited to virtualization and cloud computing. The manual handoffs between technology teams were also highly inefficient and poorly documented. This initially led to poor utilization of system resources and an IT staff that spent much of its time on activities providing little business value. To efficiently manage data centers and cloud computing environments, IT teams need to standardize and automate virtual and physical resource provisioning activities and develop better insight into real-time resource performance and consumption.

Data center monitoring systems were initially developed to track equipment availability and to manage alarms. While these systems evolved to provide insight into the performance of equipment by capturing real-time data and organizing it into a proprietary user interface, they have lacked the functionality necessary to effectively monitor and make adjustments to interdependent systems across the physical infrastructure to address changing business and technology needs.
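The alarm-tracking function described above reduces, at its simplest, to comparing each real-time reading against a configured threshold. An illustrative sketch (not any vendor's interface; sensor names and limits are assumptions):

```python
# Illustrative sketch of the basic monitoring/alarm pattern: compare each
# real-time sensor reading against its configured threshold and collect
# any violations for the operator.

def check_alarms(readings, thresholds):
    """Return a list of (sensor, value) pairs that exceed their threshold."""
    alarms = []
    for sensor, value in readings.items():
        limit = thresholds.get(sensor)
        if limit is not None and value > limit:
            alarms.append((sensor, value))
    return alarms

readings = {"inlet_temp_c": 31.5, "humidity_pct": 45.0, "ups_load_pct": 92.0}
thresholds = {"inlet_temp_c": 27.0, "ups_load_pct": 90.0}
print(check_alarms(readings, thresholds))
# [('inlet_temp_c', 31.5), ('ups_load_pct', 92.0)]
```

What the early systems lacked, as noted above, was not this per-sensor check but the ability to act on interdependencies between systems once an alarm fired.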

More sophisticated integrated monitoring and management tools were later developed to connect this equipment and provide a holistic view of the facility's data center infrastructure. In addition to enabling comprehensive real-time monitoring, these tools were equipped with additional modeling and management functionality to facilitate long-term capacity planning; dynamic optimization of critical systems performance and efficiency; and efficient asset utilization.

In response to the rapid growth of business-critical IT applications, server virtualization became a popular method for increasing a data center's IT application capacity without making additional investments in physical infrastructure. Server virtualization also enabled rapid provisioning cycles, as multiple applications could be supported by a single provisioned server.

Modern data centers are challenged with disconnects between the facility and IT infrastructure architectures and processes. These challenges have become more critical as virtualization creates a dynamic environment within a static environment, where rapid changes in compute load translate to increased power consumption and heat dispersal. If unanticipated, rapid increases in heat densities can place additional stress on the data center's physical infrastructure, resulting in a lack of efficiency, as well as an increased risk for overloading and outages. In addition to increasing risks to availability, inefficient allocation of virtualized applications can increase power consumption and concentrate heat densities, causing unanticipated "hot spots" in server racks and areas. These intrinsic risks, as well as the aforementioned drivers, have resulted in an increase in market demand for integrated monitoring and management solutions capable of "bridging the gap between IT and facilities" systems.

In 2010, analyst firm Gartner, Inc. issued a report on the state of DCIM implementations and speculated on future evolutions of the DCIM approach. According to the report, widespread adoption of DCIM over time will lead to the development of "intelligent capacity planning" solutions that support synchronized monitoring and management of both physical and virtual infrastructures.

Intelligent capacity planning will enable the aggregation and correlation of real-time data from heterogeneous infrastructures to provide data center managers with a common repository of performance and resource utilization information. It also will enable data center managers to automate the management of IT applications based on server capacity—as well as conditions within a data center's physical infrastructure—optimizing the performance, reliability and efficiency of the entire data center infrastructure.
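The "common repository" idea described above amounts to joining per-server IT metrics with the facility readings for the rack each server occupies, so utilization and environmental conditions can be viewed side by side. A hedged sketch with illustrative field names (no real DCIM schema is implied):

```python
# Hedged sketch of correlating heterogeneous data into a common repository:
# merge per-server IT metrics with facility readings for the server's rack.
# All metric names and values here are illustrative assumptions.

it_metrics = {"srv-01": {"rack": "A1", "cpu_pct": 85},
              "srv-02": {"rack": "B3", "cpu_pct": 12}}
facility_metrics = {"A1": {"inlet_temp_c": 29.0},
                    "B3": {"inlet_temp_c": 22.0}}

def correlate(it_metrics, facility_metrics):
    """Build one record per server combining the IT and facility views."""
    merged = {}
    for server, metrics in it_metrics.items():
        rack_env = facility_metrics.get(metrics["rack"], {})
        merged[server] = {**metrics, **rack_env}
    return merged

repo = correlate(it_metrics, facility_metrics)
print(repo["srv-01"])  # {'rack': 'A1', 'cpu_pct': 85, 'inlet_temp_c': 29.0}
```

With both views in one record, a capacity planner can, for example, avoid migrating load onto a highly utilized server in an already-warm rack, which is the kind of physical/virtual coordination the Gartner report anticipates.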