Kevin Hughes, Business Development Director, Cooling Line of Business, Asia, Pacific and Japan, Schneider Electric IT Business, says the fault-tolerant nature of a highly virtualised environment could raise questions about the level of redundancy required in the physical infrastructure.
Without question, IT virtualisation (the abstraction of physical network, server, and storage resources) has greatly increased the ability to utilise and scale compute power. Indeed, virtualisation has become the very technology engine behind cloud computing itself. While the benefits of this technology and service delivery model are well known, understood, and increasingly taken advantage of, their effects on the data centre physical infrastructure (DCPI) are less understood.
There are four effects of IT virtualisation that impact the data centre:
1. The rise of high density
Higher power density is likely to result from virtualisation, at least in some racks. Areas of high density can pose cooling challenges that, if left unaddressed, could threaten the reliability of the overall data centre.
2. Reduced IT load can affect PUE
After virtualisation, the data centre's power usage effectiveness (PUE) is likely to worsen, even though the initial physical server consolidation results in lower overall energy use. If the power and cooling infrastructure is not right-sized to the new, lower load, physical infrastructure efficiency as measured by PUE will degrade (see the worked example after this list).
3. Dynamic IT loads
Virtualised IT loads, particularly in a highly virtualised cloud data centre, can vary in both time and location. To ensure availability in such a system, it is critical that rack-level power and cooling health be considered before changes are made.
4. Lower redundancy requirements are possible
A highly virtualised data centre designed and operated with a high level of IT fault tolerance may reduce the need for redundancy in the physical infrastructure. This effect could have a significantly positive impact on data centre planning and capital costs.
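To make the PUE effect in point 2 concrete, here is a minimal, hypothetical calculation. The before-and-after load figures are assumptions chosen only to illustrate why largely fixed infrastructure overhead drags PUE down when the IT load shrinks; real overhead behaviour depends on the specific power and cooling plant.

```python
# Hypothetical figures for illustration only.

def pue(total_facility_kw, it_load_kw):
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Before virtualisation: 1,000 kW of IT load, 800 kW of
# infrastructure overhead (cooling, UPS losses, lighting).
it_before, overhead_before = 1000.0, 800.0

# After consolidation: IT load falls 40%, but much of the
# overhead is fixed, so assume it falls only to 700 kW.
it_after, overhead_after = 600.0, 700.0

print(pue(it_before + overhead_before, it_before))  # 1.80
print(pue(it_after + overhead_after, it_after))     # ~2.17, i.e. PUE worsens
print((it_before + overhead_before) - (it_after + overhead_after))  # 500 kW saved overall
```

In this sketch total facility power drops by 500 kW, so the consolidation is still a clear win on energy, yet PUE rises from 1.80 to roughly 2.17 because the overhead did not shrink in proportion to the IT load.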
There are certain methods for cooling high-density racks to prevent hot spots. Higher rack power densities should prompt data centre operators to examine their existing cooling infrastructure to see whether it can still sufficiently cool the load.

Perhaps the most common method is simply to spread the high-density equipment throughout the data centre floor rather than group it together. By spreading out the loads in this way, no single rack exceeds the design power density, and cooling performance is consequently more predictable. The principal benefit of this strategy is that no new power or cooling infrastructure is required. However, it has several serious disadvantages, including increased floor-space consumption, higher cabling costs, possibly reduced electrical efficiency related to uncontained air paths, and the perception that half-filled racks are wasteful. That said, this simple approach may be a viable option, particularly:
• When the resulting average data centre power density (kW/rack or watts per square foot of white space) is about the same as, or less than, it was before virtualisation (a simple planning check is sketched below).
• When managers have complete control over where physical servers are placed.
• When U space is available in existing racks to allow the spreading to happen.

A more efficient approach may be to isolate higher-density equipment in a separate location from lower-density equipment. This high-density pod would involve consolidating all high-density systems into a single rack or row(s) of racks. Dedicated cooling air distribution (e.g., rack- and row-based air conditioners) and/or air containment (e.g., hot- or cold-aisle containment) could then be brought to these isolated high-density pods to ensure they receive the predictable cooling needed at any given time. The advantages include better space utilisation, higher efficiency, and maximum density per rack. Additionally, for organisations that require high-density equipment to remain co-located, this approach is clearly preferred.
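The first condition in the list above can be checked with simple arithmetic. The sketch below is a hypothetical planning check, not a real capacity-planning tool: the per-rack loads, the sizes of the hosts being placed, and the 6 kW design limit are all assumed figures.

```python
# Hypothetical check: can new high-density hosts be spread across
# existing racks without any rack exceeding the design density?

design_kw_per_rack = 6.0                    # assumed design limit
rack_loads_kw = [3.5, 4.0, 2.5, 5.0, 3.0]   # assumed existing per-rack loads
new_servers_kw = [1.5, 1.5, 2.0, 1.0]       # assumed virtualisation hosts to place

for server in sorted(new_servers_kw, reverse=True):
    # Place each host, largest first, in the rack with the most headroom.
    idx = min(range(len(rack_loads_kw)), key=lambda i: rack_loads_kw[i])
    rack_loads_kw[idx] += server

over = [kw for kw in rack_loads_kw if kw > design_kw_per_rack]
print("per-rack loads:", rack_loads_kw)
print("spreading works" if not over
      else f"{len(over)} rack(s) exceed design density")
```

If any rack ends up over the design limit, spreading alone is not viable and a contained high-density pod of the kind described above becomes the stronger option.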
Careful planning and ongoing management are required to ensure VMs are placed only where healthy power and cooling exist. By constructing sound VM placement policies and by integrating DCIM software with the VM manager, this ongoing management can be automated (a minimal sketch follows). Finally, the high level of fault tolerance that is possible with today's VM manager software makes it possible to employ a less redundant power and cooling infrastructure.
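Here is a minimal sketch of such a pre-move health check. The RackHealth fields, the thresholds, and the rack names are hypothetical; an actual deployment would pull live readings from the DCIM software and drive placement through the VM manager's own API.

```python
# Hypothetical pre-move check: only allow a VM onto a rack whose
# power and cooling are healthy. All data and thresholds are assumed.

from dataclasses import dataclass

@dataclass
class RackHealth:
    name: str
    power_headroom_kw: float   # spare capacity on the rack PDU
    inlet_temp_c: float        # current server inlet temperature
    cooling_redundant: bool    # is backup cooling available here?

def safe_to_place_vm(rack: RackHealth, vm_load_kw: float,
                     max_inlet_c: float = 27.0) -> bool:
    """Permit a move only if the target rack has power headroom,
    an acceptable inlet temperature, and redundant cooling."""
    return (rack.power_headroom_kw >= vm_load_kw
            and rack.inlet_temp_c <= max_inlet_c
            and rack.cooling_redundant)

racks = [
    RackHealth("row3-rack07", power_headroom_kw=1.2,
               inlet_temp_c=24.5, cooling_redundant=True),
    RackHealth("row5-rack02", power_headroom_kw=0.1,
               inlet_temp_c=29.0, cooling_redundant=False),
]

candidates = [r.name for r in racks if safe_to_place_vm(r, vm_load_kw=0.3)]
print("healthy targets:", candidates)   # ['row3-rack07']
```

Encoding the placement policy this way means an automated VM manager rejects moves to racks that are electrically or thermally unhealthy, rather than discovering the problem after the workload has shifted.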