Transcription

Data Center Cooling Best Practices
By Peter Sacco
Experts for Your Always Available Data Center
White Paper #21
Rev 2007-0

© 2007 PTS Data Center Solutions. All rights reserved. No part of this publication may be used, reproduced, photocopied, transmitted, or stored in any retrieval system of any nature, without the written permission of the copyright owner. www.PTSdcs.com

EXECUTIVE SUMMARY
Maintaining a suitable environment for information technologies is arguably the number one problem facing data center and computer room managers today. Dramatic and unpredictable critical load growth has levied a heavy burden on the cooling infrastructure of these facilities, making intelligent, efficient design crucial to maintaining an always available data center. The purpose of this white paper is to establish a best practices guideline for cooling systems design for data centers, computer rooms, and other mission critical technical spaces.

Table of Contents

Cooling Systems Design Goals
    Adaptability
    Availability
    Maintainability
    Manageability
    Cost
Determine the Critical Load and Heat Load
    Establish Power Requirements on a per RLU Basis
    Determine the CFM Requirements for each RLU
Perform Computational Fluid Dynamic (CFD) Modeling
Determine the Room Power Distribution Strategy
    Power Cable Pathway Considerations
    Power Distribution Impact on Cooling
Determine the Room & Cabinet Data Cabling Distribution Impact
    Data Cable Distribution Impact on Cooling
Establish a Cooling Zone Strategy
    High-Density Cooling
    Zone Cooling
Determine the Cooling Methodology
    Direct Expansion (DX) Systems
    Chilled Water Systems
    Heat Rejection
    Cooling Redundancy
    Precision Air Conditioners Versus Comfort Cool Air Conditioners
    Sensible Heat Ratio (SHR)
    Humidity
Determine the Cooling Delivery Methodology
Determine the Floor Plan
Establish Cooling Performance Monitoring
Approaches Not to Take
    Reducing CRAC Temperatures
    Roof-Mounted Cabinet Fans
    Isolating High-Density Equipment
Conclusion
About PTS Data Center Solutions

Cooling Systems Design Goals
To establish an effective cooling solution for any new or upgraded data center or computer room, it is essential to establish a set of design goals. Experience suggests these goals can be categorized as follows:

Adaptability
1. Plan for increasing critical load power densities
2. Utilize standard, modular cooling system components to speed changes
3. Allow for increasing cooling capacity without load impact
4. Provide for cooling distribution improvements without load impact

Availability
1. Minimize the possibility for human error by using modular components
2. Provide as much cooling system redundancy as the budget will allow
3. Eliminate air mixing by providing supply (cold air) and return (hot air) separation to maximize cooling efficiency
4. Eliminate bypass air flow to maximize effective cooling capacity
5. Minimize the possibility of fluid leaks within the computer room area, and deploy a leak detection system
6. Minimize vertical temperature gradients at the inlet of critical equipment
7. Control humidity to avoid static electricity build-up and mold growth

Maintainability
1. Deploy the simplest effective solution to minimize the technical expertise needed to assess, operate, and service the system
2. Utilize standard, modular cooling system components to improve serviceability
3. Assure the system can be serviced under a single service contract

Manageability
1. Provide accurate and concise cooling performance data in the format of the overall management platform
2. Provide local and remote system monitoring access capabilities

Cost
1. Optimize capital investment by matching the cooling requirements with the installed redundant capacity, and plan for scalability
2. Simplify deployment to reduce unrecoverable labor costs
3. Utilize standard, modular cooling system components to lower service contract costs
4. Provide redundant cooling capacity and air distribution in the smallest feasible footprint

Determine the Critical Load and Heat Load
Determining the critical heat load starts with the identification of the equipment to be deployed within the space. However, this is only part of the entire heat load of the environment. The lighting, people, and heat conducted from the surrounding spaces will also contribute to the overall heat load. As a very general rule of thumb, budget no less than 1 ton of cooling (12,000 BTU/hr, or 3,516 watts) per 400 square feet of IT equipment floor space.

The equipment heat load can be estimated by identifying the current requirement for each piece of equipment and multiplying it by the operating voltage (for all single-phase equipment). The number derived is the maximum draw, or nameplate rating, of the equipment. In reality, the equipment will only draw between 40% and 60% of its nameplate rating in a steady-state operating condition. For this reason, using the nameplate rating alone will yield an inflated load requirement, and designing the cooling system to these parameters will be cost prohibitive. An effort is underway for manufacturers to provide typical load ratings for all pieces of equipment to simplify power and cooling design.

Often, the equipment that will occupy a space has not been determined prior to the commencement of cooling systems design. In this case, the experience of the designer is vital. PTS maintains expert knowledge of the typical load profiles for various application and equipment deployments. For this reason, as well as consideration of future growth factors, it may be easier to define the load in terms of an anticipated standard for a given area. The old standard was a watts-per-square-foot definition; however, that method has proven too vague to be effective.

Establish Power Requirements on a per RLU Basis
Power density is best defined in terms of rack or cabinet footprint area, since all manufacturers produce cabinets of generally the same size.
This area can be described as a rack location unit (RLU), to borrow the description of Rob Snevely of Sun Microsystems.

The standard RLU width is usually based on a twenty-four (24) inch standard. The depth can vary between thirty-five (35) and forty-two (42) inches. Additionally, the height can vary between 42U and 47U of rack space, which equates to a height of approximately seventy-nine (79) and eighty-nine (89) inches, respectively.

A definite trend is that RLU power densities have increased every year:
- 1,500 watts per RLU for a typical late-1990s server deployment
- 4,000 watts per RLU for a typical early-2000s server deployment
- 5,000-8,000 watts per RLU for a 1U server deployment
- Upwards of 30,000 watts per RLU for a blade server deployment

In 2002, American Power Conversion (APC) published data showing that approximately 90% of all new production server environments were being deployed at rack densities between 1,500 and 4,000 watts.

The reality is that a computer room usually deploys a mix of varying RLU power densities throughout its overall area. The trick is to provide predictable cooling for these varying RLU densities by using the average RLU density as the basis of the design, while at the same time providing adequate room cooling for the peak RLU and non-RLU loads.

Determine the CFM Requirements for each RLU
Effective cooling is accomplished by providing both the proper temperature and an adequate quantity of air to the load.
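The sizing rules of thumb above can be pulled together in a short sketch. The figures below (a ten-server cabinet, 4 A at 120 V per server, a 2,000 square foot room, a sample mix of RLU densities) are hypothetical illustrations, not values from this paper; the 50% derate, the 1-ton-per-400-square-feet rule, and the roughly 160 CFM per kW airflow figure come from the surrounding text.

```python
# Illustrative sizing sketch; equipment figures below are hypothetical examples.

NAMEPLATE_DERATE = 0.5   # steady-state draw is ~40-60% of nameplate rating
CFM_PER_KW = 160         # approximate airflow required per kW of load
WATTS_PER_TON = 3516     # 1 ton of cooling = 12,000 BTU/hr = 3,516 W

def nameplate_watts(amps: float, volts: float) -> float:
    """Maximum (nameplate) draw for single-phase equipment."""
    return amps * volts

def derated_watts(amps: float, volts: float,
                  derate: float = NAMEPLATE_DERATE) -> float:
    """Expected steady-state draw, derated from the nameplate rating."""
    return nameplate_watts(amps, volts) * derate

# Hypothetical cabinet: ten 1U servers, each rated 4 A at 120 V single-phase.
per_server_w = derated_watts(amps=4.0, volts=120.0)   # 240 W each
cabinet_w = 10 * per_server_w                         # 2,400 W for this RLU

# Airflow requirement for this RLU at ~160 CFM per kW.
cabinet_cfm = cabinet_w / 1000 * CFM_PER_KW           # 384 CFM

# Room-level rule of thumb: no less than 1 ton per 400 sq ft of IT floor space.
room_sqft = 2000
min_tons = room_sqft / 400                            # 5 tons minimum

# Average RLU density across a hypothetical mixed room: the average is the
# design basis, while peak RLUs still need adequate room cooling.
rlu_watts = [1500, 4000, 4000, 8000, 2400]
avg_rlu_w = sum(rlu_watts) / len(rlu_watts)

print(f"Cabinet load: {cabinet_w:.0f} W, airflow: {cabinet_cfm:.0f} CFM")
print(f"Room minimum: {min_tons:.0f} tons ({min_tons * WATTS_PER_TON:.0f} W)")
print(f"Average RLU density: {avg_rlu_w:.0f} W; peak RLU: {max(rlu_watts)} W")
```

As the text notes, the average density drives the base design while the peak RLUs and non-RLU loads must still be covered by the room-level cooling capacity.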

As far as temperature goes, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) standard is to deliver air between 68°F and 75°F to the inlet of the IT infrastructure. Although electronics perform better at colder temperatures, it is not wise to deliver lower air temperatures due to the threat of reaching the condensation point on equipment surfaces.

Regarding air volume, a load component requires 160 cubic feet per minute (CFM) of air per 1 kW of electrical load. Therefore, a 5,000-watt 1U server cabinet requires 800 CFM.

Most perforated raised floor tiles are available in 25%-open and 56%-open versions. Typically, 25%-open tiles should be used predominantly, with 56%-open tiles used sparingly and for specific instances. Damped, adjustable-flow tiles are also available, but are not recommended due to the complexity involved in balancing variable air delivery across a raised floor.

Typical raised floor cooling capacities can be determined per the following table:

    300 CFM     1,875 watts   Can be achieved with a standard raised floor cooling design
    700 CFM     4,375 watts   Can be achieved with a properly laid out, low-leakage raised floor design
    1,200 CFM   7,500 watts   Can only be achieved within specific areas of a well-planned raised floor

Table 1

Raised floor cooling effectiveness can be further enhanced by ducting IT-equipment return air directly to the CRAC equipment, such that uniform air delivery across the raised floor is not critical.

Perform Computational Fluid Dynamic (CFD) Modeling
CFD modeling can be performed for the under-floor air area as well as the area above the floor. CFD modeling the airflow