DCA Member's Blog
Please post data centre industry comments, experiences, ideas, questions, queries and goings-on here - to stay informed with updates, ensure you are subscribed by clicking the "subscribe" button.

 


Load testing to ASHRAE TC9.9

Posted By Paul Smethurst, Hillstone Products, 28 October 2015

ASHRAE TC9.9 promotes operating IT equipment at higher ambient temperatures to reduce the running costs of datacentre cooling systems.

To incorporate these recommendations, datacentre designers and operators need to be able to replicate the maximum expected environmental conditions during IST (integrated systems testing) commissioning programs.

TC9.9 defines two temperature criteria which must be part of any IST commissioning program. 

- The first criterion is the cold aisle operating temperature range of 18°C to 27°C (64°F to 80°F).

- The second criterion defines the maximum operating temperature of 40°C to 45°C (104°F to 113°F) for commercial-grade IT equipment.

Commissioning managers often complain of load banks cutting out on their over-temperature thermostats, and of the subsequent difficulty of establishing a consistent temperature in the data hall when trying to meet the TC9.9 test temperature range.

Such load banks therefore prevent the IST from demonstrating how the facility responds to a cooling system failure, or from measuring how long the temperature takes to rise to the maximum operating temperature for commercial-grade IT equipment.

Hillstone’s HAC230-6RM server simulator load banks are designed to operate in accordance with TC9.9 over a wide temperature range, proving the successful operation of the datacentre during stress testing of the HVAC system.

This includes:

- operating the cold aisle at 18°C to 27°C (64°F to 80°F)

- determining the runtime to 40°C to 45°C (104°F to 113°F)

- creating a delta T range of 6°C to 20°C (approximately 11°F to 36°F as a temperature difference); see the airflow sketch below
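As a quick illustration of why the delta T range matters, the sketch below converts a temperature difference from °C to °F and estimates the airflow needed to carry a given heat load at that delta T. The 10 kW load and the air properties are illustrative assumptions, not Hillstone figures.

```python
# Minimal sketch: delta T conversion and the airflow required for a given heat load.
# The 10 kW example load and the air properties are assumptions for illustration only.

AIR_DENSITY = 1.2   # kg/m^3, approximate for air at ~20 degC
AIR_CP = 1.005      # kJ/(kg*K), specific heat capacity of air


def delta_t_c_to_f(delta_t_c: float) -> float:
    """Convert a temperature *difference* from Celsius to Fahrenheit (scale factor only, no offset)."""
    return delta_t_c * 1.8


def airflow_m3_per_s(heat_load_kw: float, delta_t_c: float) -> float:
    """Airflow needed to remove heat_load_kw with the given air temperature rise."""
    return heat_load_kw / (AIR_DENSITY * AIR_CP * delta_t_c)


for dt in (6, 12, 20):
    print(f"delta T {dt} degC = {delta_t_c_to_f(dt):.1f} degF; "
          f"a 10 kW load needs ~{airflow_m3_per_s(10, dt):.2f} m^3/s of air")
```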

 

Hillstone Datacentre Services can supply the 6RM from its UK and Middle East rental depots.

 

The Hillstone load bank and IST package will allow IT managers to build SOPs and staff training programs for mission-critical equipment.

 

Contact Hillstone Datacentre Services

www.hillstone.co.uk

sales@hillstone.co.uk

Tel +44 161 763 3100

 

Tags:  Cooling  power 

 

Air-side free-cooling: direct or indirect systems and their impact on PUE

Posted By Robert Tozer, Operational Intelligence Limited, 01 October 2015

There is a general perception that direct air systems are more cost efficient and hence the default option. However, provided the design incorporates good air segregation, the recommended ASHRAE equipment environmental conditions and adiabatic cooling of the outdoor air, indirect air systems are considerably more efficient than direct air systems for most cities warmer than London. This is because many more hours of free cooling can be achieved with adiabatic cooling without affecting the indoor conditions. Furthermore, this solution makes zero refrigeration possible in most of the world. For cooler climates, direct systems are only marginally more efficient.

Often when data centre free cooling is discussed, people assume this means direct fresh air cooling. However, in climates warmer than London, indirect air systems are more efficient than direct air systems and can allow refrigeration to be eliminated and considerably reduce the electrical plant sizing requirements. Use of adiabatic evaporative cooling on the outdoor airstream allows free cooling to be achieved for many more hours in the year when there are hot, dry conditions. Further detail on the application for free cooling in data centres is available in our technical papers.
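As a minimal sketch of that comparison, the code below counts the hours when free cooling would be available to a direct system (outdoor dry bulb at or below the required supply temperature) versus an indirect system with adiabatic cooling (outdoor wet bulb plus a heat exchanger approach at or below it). The 24°C supply limit, the 3 K approach and the weather records are illustrative assumptions, not values from our papers.

```python
# Rough comparison of free-cooling hours for direct vs indirect adiabatic systems.
# The supply temperature limit, approach temperature and weather data are assumptions.

from dataclasses import dataclass


@dataclass
class HourlyWeather:
    dry_bulb_c: float
    wet_bulb_c: float


def free_cooling_hours(weather: list[HourlyWeather],
                       supply_limit_c: float = 24.0,
                       approach_k: float = 3.0) -> tuple[int, int]:
    """Return (direct free-cooling hours, indirect adiabatic free-cooling hours)."""
    direct = sum(1 for h in weather if h.dry_bulb_c <= supply_limit_c)
    indirect = sum(1 for h in weather if h.wet_bulb_c + approach_k <= supply_limit_c)
    return direct, indirect


# A handful of made-up hot, dry hours: the indirect adiabatic system qualifies for all
# three, the direct system for only one.
sample = [HourlyWeather(35.0, 21.0), HourlyWeather(30.0, 19.0), HourlyWeather(22.0, 16.0)]
print(free_cooling_hours(sample))   # (1, 3)
```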

 

Tags:  air management  Cooling  Date Centre  Location  London  pue 

 

Containment Systems: a disappointing outcome!

Posted By Robert Tozer, Operational Intelligence Limited, 17 September 2015

I have visited too many sites where I am assured that air management is well under control because an air containment system has been installed, only to find disappointing results in many cases. The objective of containment is to segregate the hot and cold air streams, to minimise recirculation (exhaust IT equipment hot air re-entering intake) and bypass (cooled CRAH air not making it to IT equipment and returning directly to the cooling units). There are two fundamental issues: firstly, even with a perfect containment system, the amount of air supplied needs to be controlled to satisfy all IT equipment requirements, and secondly, containment is only part of the segregation solution.

There is a very simple way to check whether enough cold air is being supplied. In the case of cold aisle containment, open the cold aisle door slightly and verify the air flow direction with a sheet of paper. If air is coming out of the cold aisle, there is a slight oversupply of air to the cold aisle (which is fine); however, in many cases hot air is entering the cold aisle, which means that there is insufficient air supplied, making recirculation inside the cold aisle inevitable. This can be due to the wrong type or number of floor tiles, or simply insufficient air volume from the CRAHs. There are some very simple and rapid methods and metrics to diagnose air management based on temperatures and air volumes, such as Af (Availability of Flow), which is the ratio of CRAH air volume to IT equipment air volume. Normally a slight oversupply of air (with a small amount of bypass) is better than undersupply (which causes recirculation). A large oversupply of air is an energy-saving opportunity, whereas a large undersupply will inevitably lead to considerable hot spots.

The next concern is the quality of segregation between the hot and cold air streams. The metric defined as Air Segregation Efficiency equals 100% (ideal) when there is zero bypass and zero recirculation. The important concept here is that we are trying to create a physical barrier between the cold and hot air streams, for which we use a containment system. Most containment systems (excluding butchers' curtains) are very hermetic. The issue is the other segregation areas which are not part of the containment system, such as the raised floor, where there can be unsealed cable cut-outs and floor grilles in the hot aisles, or the front of the rack, where there is a lack of blanking panels between IT equipment and gaps at the sides.
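To make the Af metric concrete, here is a minimal sketch that computes Af from the CRAH and IT equipment airflows and reports the minimum bypass or recirculation implied by the flow imbalance alone. The airflow figures are example values, and the "minimum implied" interpretation is a simple mass-balance assumption rather than a method from this post; real bypass and recirculation are usually higher and need temperature surveys to quantify.

```python
# Af (Availability of Flow) = CRAH supply airflow / IT equipment airflow.
# Af > 1 means oversupply (some CRAH air must bypass the IT equipment);
# Af < 1 means undersupply (IT equipment must recirculate some of its own exhaust).
# The percentages are lower bounds from the flow imbalance alone.

def availability_of_flow(crah_airflow_m3h: float, it_airflow_m3h: float) -> float:
    return crah_airflow_m3h / it_airflow_m3h


def describe(crah_airflow_m3h: float, it_airflow_m3h: float) -> str:
    af = availability_of_flow(crah_airflow_m3h, it_airflow_m3h)
    if af >= 1:
        min_bypass = (1 - 1 / af) * 100
        return f"Af = {af:.2f}: oversupply, at least {min_bypass:.0f}% of CRAH air bypasses the IT equipment"
    min_recirc = (1 - af) * 100
    return f"Af = {af:.2f}: undersupply, at least {min_recirc:.0f}% of IT intake air is recirculated exhaust"


print(describe(110_000, 100_000))   # slight oversupply, generally acceptable
print(describe(80_000, 100_000))    # undersupply, expect hot spots
```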

Whilst I am all for containment, the reasons why it is being installed need to be fully understood and monitored in order for its objectives to be met.

Tags:  Cooling  Datacentre  Date Centre  efficiency  energy-efficient computing  ICT 

 

Airedale International announces impending production launch of AireFlow™

Posted By Airedale International Air Conditioning, 07 November 2014
https://www.youtube.com/watch?v=J0PRuXclP5M

Airedale International announces impending production launch of AireFlow™ indirect fresh air free-cooling adiabatic air handling unit (AHU)


Leading British manufacturer of precision air conditioning systems, chillers, IT and comfort cooling solutions, Airedale International, will shortly be launching its production range of indirect fresh air free-cooling adiabatic air handling units (AHUs).

The AireFlow™ AHU is the result of close collaboration between the product development and engineering teams of Airedale and Barkell Ltd, the UK market-leading manufacturer of custom-built air handling units, which became part of the Airedale group earlier in 2014.

With production launch set for autumn 2014, the AireFlow™ offers huge free-cooling potential and, being an indirect system, eliminates the risk of contaminated air entering the data centre. The use of fresh air as the predominant cooling source significantly reduces operational costs for users. In contrast with direct air handling units, indirect cooling solutions also reduce the dependency on back-up mechanical cooling required to prevent contaminated ambient air permeating the data centre. This ensures there is no requirement for internal air conditioning units, therefore maximising IT footprint.

The AireFlow™ will be available in five footprints between 100 and 440kW, each with two separate case sizes depending on whether roof or wall mounted connection is required.

High efficiency, electronically commutated (EC) centrifugal backward-curved fans draw return air from the data centre through the heat exchanger. Cooler air from the outside ambient is drawn through a separate air path within the heat exchanger, also by EC plug fans. This temperature difference drives heat exchange, with the supply temperature being managed through modulation of the ambient air flow rate. EC fan technology delivers improved efficiency, full modulation and reduced power consumption compared with AC fan equivalents, particularly at reduced running speeds.

At any point in the year, as climatic conditions dictate, moisture is added to the warm outdoor air, which has the effect of lowering its dry bulb temperature. A typical UK peak summer day, for example, may have a dry bulb of 35°C with a wet bulb temperature of 21°C. By fully saturating the air, the dry bulb temperature can be reduced to 21°C. This lower air temperature is then used as a cooling medium and, based on London, UK ambient temperatures, could achieve ASHRAE recommended conditions using 100% free-cooling.
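A minimal sketch of that adiabatic cooling step, using the standard evaporative cooling relationship (outlet dry bulb = inlet dry bulb minus saturation effectiveness times the wet bulb depression). The effectiveness values are assumptions for illustration, not AireFlow™ performance figures.

```python
# Standard approximation for adiabatic (evaporative) cooling: air can be cooled
# from its dry bulb temperature towards its wet bulb temperature.

def adiabatic_outlet_temp(dry_bulb_c: float, wet_bulb_c: float,
                          effectiveness: float = 1.0) -> float:
    """Outlet dry bulb after evaporative cooling at the given saturation effectiveness (0-1)."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)


# The example from the text: a UK peak summer day at 35 degC dry bulb / 21 degC wet bulb.
print(adiabatic_outlet_temp(35.0, 21.0))        # 21.0 degC at full saturation
print(adiabatic_outlet_temp(35.0, 21.0, 0.9))   # 22.4 degC at 90% effectiveness
```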

In more challenging environments, an optional mechanical cooling module will ‘top-up’ the cooling capacity with a partial DX supplementary cooling section.
An optional integrated fresh air inlet unit provides added installation benefits and reduced footprint compared with other makes of air handling unit in addition to maintaining room pressure and air quality. Air flow and pressure monitoring also allows filter and fan performance to be managed.

Other features of the AireFlow™ include: G3 filtration (return air), G4 filtration (ambient air intake), F7 filtration (fresh air inlet), optional contaminant filtration - NO₂, SO₂, H₂S (fresh air inlet), N+1 redundancy on EC fans and a highly intuitive touchscreen colour user display.

A fully working demonstration unit will be available on Airedale’s Stand No 312 at this year’s DatacenterDynamics, ExCel London ICC, 19 & 20 November 2014.

The animation linked above highlights the different operational modes of the AireFlow™ at varying ambient temperatures within a data centre application.

Tags:  air management  Cooling  Date Centre  efficiency 

 

A guide to data centre metrics and standards for start-ups and SMBs

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014


Having made the choice to co-locate your organisation’s servers and infrastructure with a trusted data centre provider, you need to be able to understand the key metrics and standards used to evaluate and benchmark each data centre operator. With so many terms to get to grips with, we felt it necessary to address the most prevalent ones for data centres.

Green Grid has developed a series of metrics to encourage greater energy efficiency within the data centre. Here are the top seven which we think you’ll find most useful.

PUE: The most common metric used to show how efficiently a data centre uses its energy is Power Usage Effectiveness. Essentially, it is the ratio of the total energy used to run the facility as a whole (UPS systems, cooling systems, chillers, HVAC for the computer room, air handlers, lighting and so on) to the energy used by the IT equipment itself (servers, storage and network switches).

Ideally a data centre’s PUE would be 1.0, which would mean 100% of the energy is used by the computing devices in the data centre and none by overheads like lighting, cooling or power conversion. LDeX, for instance, operates below 1.35, which means that for every watt of energy used by the servers, less than 0.35 of a watt is used for cooling, lighting and power conversion.

CUE: Carbon Usage Effectiveness, also developed by The Green Grid, complements PUE and looks at the carbon emissions associated with operating a data centre. To calculate it, you take the total carbon emissions due to the energy consumption of the data centre and divide them by the energy consumption of the data centre’s servers and IT equipment. The metric is expressed in kilograms of carbon dioxide equivalent (kgCO2eq) per kilowatt-hour (kWh), and if a data centre is 100 percent powered by clean energy, it will have a CUE of zero. It provides a useful way to track a data centre’s sustainability and how operators improve their designs and processes over time. LDeX is run on 100% renewable electricity from Scottish Power.

WUE: Water Usage Effectiveness calculates how efficiently a data centre uses water within its facility. The WUE is the ratio of annual water usage to the energy consumed by the IT equipment and servers, and is expressed in litres per kilowatt-hour (L/kWh). Like CUE, the ideal value of WUE is zero, meaning no water was used to operate the data centre. LDeX does not operate chilled water cooling, meaning that we do not use water to run our data centre facility.
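As a rough illustration of how these three Green Grid metrics fit together, the sketch below computes PUE, CUE and WUE from annual totals. All the input figures are made-up example values, not LDeX measurements.

```python
# Example calculation of the three Green Grid metrics described above.
# Every input figure here is illustrative, not real facility data.

total_facility_energy_kwh = 1_350_000   # everything the site consumes in a year
it_energy_kwh = 1_000_000               # energy delivered to the IT equipment
total_co2_kg = 270_000                  # carbon emissions attributable to that energy
water_litres = 50_000                   # annual water consumption

pue = total_facility_energy_kwh / it_energy_kwh   # dimensionless, ideal value 1.0
cue = total_co2_kg / it_energy_kwh                # kgCO2eq per kWh of IT energy, ideal 0
wue = water_litres / it_energy_kwh                # litres per kWh of IT energy, ideal 0

print(f"PUE = {pue:.2f}, CUE = {cue:.2f} kgCO2eq/kWh, WUE = {wue:.2f} L/kWh")
```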

Power SLAs: A service level agreement defines the compensation offered in the unlikely event that the power provided by the data centre operator to a client as part of an agreement is lost and service is interrupted, affecting your company’s business. The last thing your business wants is for people to be unable to access your company’s website, so if power to your rack is cut for some reason, make sure you have measures in place.

Data centres refer to the Uptime Institute for guidance with regard to meeting standards for downtime. The difference between 99.671%, 99.741%, 99.982% and 99.995%, while seemingly nominal, could be significant depending on the application. Whilst no downtime is ideal, the tier system allows the durations below for services to be unavailable within one year (525,600 minutes); the arithmetic is shown in the sketch after this list:

  • Tier 1 (99.671%) status would allow 1729.224 minutes
  • Tier 2 (99.741%) status would allow 1361.304 minutes
  • Tier 3 (99.982%) status would allow 94.608 minutes
  • Tier 4 (99.995%) status would allow 26.28 minutes
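The downtime allowances above follow directly from the availability percentages; a minimal sketch of the arithmetic:

```python
# Convert an availability percentage into the allowed downtime per year.
MINUTES_PER_YEAR = 525_600

def allowed_downtime_minutes(availability_percent: float) -> float:
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR

for tier, availability in [("Tier 1", 99.671), ("Tier 2", 99.741),
                           ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    print(f"{tier} ({availability}%): {allowed_downtime_minutes(availability):.2f} minutes/year")
```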

LDeX has infrastructure resilience rated at Tier 3 status, offering customers peace of mind and protecting your business in the unlikely event of an outage. We operate closed control cooling systems in our facilities, enabling us to offer tight environmental parameter SLAs: a cold aisle temperature of 23°C ± 3°C and relative humidity (RH) of 35% – 60%.

Some data centres run fresh air cooling systems, which makes it hard to regulate RH, and quite often their RH parameters are 20% – 80% or beyond. High RH can lead to increased humidity in the data hall and has on occasion resulted in rust on server components, while low RH can produce static electricity within the data hall. Make sure you look into this and ask about it.

Understand the ISO standards that matter to your business

ISO 50001 – Energy management

Using energy efficiently helps organisations save money as well as helping to conserve resources and tackle climate change. ISO 50001 supports organisations in all sectors to use energy more efficiently, through the development of an energy management system (EnMS).

ISO 50001:2011 provides a framework of requirements for organizations to:

  • Develop a policy for more efficient use of energy
  • Fix targets and objectives to meet the policy
  • Use data to better understand and make decisions about energy use
  • Measure the results
  • Review how well the policy works, and
  • Continually improve energy management

ISO 27001 – Information Security Management

Keeping your company’s intellectual property secure should be a top priority for your business, and ensuring that your data centre provider offers this sort of resilience is imperative. The ISO 27000 family of standards helps organizations keep information assets secure.

Using this will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.

ISO/IEC 27001 is the best-known standard in the family providing requirements for an information security management system (ISMS). An ISMS is a systematic approach to managing sensitive company information so that it remains secure. It includes people, processes and IT systems by applying a risk management process.

It can help small, medium and large businesses in any sector keep information assets secure.

Like other ISO management system standards, certification to ISO/IEC 27001 is possible but not obligatory. Some organizations choose to implement the standard in order to benefit from the best practice it contains while others decide they also want to get certified to reassure customers and clients that its recommendations have been followed. ISO does not perform certification.

PCI DSS – Banks and businesses alike conduct a lot of transactions over the internet. With this in mind, the PCI Security Standards Council (SSC) developed a set of international security standards to ensure that service providers and merchants protect payments, whether made with a debit, credit or company purchasing card. As of 1st January 2015, PCI DSS 3.0 becomes mandatory. It is broken down into 12 requirements ranging from vulnerability assessments to encrypting data. Make sure to ask whether your data centre operator meets this standard.

With the increased stakeholder scrutiny placed on data centres, steps need to be put in place to make sure that the data centre operator you are looking at choosing is aligning its strategy not only with some of the metrics and standards mentioned here, but also with the other security, environmental and governmental regulations that have been brought in.

Working for a successful data centre and network services provider like LDeX has enabled me, as a relative newcomer to the data centre industry, to get to grips with these terms and to help clients understand where LDeX sits in comparison with our competitors.

Anne-Marie Lavelle, Group Marketing Executive at LDeX Group

Tags:  connectivity  Cooling  CUE  data centre  Datacentre  Date Centre  efficiency  ISO standards  operational best practice  PCI DSS  PUE  WUE 

 

Business Technology: Ecocooling/Safehosts London

Posted By Dave Cartwright, Safehosts, 15 October 2012

Google is considered the world champion of energy efficiency when it comes to data centres, with a reading of about 1.2 on the power usage effectiveness (PUE) index, where a very good data centre typically runs at around 1.4. However, this record has been beaten by a system designed for Safehosts in London which, thanks to a new and efficient IT cooling system for their 5MW facility, has shown evidence of a 1.06 PUE.

http://biztechreport.co.uk/2012/10/champions-of-it-cooling/

Do get in touch if you're interested in seeing how this was achieved first hand.

Tags:  Cooling  Datacentre  Ecocooling  Safehosts 

 

Own or Out-source, Build or Buy?

Posted By Steve Hone, 01 August 2012

These are questions I am often asked. I came across this article on the subject the other day and wanted to share it with you. It was posted originally by Nicholas Greene a year or so ago but is still very relevant today...

For one reason or another, your enterprise organization's got some big computing needs right over the horizon. Maybe you're setting up a new consumer payment or accounts management platform. Maybe you've just developed the next best online game, and you need servers to host it. Maybe you just need some additional storage. Whatever the reason, you're gonna need a Data Center. One question remains, though- should you outsource, or build?

Constructing a Data Center's no mean task, as you well know- it's a positively herculean undertaking which brings with it overwhelming costs and an exhaustive time commitment just to construct it- never mind maintaining it after the fact. If you're going to build a Data Facility, you'd better make damned sure your business can handle it. If you don't, you'll flounder- it's simply reality.

There are a lot of things you have to consider- cost and budget, employees, time constraints…you know the drill. Today, we're going to take a closer look at the first entry on the list-the reason for setting up a facility- and use it as a springboard in determining when you should outsource, and when the management of a facility should be placed solely in your organization's hands.

Ultimately, you have three choices- it comes down to whether or not you want to outsource to a multi-purpose data vendor, construct your own purpose-built facility, or hire a contractor to custom-tailor a facility for you. Before we even get started, I'm going to say right out the door that most businesses are better off going with the first or third option.

To determine what choice is right for you, there are a few things you should consider. What does your business do? What shall you be using the facility for, and how intensive will your needs be? How important are the tasks you require the facility for? Are they key components of your business strategy, or of one leg of your corporation?

What your business does can play a considerable role in determining whether or not you'll run your own servers. Is your organization solely based in the technology sector, or is your primary area of expertise in finance? Are you a hardware or software vendor, or do you primarily sell consumer products? How large is your IT department? How well-funded are they? All of these questions should be taken into account, as they can very well help determine right out the door if your business is even capable of managing its own facility without some significant restructuring, let alone building one.

Of course, that's only the first thing you need to consider- what your organization does by no means restricts it from constructing its own centers- Facebook's a prime example of this. Of course, in their case, they have their own reasons for building their own servers- they are, after all, the world's largest and best-known social network.

As I've already stated, what you need the facility for also plays a very important role. If you are, for example, a cloud-based SAAS vendor, it should go entirely without saying that you should be building and managing your own facility. As a general rule, if you expect to turn a significant profit from your facility, or the need met by the facility comprises a key aspect of your business model, you should look at running your own- or, at the very least, get yourself a custom-built data center.

Bandwidth goes hand in hand with purpose. How many gigabytes of data is your product or service going to use? How will the costs of development and management stack up against the fees you'd be paying if you outsourced? Will you turn enough of a profit to merit setting up your own facility?

Do you foresee yourself needing to expand your facility in the future? How will you do it? Scalability's an important concern, and if your business can't afford to expand- or virtualize- outsourcing might be the answer. Size matters, folks, and the smaller your business, the more likely you are to need to contract out, rather than run your own center.

Finally, there's your staff. Is it more economical to train and hire a whole team of new employees, or simply contract out to an organization to manage things for you?

Every business is different, and not all organizations are built equal. What I've listed here, all of the information; it's little more than a guideline. Ultimately, the choice of whether or not to outsource rests entirely with you.

Tags:  building  central  comms  cooling  Date Centre  efficiency  Location  planning  PUE 

 

Why Google’s Data Centre is Not Like Yours

Posted By Steve Hone, 01 August 2012

Does your company operate a data centre? It's likely you do, since any organisation that is large enough needs to have its own servers. Yet while you probably purchase servers from Dell or HP and throw them into a room with a whole bunch of temperature control units, Google decided long ago to rethink the concept of the data centre.

They started this process with the servers themselves. They purchase all of their own components and build the servers from scratch. Why? Because the company feels that they can make a better server unit that fits their needs. Instead of a typical server, you get something that looks more like a homemade PC project.

There are tens of thousands of these custom-built servers located around the world. When you do a search or use any Google product, the company takes your IP address and routes you to their closest data centre in order to provide the highest speed (lowest latency) possible. The company has realised the correlation between speed and customer satisfaction and has therefore built enough data centres to accommodate demand.

The data centres are also configured differently. In order to optimise space and cooling needs, the company packs servers into shipping units that are then individually cooled. Google's experts have determined that this is the most efficient and economical approach.

Take a look at this tour of such a facility. http://youtu.be/zRwPSFpLX8I

So why are Google's data centres so special? The answer is efficiency. The company uses a ton of power to keep these servers running and doing so at an optimal temperature. The company tries to locate these facilities near hydroelectric power because of its lower cost. It also explains why Google has such an interest in renewable energy and last year entered into a twenty year agreement to buy wind power. The company knows that its power needs are going to increase over time, and this is a way to hedge the fluctuations in energy prices over the years.

Tags:  building  central  comms  cooling  data  Date Centre  efficiency  Location  PUE 

 

Nothing Worth Having Comes Easy !

Posted By Steve Hone, 01 August 2012

So you're thinking about building a data centre, are you? Well, how difficult can it be? Before you jump in feet first, here are a few things to consider...

You don't have to scratch too far under the surface to realise there's a lot that goes into running a data centre, and the logistics can, quite frankly, be overwhelming. Paying attention to the metrics, to all the little details, can make or break a facility. If an operator isn't careful, they can easily find themselves left in the dust by their competitors. Not surprisingly, there's also a lot that goes into establishing a data centre. If you're thinking of having your business start one up, you need to ask yourselves a few questions first. If you don't, the results could be disastrous.

The Reason:

Why are you setting up a data centre? What do you have to gain from it? If you do set it up, what will you use it for? What sort of services will it provide? Is it going to primarily be used for consumer information, or for business data? What sort of a profit do you stand to make? You need to consider very, very carefully whether or not setting up a data centre is the right choice for you before you do it- and what purpose the facility will serve as it may well prove to be more cost effective to outsource to a Colocation, Hosting or MS/Cloud provider instead.

The Location:

Aside from the purpose of your facility, its location is probably the most important consideration. Will it be a domestic centre, or an international one? What sites are available? What sites best suit your needs? The planned purpose of the facility will have a marked impact on where you situate it.

Cost:

Another question you need to answer before setting up a facility is whether or not you've got the budget to do so. How much will your data centre cost? What sort of upkeep will be required to keep things running? What about power demands? You're going to need to budget everything very, very carefully before you set up your new facility- plan out how much money it'll make you, and compare that against how much it's slated to cost you.

While there are certain steps you can take to reduce the cost of running the facility- such as greener energy and hardware alternatives- you still need to be certain the new centre won't break the bank.

Employees and staffing:

What sort of outside help will you be bringing in to help set up the data centre? Are your employees properly trained to manage such a facility? Can your IT department handle the Cloud? Do you have the staff base to manage things now, or are you going to need to bring in new hires in order to handle the workload?

Space:

Space is another of your concerns- before you go about purchasing your hardware, figure out how much space you need- and then how much space you've got. If you've done your homework and found a good location, those two variables should be pretty much the same. Consider the options you have for reducing how much space your server takes up- new, compact modular server designs are just one of these options.

The Hardware and Equipment:

You're going to need to figure out where you're getting your hardware from, and what sort of hardware you'll need. Be sure to buy new- skimping on the servers is one of the worst possible things you can do. More than anything else, you'll need your hardware to be both durable and reliable. Go for those qualities first, and consider innovative solutions as secondary concerns.

Hardware's not the only thing you'll need to worry about, either. You'll need equipment aside from the servers- server racks and office supplies are one example, and if you're constructing the facility from the ground up, you'll need to consider the supplies you'll need to build it, as well.

Cooling:

How are you going to keep your servers running cool? While the traditional approach works just fine, it has a tendency to rack up a downright terrifying power bill- consider more innovative alternatives.

Network Connectivity:

This should really go without saying. What sort of network provider are you going to be making a deal with? How much bandwidth is your facility going to need? How much will it cost? Can the physical infrastructure of your facility and the surrounding area keep up? You need to make sure you've got the right provider and hardware for your network, or your clients are going to be experiencing a great deal of latency- which could ultimately cripple your centre.

The Software:

You're also going to need to iron out the details of your software. What sort of management platform are you going to use? Will you be dealing with a vendor, or developing your own in-house platform? If you're choosing a vendor, make sure you choose a good one-

Power:

Finally, how much energy will your data centre use? What will the power efficiency of the facility be? How can you improve on this efficiency? How much power should you provision for the facility? Make sure you've got this ironed out in advance- it'll cost you a very, very large sum if you end up having to reprovision.

 

 

 

 

Many thanks to Nicholas Greene

Tags:  budget  building  central  comms  cooling  data  Date Centre  Location  planning  power 

 