DCA Member's Blog
Please post data centre industry comments, experiences, ideas, questions, queries and goings-on here. To stay informed with updates, ensure you are subscribed by clicking the "subscribe" button.

 


Containment Systems: a disappointing outcome!

Posted By Robert Tozer, Operational Intelligence Limited, 17 September 2015

I have visited too many sites where I am assured that air management is well under control because an air containment system has been installed, only to find disappointing results in many cases. The objective of containment is to segregate the hot and cold air streams, to minimise recirculation (exhaust IT equipment hot air re-entering intake) and bypass (cooled CRAH air not making it to IT equipment and returning directly to the cooling units). There are two fundamental issues: firstly, even with a perfect containment system, the amount of air supplied needs to be controlled to satisfy all IT equipment requirements, and secondly, containment is only part of the segregation solution.

There is a very simple way to check if there is enough cold air being supplied. In the case of cold aisle containment, open the cold aisle door slightly and verify the air flow direction with a sheet of paper. If air is coming out of the cold aisle, there is a slight oversupply of air to the cold aisle (which is fine); however, in many cases hot air is entering the cold aisle, which means that there is insufficient air supplied, making recirculation inside the cold aisle inevitable. This can be due to the incorrect type and number of floor tiles or simply insufficient air volume from the CRAHs. There are some very simple and rapid methods and metrics to diagnose air management, based on temperatures and airflows, such as Af (Availability of Flow), which is the ratio of CRAH air volume to IT equipment air volume. Normally a slight oversupply of air (with a small amount of bypass) is better than undersupply (which causes recirculation). A large oversupply of air is an energy opportunity, whereas a large undersupply will inevitably lead to considerable hot spots.
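
As a rough illustration of the Af check (a minimal sketch with invented airflow figures, not measurements from a real site):

```python
def availability_of_flow(crah_airflow: float, it_airflow: float) -> float:
    """Af = air volume supplied by the CRAHs / air volume demanded by IT equipment."""
    return crah_airflow / it_airflow

# Hypothetical site: CRAHs deliver 52,000 m3/h, the IT estate demands 48,000 m3/h.
af = availability_of_flow(crah_airflow=52_000, it_airflow=48_000)
if af >= 1.0:
    print(f"Af = {af:.2f}: slight oversupply, some bypass but no forced recirculation")
else:
    print(f"Af = {af:.2f}: undersupply, recirculation inside the cold aisle is inevitable")
```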

The next concern is the quality of segregation between hot and cold air streams. The metric here is Air Segregation Efficiency, which reaches 100% (the ideal) when there is zero bypass and zero recirculation. The important concept is that we are trying to create a physical barrier between the cold and hot air streams, for which we use a containment system. Most containment systems (excluding butcher's curtains) are very hermetic. The issue is the other segregation areas which are not part of the containment system, such as the raised floor, where you can have unsealed cable cut-outs and floor grilles in the hot aisles, or the front of the rack, where blanking panels between IT equipment may be missing and there may be gaps at the sides.
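
To make the idea concrete, here is one illustrative way to fold the two loss fractions into a single figure; both the formulation and the fractions are hypothetical, not the author's published definition:

```python
def air_segregation_efficiency(bypass: float, recirc: float) -> float:
    """Illustrative: returns 100% only when both bypass and recirculation are zero."""
    return 100.0 * (1.0 - bypass) * (1.0 - recirc)

# Hypothetical survey: 10% of CRAH air bypasses the IT equipment,
# and 5% of IT exhaust recirculates back to the intakes.
print(f"Segregation efficiency = {air_segregation_efficiency(0.10, 0.05):.1f}%")
```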

Whilst I am all for containment, the reasons why it is being installed need to be fully understood and monitored in order for its objectives to be met.

Tags:  Cooling  Datacentre  Data Centre  efficiency  energy-efficient computing  ICT 


EURECA website now live!

Posted By Kelly Edmond, 26 May 2015

The EURECA project is part of the European Commission's Horizon 2020 research and innovation programme dealing with energy efficiency and market uptake, and is specifically related to data centres and the public sector.

The project is focussed on providing coordination and support for the uptake of high energy performance data centre-related products and services within Europe’s Public Sector organisations. The aim is to provide Public Sector procurement teams with access to an online tool which incorporates the industry’s best practices, performance indicators and metrics.

The project consortium is comprised of experts from the Data Centre Alliance, CBRE/Norland, Telecity, Carbon3IT, Green IT Amsterdam, Certios and Maki Consulting, and is led by the University of East London, which acts as the project Coordinator. The total budget for the project is €1.5M.

The concept of the project is to make the complex world of data centre energy efficiency accessible to non-experts and easier to navigate. The EURECA tool captures data about the format and setup of the data centre, along with which best practices have been deployed. This then helps identify opportunities for energy savings and provides a broad overview of the procurement actions needed to improve the environmental performance of the facility in question.

The website is now live and provides more details on the mission and the plan of work. Crucially, the website offers options to get involved and remain informed on the project’s progress. The website can be found at www.EURECA-project.eu.


Tags:  Datacentre  efficiency  energy-efficient computing  EU Commission  Horizon 2020  public sector 


Would you buy a pair of shoes that were made in a factory or from a ‘cobbler’?

Posted By Paul Russell, Vertiv, 13 February 2015
https://www.youtube.com/watch?v=KUGKndg87m8

  Have You Ever Considered a Modular Data Center?

 

At a recent internal meeting I posed the question: would you buy a pair of shoes that were made in a factory or from a ‘cobbler’ (i.e. an independent craftsman)? Needless to say, this caused some hilarity amongst my colleagues, but the question has been applied to all forms of consumer goods ever since the 1850s, from baked beans to cars. The answer is: it depends, and it could go either way based on the need for customization or a preference for hand-made goods. When goods are made by a group of experts carrying out repetitive tasks in a controlled environment, they are usually less expensive to produce and the product quality is consistent. Hand-crafted goods, your shoes for example, might have slight variations and may be more expensive because of the time and personal attention needed to hand-craft them. This analogy reflects the choices CIOs have when deciding whether to start from scratch in building a data center or to choose a modular solution.

In both cases learning, training and experience are always necessary.

Advantages of choosing modular

In fact, data centres have been made in a modular format for a number of years, from a number of vendors, as there are many attractive advantages. But let’s define modular first: “A manufacturing construction method to enable quick assembly of a complete structure, complete with all its services, in sections, within a controlled environment, that is then relocated to its permanent location”.

Normally the fabric for technical buildings is steel, but many fabrics can be used.

So why modular?

1. Build Speed – maybe this is the most attractive feature for clients. The modules can be designed, fabricated, assembled, fitted out and wired (both electrically and with communications cabling) while the foundations are being constructed on the client’s site. Air conditioning and electrical systems are all included and wired. Modules can even be fitted with toilets and just bolted together on site. Think about it! – A project build that is no longer affected by weather or dependent on gangs of tradespeople all working together in a small space to achieve the end goal.

2. Quality – No longer is the quality of a project dependent on gangs of people who have never worked together or co-ordinated their functions before.

3. Fixed Price – Once the project is defined and agreed upon, the price can be agreed. The components are known, the build time is defined and transport costs are calculated, so the client has a fixed price: a big advantage over a traditional build, where many factors can affect the timeline and therefore the final cost.

In fact there is another cost advantage with a modular build if you are unsure of the size or capacity of your prospective data centre. With the ‘add-on’ approach, and the appropriate design, you can add modules to an existing build as your demand grows over time. So it is common for each module to be equipped with electrical distribution that “plugs in” to the main system, and with independent standalone cooling systems, all with redundancy built in.

Each module can be equipped with the latest security features so that each module can be managed or staffed by independent organisations. Provided the cooling and electrical systems are identical, the site maintenance provider can service and “fault find” an individual module with ease, outside of the data space if that is the design.

Of course, vendors such as Emerson know their products and can incorporate all the latest technological techniques into a build, and with one of the world’s largest teams of technical personnel, any client’s special requirements can be designed and delivered. So telephone exchanges, substations, combined UPS and generator enclosures, solar power transfer stations, cable distribution hubs, temporary data processing modules and even portable buildings are all possible.

Tier structures and PUE (or other metrics) can easily be designed into a modular system, and it is much easier to modify or change parameters to amend PUE in a small module than in a large hall. Using the modular build approach, almost anything is possible. So if you need a generator within the build, or a DC supply for your solar farm, or a telephone exchange, and then want to combine this with a concrete render or a terracotta tile exterior, a flat roof, tiled roof or metal roof, any combination is achievable. Even workshops and offices can be incorporated, complete with chairs, desks and coffee machines!

But maybe the biggest benefit of a modular construction is the fact that it can be built at the factory, then tested and signed off by the client before the foundations are even completed! Remember that this is with all the racks wired (in the case of a data centre) and all the cooling or fresh air units working. Offsite testing must be the biggest selling point: you see what you get, and prove it, before it leaves the factory.

However, it should be noted that modular cannot fit every situation. The main constraints are transport costs, together with transport size and weight limits. The cheapest form of transport (in the EU) is the standard 24-tonne three-axle trailer forming the 40-tonne articulated truck. Within the EU, Directive 96/53/EC provides the relevant height and width constraints on the EU road network. As soon as you design a modular section that exceeds these constraints, the cost increases, as special trucks are required, with special teams to supervise movement. It is also important to obtain the best value for transport money by designing the best weight and size ratio into a truck load. So if an area within the modular build is empty, the best design might be to flat-pack the walls, roof and floors, stack them on a truck and assemble them on site, rather than assembling them into four walls and a floor as a rigid construction.
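
As a sketch of that design check (the limits below are the figures commonly cited from Directive 96/53/EC for standard, unescorted road transport; verify against the current directive text before relying on them):

```python
# Commonly cited EU road-transport maxima (Directive 96/53/EC); verify before use.
MAX_WIDTH_M = 2.55
MAX_HEIGHT_M = 4.0
MAX_WEIGHT_T = 40.0

def needs_special_transport(width_m: float, height_m: float, weight_t: float) -> bool:
    """True if a module exceeds the standard limits and needs special trucks and escorts."""
    return width_m > MAX_WIDTH_M or height_m > MAX_HEIGHT_M or weight_t > MAX_WEIGHT_T

print(needs_special_transport(2.5, 3.9, 38.0))   # False: fits a standard articulated truck
print(needs_special_transport(3.2, 3.9, 38.0))   # True: over-width, so costs rise sharply
```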

Remember the shoes? Well just think – you try them on in the shop before taking them home! Just like an Emerson modular construction technical building!

Learn more about modular data centers by watching the video of the T-Systems project.


Tags:  Datacentre  Data Centre  efficiency  Modular  planning  power 


Airedale International announces impending production launch of AireFlow™

Posted By Airedale International Air Conditioning, 07 November 2014
https://www.youtube.com/watch?v=J0PRuXclP5M

Airedale International announces impending production launch of AireFlow™ indirect fresh air free-cooling adiabatic air handling unit (AHU)


Leading British manufacturer of precision air conditioning systems, chillers, IT and comfort cooling solutions, Airedale International, will shortly be launching its production range of indirect fresh air free-cooling adiabatic air handling units (AHUs).

The AireFlow™ AHU is the result of close collaboration between the product development and engineering teams of Airedale and Barkell Ltd, the UK market-leading manufacturer of custom-built air handling units, which became part of the Airedale group earlier in 2014.

With production launch set for autumn 2014, the AireFlow™ offers huge free-cooling potential and, being an indirect system, eliminates the risk of contaminated air entering the data centre. The use of fresh air as the predominant cooling source significantly reduces operational costs for users. In contrast with direct air handling units, indirect cooling solutions also reduce the dependency on back-up mechanical cooling required to prevent contaminated ambient air permeating the data centre. This ensures there is no requirement for internal air conditioning units, therefore maximising IT footprint.

The AireFlow™ will be available in five footprints between 100 and 440kW, each with two separate case sizes depending on whether roof or wall mounted connection is required.

High efficiency, electronically commutated (EC) centrifugal backward-curved fans draw return air from the data centre through the heat exchanger. Cooler air from the outside ambient is drawn through a separate air path within the heat exchanger, also by EC plug fans. This temperature difference drives heat exchange, with the supply temperature being managed through modulation of the ambient air flow rate. EC fan technology delivers improved efficiency, full modulation and reduced power consumption compared with AC fan equivalents, particularly at reduced running speeds.

At any point in the year, as climatic conditions dictate, moisture is added to the warm outdoor air, which has the effect of lowering its dry bulb temperature. A typical UK peak summer day, for example, may have a dry bulb temperature of 35°C with a wet bulb temperature of 21°C. By fully saturating the air, the dry bulb temperature can be reduced to 21°C. This lower air temperature is then used as a cooling medium and, based on London, UK ambient temperatures, could achieve ASHRAE recommended conditions using 100% free-cooling.
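
A back-of-envelope way to reproduce those figures is the standard evaporative-cooler effectiveness relation, where full saturation corresponds to an effectiveness of 1.0; the 90% value below is an assumed figure for illustration only:

```python
def adiabatic_supply_temp(dry_bulb_c: float, wet_bulb_c: float,
                          effectiveness: float = 1.0) -> float:
    """T_supply = T_db - effectiveness * (T_db - T_wb); 1.0 means full saturation."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

print(adiabatic_supply_temp(35.0, 21.0))        # 21.0 C, the fully saturated case above
print(adiabatic_supply_temp(35.0, 21.0, 0.9))   # 22.4 C with a 90%-effective process
```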

In more challenging environments, an optional mechanical cooling module will ‘top-up’ the cooling capacity with a partial DX supplementary cooling section.
An optional integrated fresh air inlet unit provides added installation benefits and reduced footprint compared with other makes of air handling unit in addition to maintaining room pressure and air quality. Air flow and pressure monitoring also allows filter and fan performance to be managed.

Other features of the AireFlow™ include: G3 filtration (return air), G4 filtration (ambient air intake), F7 filtration (fresh air inlet), optional contaminant filtration - NO₂, SO₂, H₂S (fresh air inlet), N+1 redundancy on EC fans and a highly intuitive touchscreen colour user display.

A fully working demonstration unit will be available on Airedale’s Stand No 312 at this year’s DatacenterDynamics, ExCel London ICC, 19 & 20 November 2014.

Watch the attached video for a cutting-edge animation that highlights the different operational modes of the AireFlow™ at varying ambient temperatures within a data centre application.

Tags:  air management  Cooling  Data Centre  efficiency 


A guide to data centre metrics and standards for start-ups and SMBs

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

A guide to data centre metrics and standards for start-ups and SMBs

Having made the choice to co-locate your organisation’s servers and infrastructure with a trusted data centre provider, you need to understand the key metrics and standards used to evaluate and benchmark each data centre operator. With so many terms to get to grips with, we felt it necessary to address the most prevalent ones for data centres.

The Green Grid has developed a series of metrics to encourage greater energy efficiency within the data centre. Here are the metrics and standards we think you’ll find most useful.

PUE: The most common metric used to show how efficiently data centres are using their energy would have to be Power Usage Effectiveness. Essentially, it’s the ratio of the total energy used to run the overall data centre to the energy used by the IT equipment alone. The total incorporates things like UPS systems, cooling systems, chillers, HVAC for the computer room, air handlers and data centre lighting, on top of the energy consumed by the servers, storage and network switches themselves.

Ideally a data centre’s PUE would be 1.0, which would mean 100% of the energy is used by the computing devices in the data centre – and none on things like lighting or cooling. LDeX, for instance, operates below 1.35, which means that for every watt of energy used by the servers, less than 0.35 of a watt is being used for data centre cooling, lighting and power conversion.
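
The arithmetic behind any quoted PUE figure is straightforward; a minimal sketch with hypothetical meter readings:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy; 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,350 MWh drawn by the whole facility, 1,000 MWh by the IT load.
ratio = pue(total_facility_kwh=1_350_000, it_equipment_kwh=1_000_000)
print(f"PUE = {ratio:.2f} -> {ratio - 1.0:.2f} W of overhead per watt of IT load")
```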

CUE: Carbon Usage Effectiveness, also developed by The Green Grid, complements PUE and looks at the carbon emissions associated with operating a data centre. To calculate it, take the total carbon emissions due to the energy consumption of the data centre and divide them by the energy consumption of the data centre’s servers and IT equipment. The metric is expressed in kilograms of carbon dioxide equivalent (kgCO2eq) per kilowatt-hour (kWh), and if a data centre is 100% powered by clean energy, it will have a CUE of zero. It provides a great way of determining how to improve a data centre’s sustainability and of tracking how operators improve designs and processes over time. LDeX is run on 100% renewable electricity from Scottish Power.

WUE: Water Usage Effectiveness calculates how efficiently a data centre uses water within its facilities. WUE is the ratio of annual water usage to the energy consumed by the IT equipment and servers, and is expressed in litres per kilowatt-hour (L/kWh). Like CUE, the ideal value of WUE is zero: no water used to operate the data centre. LDeX does not operate chilled-water cooling, meaning that we do not use water to run our data centre facility.
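
CUE and WUE have the same shape as PUE; only the numerator changes. A short sketch with hypothetical figures:

```python
def cue(total_co2_kg: float, it_equipment_kwh: float) -> float:
    """CUE in kgCO2eq per kWh of IT energy; zero for a 100% clean-energy site."""
    return total_co2_kg / it_equipment_kwh

def wue(annual_water_litres: float, it_equipment_kwh: float) -> float:
    """WUE in litres per kWh of IT energy; zero if no water is consumed."""
    return annual_water_litres / it_equipment_kwh

print(cue(250_000, 1_000_000))   # 0.25 kgCO2eq/kWh
print(wue(0, 1_000_000))         # 0.0 L/kWh, e.g. with no chilled-water cooling
```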

Power SLAs: a Service Level Agreement sets out the compensation offered in the unlikely event that power provided by the data centre operator to a client is lost and service is interrupted, affecting your company’s business. The last thing your business wants is people being unable to access your company’s website, so if power to your rack gets cut for some reason, make sure you have measures in place.

Data centres refer to the Uptime Institute for guidance on standards for downtime. The difference between 99.671%, 99.741%, 99.982% and 99.995%, while seemingly nominal, could be significant depending on the application. Whilst no downtime is ideal, the tier system allows the durations below for services to be unavailable within one year (525,600 minutes):

  • Tier 1 (99.671%) status would allow 1729.224 minutes
  • Tier 2 (99.741%) status would allow 1361.304 minutes
  • Tier 3 (99.982%) status would allow 94.608 minutes
  • Tier 4 (99.995%) status would allow 26.28 minutes
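
These allowances follow directly from the availability percentages and a 525,600-minute year, as this short sketch reproduces:

```python
MINUTES_PER_YEAR = 525_600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Yearly downtime budget implied by an availability percentage."""
    return (1.0 - availability_pct / 100.0) * MINUTES_PER_YEAR

for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier} ({pct}%): {allowed_downtime_minutes(pct):.3f} minutes")
```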

LDeX has infrastructure resilience rated at Tier 3 status, offering customers peace of mind in the unlikely event of an outage and thereby protecting your business. We operate closed-control cooling systems in our facilities, enabling us to offer tight environmental parameter SLAs: a cold aisle temperature of 23°C ± 3°C and relative humidity (RH) of 35% – 60%.

Some data centres run fresh-air cooling systems, which make it hard to regulate RH; quite often their RH parameters are 20% – 80% or beyond. High humidity in the data hall has on occasion resulted in rust on server components, while low RH can produce static electricity. Make sure you look into this and ask about it.

Understand the ISO standards that matter to your business

ISO 50001 – Energy management

Using energy efficiently helps organisations save money as well as helping to conserve resources and tackle climate change. ISO 50001 supports organisations in all sectors to use energy more efficiently, through the development of an energy management system (EnMS).

ISO 50001:2011 provides a framework of requirements for organizations to:

  • Develop a policy for more efficient use of energy
  • Fix targets and objectives to meet the policy
  • Use data to better understand and make decisions about energy use
  • Measure the results
  • Review how well the policy works, and
  • Continually improve energy management

ISO 27001 – Information Security Management

Keeping your company’s intellectual property secure should be a top priority for your business, and ensuring that your data centre provider offers this sort of resilience is imperative. The ISO 27000 family of standards helps organizations keep information assets secure.

Using this will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.

ISO/IEC 27001 is the best-known standard in the family providing requirements for an information security management system (ISMS). An ISMS is a systematic approach to managing sensitive company information so that it remains secure. It includes people, processes and IT systems by applying a risk management process.

It can help small, medium and large businesses in any sector keep information assets secure.

Like other ISO management system standards, certification to ISO/IEC 27001 is possible but not obligatory. Some organizations choose to implement the standard in order to benefit from the best practice it contains while others decide they also want to get certified to reassure customers and clients that its recommendations have been followed. ISO does not perform certification.

PCI DSS – Banks and businesses alike conduct a lot of transactions over the internet. With this in mind, the PCI Security Standards Council (SSC) developed a set of international security standards to ensure that service providers and merchants protect payments, whether from a debit, credit or company purchasing card. As of 1st January 2015, PCI DSS 3.0 becomes mandatory. It is broken down into 12 requirements ranging from vulnerability assessments to encrypting data. Make sure to ask if your data centre operator meets this standard.

With the increased stakeholder scrutiny placed on data centres, steps need to be put in place to make sure that the data centre operator you are choosing is aligning its strategy not only to the metrics and standards mentioned here, but also to the other security, environmental and governmental regulations that have been brought in.

Working for a successful data centre and network services provider like LDeX has enabled me, as a relative newbie to the data centre industry, to get to grips with these terms and to help clients understand where LDeX sits in comparison with our competitors.

Anne-Marie Lavelle, Group Marketing Executive at LDeX Group

Tags:  connectivity  Cooling  CUE  data centre  Datacentre  efficiency  ISO standards  operational best practice  PCI DSS  PUE  WUE 


Energy Efficiency & Sustainability Workshop Reminder

Posted By Kelly Edmond, 17 February 2014

 

 

A reminder to all members that the DCA is holding an Energy Efficiency & Sustainability Workshop at the University of East London on Tuesday 25th February.

 

The purpose of this workshop is to discuss and review the following ISO standards and EU Code of Conduct/EN updates. All members are welcome to contribute.
 

If you are interested in attending, please go to our Event Calendar for more information and to RSVP. Please feel free to contact me if you have any questions. We hope to see you there!


Kelly Edmond

DCA Membership Executive

kellye@datacentrealliance.org 

Tags:  efficiency  London  university of east london 


What energy efficiency metrics to use?

Posted By Barry Paton, 05 December 2013

Dear all,

I'm trying to answer that question as part of my master's degree with the Open University.

The aim of the research project is to design a state-of-the-art monitoring dashboard that will help to improve energy efficiency in data centres.

I've created a survey to gather information on metrics from industry experts.

The survey can be accessed here: http://eSurv.org?s=OCHJMM_3e884f22

Please take a few minutes to share your knowledge and experience.

All participants will receive a copy of the final thesis on request. Be assured that no personal data will be reported in the results.

Thanks and Best Regards,

Barry

Tags:  efficiency  energy-efficient computing  PUE 


Own or Out-source, Build or Buy?

Posted By Steve Hone, 01 August 2012

These are questions I am often asked. I came across this article on the subject the other day and wanted to share it with you. It was posted originally by Nicholas Greene a year or so ago, but it is still very relevant today……

For one reason or another, your enterprise organization's got some big computing needs right over the horizon. Maybe you're setting up a new consumer payment or accounts management platform. Maybe you've just developed the next best online game, and you need servers to host it. Maybe you just need some additional storage. Whatever the reason, you're gonna need a Data Center. One question remains, though- should you outsource, or build?

Constructing a Data Center's no mean task, as you well know- it's a positively herculean undertaking which brings with it overwhelming costs and an exhaustive time commitment just to construct it- never mind maintaining it after the fact. If you're going to build a Data Facility, you'd better make damned sure your business can handle it. If you don't, you'll flounder- it's simply reality.

There are a lot of things you have to consider- cost and budget, employees, time constraints… you know the drill. Today, we're going to take a closer look at the first entry on the list- the reason for setting up a facility- and use it as a springboard in determining when you should outsource, and when the management of a facility should be placed solely in your organization's hands.

Ultimately, you have three choices- it comes down to whether or not you want to outsource to a multi-purpose data vendor, construct your own purpose-built facility, or hire a contractor to custom-tailor a facility for you. Before we even get started, I'm going to say right out the door that most businesses are better off going with the first or third option.

To determine what choice is right for you, there are a few things you should consider. What does your business do? What shall you be using the facility for, and how intensive will your needs be? How important are the tasks you require the facility for? Are they key components of your business strategy, or of one leg of your corporation?

What your business does can play a considerable role in determining whether or not you'll run your own servers. Is your organization solely based in the technology sector, or is your primary area of expertise in finance? Are you a hardware or software vendor, or do you primarily sell consumer products? How large is your IT department? How well-funded are they? All of these questions should be taken into account, as they can very well help determine right out the door if your business is even capable of managing its own facility without some significant restructuring, let alone building one.

Of course, that's only the first thing you need to consider- what your organization does by no means restricts it from constructing its own centers- Facebook's a prime example of this. Of course, in their case, they have their own reasons for building their own servers- they are, after all, the world's largest and best-known social network.

As I've already stated, what you need the facility for also plays a very important role. If you are, for example, a cloud-based SaaS vendor, it should go entirely without saying that you should be building and managing your own facility. As a general rule, if you expect to turn a significant profit from your facility, or the need met by the facility comprises a key aspect of your business model, you should look at running your own- or, at the very least, get yourself a custom-built data center.

Bandwidth goes hand in hand with purpose. How many gigabytes of data is your product or service going to use? How will the costs of development and management stack up against the fees you'd be paying if you outsourced? Will you turn enough of a profit to merit setting up your own facility?

Do you foresee yourself needing to expand your facility in the future? How will you do it? Scalability's an important concern, and if your business can't afford to expand- or virtualize- outsourcing might be the answer. Size matters, folks, and the smaller your business, the more likely you are to need to contract out, rather than run your own center.

Finally, there's your staff. Is it more economical to train and hire a whole team of new employees, or simply contract out to an organization to manage things for you?

Every business is different, and not all organizations are built equal. What I've listed here, all of the information; it's little more than a guideline. Ultimately, the choice of whether or not to outsource rests entirely with you.

Tags:  building  central  comms  cooling  Data Centre  efficiency  Location  planning  PUE 


Why Google’s Data Centre is Not Like Yours

Posted By Steve Hone, 01 August 2012

Does your company operate a data centre? It's likely you do, since any organisation that is large enough needs to have its own servers. Yet while you probably purchase from Dell or HP and throw them into a room with a whole bunch of temperature control units, Google decided long ago to rethink the concept of the data center.

They started this process with the servers themselves. They purchase all of their own components and build the servers from scratch. Why? Because the company feels that they can make a better server unit that fits their needs. Instead of a typical server, you get something that looks more like a homemade PC project.

There are tens of thousands of these custom-built servers located around the world. When you do a search or use any Google product, the company takes your IP address and routes you to its closest data centre in order to provide the highest speed (lowest latency) possible. The company has realised the correlation between speed and customer satisfaction and has built enough data centres to accommodate demand.

The data centres are also configured differently. In order to optimise space and cooling needs, the company packs servers into shipping units that are then individually cooled. Google's experts have determined that this is the best way to efficiently economise.

Take a look at this tour of such a facility. http://youtu.be/zRwPSFpLX8I

So why are Google's data centres so special? The answer is efficiency. The company uses a ton of power to keep these servers running at an optimal temperature. It tries to locate these facilities near hydroelectric power because of its lower cost. That also explains why Google has such an interest in renewable energy and last year entered into a twenty-year agreement to buy wind power. The company knows that its power needs are going to increase over time, and this is a way to hedge the fluctuations in energy prices over the years.

Tags:  building  central  comms  cooling  data  Data Centre  efficiency  Location  PUE 


10 Data Centre Efficiency Roadblocks

Posted By Steve Hone, 01 August 2012

As you all well know, efficiency's the word of the year. All the big businesses in the data market have been butting heads, each trying to come up with the greenest, most efficient, most intelligent solution to their energy and data woes. Everyone buzzes about the latest innovation, the newest piece of hardware, the most recently tread path towards efficient data. Everyone talks about this new application for optimizing power draw, or that new cooling system design.

 

It's easy to get lost in the hype, to get so caught up in talking about efficiency that one forgets about one key element:

For any of these new measures to work in your data centre, you've got to implement them first. And that is where you're probably going to need some aspirin.

Outdated Infrastructure: Truthfully, this one goes without saying. If your hardware's outdated, or you're using a floor plan that isn't conducive to efficiency, well…you're going to have to change it if you want to start seeing bigger and better savings. Unfortunately, that's easier said than done- and such an overwhelming task might often leave you defeated before you even begin.

Lack of Proper Training: Efficiency doesn't start with the hardware or the software- it starts with the employee. If your staff understands best practices for green computing and are well-versed in the steps involved in improving one's energy efficiency, you're in the clear. However, if you haven't directly coached your staff on concepts like green computing and efficient hardware design, you might well stumble over this block.

Failure to Prioritize: If you're going to implement an efficiency solution, you're not going to be able to do it all at once. Figure out the areas that most desperately need improvement first, and tackle them one by one. While you certainly need an idea of the big picture, you shouldn't try to solve every issue in it at once- rather, you should work your way up to efficiency by looking at all the smaller factors involved.

Finances: As with everything else in the business, money is an object. While you'll certainly make the funds back in the long run, implementing a full solution for efficiency is a costly, time-consuming effort. If your budget's not up to it, there's really not all that much you can do except hope to try again once it is.

Implementation and Uptime: If you're planning on a complete overhaul of your application and hardware infrastructure, there's naturally going to be a bit of downtime involved- which, ultimately, will end up costing you even more money. Properly managing this downtime is absolutely vital if you want to keep yourself afloat.

Difficulty of Implementation: Not all Data Centres are constructed equal- as a matter of fact, almost every Data Centre is different. Some facilities are going to have a lot more trouble implementing power and application efficiency protocols than others. This difficulty can be positively nerve-wracking, and may well be more than some operators are willing to suffer through.

Inefficient App Design: Software's just as important as hardware when you've an eye on efficiency. Management platforms can help keep track of power distribution and server downtime, while well-coded applications that make use of virtualization can considerably reduce the stress placed on a server. On the flip side, a poorly designed, convoluted application can stretch servers to their limit, causing them to simultaneously draw more power and wear down more quickly.

The Metrics: Understanding the metrics, and knowing which metrics work for you, is critical in the day-to-day operation of a Data Centre. With that in mind, not understanding the metrics- or making use of metrics that have no bearing on your facility- can be just as bad as not using them at all- sometimes worse.

The Power Grid: Unfortunately, where your facility is located has a marked impact on both energy costs and energy efficiency. Older wiring might result in more power being required to run certain key systems, while inefficient methods for powering an area could drive up costs.

Lack of Initiative: Compared to this one, none of the other obstacles on the list are even pertinent. As with anything else, you can't make your Data Centre more efficient if you do nothing. Take charge. Look into what you're doing, and what you could be doing. Try to work out what needs to change if your facility's going to become more efficient. Don't be content to simply run your facility- manage it.

 

 

 

Posted compliments of Nicholas Greene

Tags:  central  data  Data Centre  DC  efficiency 
