DCA Member's Blog
Please post data centre industry comments, experiences, ideas, questions, queries and goings-on here. To stay informed of updates, ensure you are subscribed by clicking the "Subscribe" button.

 


Design Your Own Micro Data Centre in Under 1 Minute for Free Online

Posted By Richard Warren, Workspace Technology Limited, 13 December 2016

Workspace Technology Launch Free Web Based Design Tool that Allows You to Design Your Own Micro Data Centre in Under 1 Minute

Workspace Technology, UK data centre design and construction specialists, have launched a new free-to-use online design tool that lets users design their own edge/micro data centre.

The tool presents step-by-step parameter and equipment options, such as ‘Number of Racks’ and ‘Type of Cooling’, and produces a 3D-rendered model of the resulting data centre. It also creates a virtual guided-tour video of the newly designed infrastructure, which can be watched and downloaded at the end of the process.

Roy Griffiths, Technical Director at Workspace Technology, commented: “The creation of our ‘Design Your Own Data Centre Tool’ enables data centre managers to produce and visualise customised Micro Data Centre designs in less than a minute.

“We believe it is important for IT professionals to be able to visualise their ideas and concepts, and this free online design tool will assist and enhance that process.”

To access the design tool and create your own modular data centre, visit www.workspace-technology.com/design-your-own-data-centre.

 

For more information about Workspace Technology Ltd, please visit http://www.workspace-technology.com/ or contact Richard Warren on +44 (0) 7795 225533 or via email at richard.warren@workspace-technology.com.


Tags:  Data Centre  Design  Micro Data Centre 

 

Air-side free-cooling: direct or indirect systems and their impact on PUE

Posted By Robert Tozer, Operational Intelligence Limited, 01 October 2015

There is a general perception that direct air systems are more cost-efficient and hence the default option. However, provided the design incorporates good air segregation, recommended ASHRAE equipment environmental conditions and adiabatic cooling of the outdoor air, indirect air systems are considerably more efficient than direct air systems for most cities warmer than London. This is because many more hours of free cooling can be achieved with adiabatic cooling without affecting the indoor conditions. Furthermore, zero refrigeration is possible with this solution in most of the world. For cooler climates, direct systems are only marginally more efficient.

Often when data centre free cooling is discussed, people assume this means direct fresh air cooling. However, in climates warmer than London, indirect air systems are more efficient than direct air systems and can allow refrigeration to be eliminated and considerably reduce the electrical plant sizing requirements. Use of adiabatic evaporative cooling on the outdoor airstream allows free cooling to be achieved for many more hours in the year when there are hot, dry conditions. Further detail on the application for free cooling in data centres is available in our technical papers.
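To make the comparison concrete, here is a hedged sketch (not taken from the technical papers mentioned above) of how free-cooling hours might be counted from hourly weather data: a direct system is limited by the outdoor dry-bulb temperature, while an indirect system with adiabatic cooling is limited by the lower wet-bulb temperature plus a heat-exchanger approach. The 27°C supply limit and the approach values are illustrative assumptions only.

```python
# Illustrative sketch: compare annual free-cooling hours for a direct
# system vs an indirect system with adiabatic cooling. The thresholds
# are assumptions for illustration; real studies use site bin data.

def free_cooling_hours(hourly_db, hourly_wb, supply_limit_c=27.0,
                       approach_direct=0.0, approach_indirect=3.0):
    """Count hours in which each system can cool without refrigeration.

    hourly_db / hourly_wb: lists of dry-bulb / wet-bulb temperatures (C).
    supply_limit_c: assumed maximum allowable supply air temperature.
    approach_*: temperature rise between the cooling medium and the
                supply air (heat-exchanger approach for indirect).
    """
    direct = sum(1 for db in hourly_db
                 if db + approach_direct <= supply_limit_c)
    indirect = sum(1 for wb in hourly_wb
                   if wb + approach_indirect <= supply_limit_c)
    return direct, indirect
```

Run over a year of hourly data for a hot, dry climate, the wet-bulb-driven count comes out far higher than the dry-bulb-driven one, which is the effect described above.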

 

Tags:  air management  Cooling  Data Centre  Location  London  pue 

 

Containment Systems: a disappointing outcome!

Posted By Robert Tozer, Operational Intelligence Limited, 17 September 2015

I have visited too many sites where I am assured that air management is well under control because an air containment system has been installed, only to find disappointing results in many cases. The objective of containment is to segregate the hot and cold air streams, to minimise recirculation (exhaust IT equipment hot air re-entering intake) and bypass (cooled CRAH air not making it to IT equipment and returning directly to the cooling units). There are two fundamental issues: firstly, even with a perfect containment system, the amount of air supplied needs to be controlled to satisfy all IT equipment requirements, and secondly, containment is only part of the segregation solution.

There is a very simple way to check whether enough cold air is being supplied. In the case of cold aisle containment, open the cold aisle door slightly and verify the air flow direction with a sheet of paper. If air is coming out of the cold aisle, there is a slight oversupply of air to the cold aisle (which is fine); however, in many cases hot air is entering the cold aisle, which means that insufficient air is being supplied, making recirculation inside the cold aisle inevitable. This can be due to the wrong type or number of floor tiles, or simply insufficient air volume from the CRAHs. There are some very simple and rapid methods/metrics to diagnose air management based on temperatures, such as Af (Availability of flow), which is the ratio of CRAH air volume to IT equipment air volume. Normally a slight oversupply of air (with a small amount of bypass) is better than undersupply (which causes recirculation). A large oversupply of air is an energy-saving opportunity, whereas a large undersupply will inevitably lead to considerable hot spots.
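As a hedged illustration, the Af check described above can be sketched as follows; the diagnostic thresholds here are my own assumptions, not a published standard.

```python
# Sketch of the Af (Availability of flow) metric: the ratio of total
# CRAH supply air volume to total IT equipment air volume. Values just
# above 1 indicate a small, healthy oversupply; values below 1 mean
# undersupply and hence recirculation. Band limits are illustrative.

def availability_of_flow(crah_flow_m3h, it_flow_m3h):
    """Af = CRAH air volume / IT equipment air volume."""
    return crah_flow_m3h / it_flow_m3h

def diagnose(af):
    """Map an Af value to the qualitative outcomes described above."""
    if af < 1.0:
        return "undersupply: recirculation and hot spots likely"
    if af <= 1.15:  # assumed limit for a 'slight' oversupply
        return "slight oversupply: healthy, small amount of bypass"
    return "large oversupply: bypass, an energy-saving opportunity"
```

For example, 110 m³/h of CRAH air against 100 m³/h of IT demand gives Af = 1.1, the slight oversupply the paper-test in the aisle door would confirm.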

The next concern is the quality of segregation between the hot and cold air streams. The metric Air Segregation Efficiency is defined so that it equals 100% (ideal) when there is zero bypass and zero recirculation. The important concept here is that we are trying to create a physical barrier between the cold and hot air streams, for which we use a containment system. Most containment systems (excluding butchers' curtains) are very hermetic. The issue lies with the other segregation areas which are not part of the containment system, such as the raised floor, where there can be unsealed cable cut-outs and floor grilles in the hot aisles, or the front of the rack, where blanking panels between IT equipment may be missing and there may be gaps at the sides.
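For illustration only, here is one plausible way to express the idea numerically. This simple combination is not necessarily the exact published definition, but any sensible formulation reaches 100% only when both bypass and recirculation are zero:

```python
# Hypothetical sketch of the segregation idea above. The bypass fraction
# is the share of CRAH air returning without passing through IT
# equipment; the recirculation fraction is the share of IT intake air
# drawn from the hot exhaust stream. With both at zero the barrier
# between hot and cold streams is perfect and the score is 100%.

def segregation_efficiency(bypass_fraction, recirc_fraction):
    """Combine both leakage paths into a single 0-100% score."""
    return 100.0 * (1.0 - bypass_fraction) * (1.0 - recirc_fraction)
```

So a site with 20% bypass and 10% recirculation would score 72%, even with a containment system installed, which is exactly the "disappointing outcome" the post describes.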

Whilst I am all for containment, the reasons why it is being installed need to be fully understood and monitored in order for its objectives to be met.

Tags:  Cooling  Datacentre  Data Centre  efficiency  energy-efficient computing  ICT 

 

Smoke Signals

Posted By Marc Marazzi, Server Technology Inc., 13 February 2015
Updated: 13 February 2015

Humans have always had an inherent desire to communicate. Whether we realise it or not, I think our species has flourished because of the technology enhancements that have allowed us to speed up and improve our communication capability.

Everything we see and use today is simply an upgrade of something that was there before. I think we can go back pretty far and compare the communication technology of today with the communication technology of yesterday. And I'm talking way back.

The first tweet - smoke signals 

Like a tweet today, you couldn't put much into a smoke signal. You had to be frugal with your message because of the way you constructed the 'words'. Admittedly those tweets wouldn't have been as fun or witty as they are now ("Lindsay Lohan caught stealing pair of slippers in discount store #thief #serialoffender") and would have been more information-based, like "Mammoth killed. Cooking now. Dinner at 8. #hungry", but the point is that humans were putting out short messages early in our history.

The first Facebook page - cave paintings (archaeology)

Yes, I went there. And why not? Cave paintings were nothing more than static profile updates from cavemen. Instead of updating a post that said "I killed some springbok then got chased by a mountain lion", they had to tell us that as a non-updateable cave painting.

First GPS - leaving markings on trees and rocks. With this one I was trying to think of how far back I could go. I got to the compass, and then I thought: what did people do before that?

I suspect a hunter/gatherer (sales manager / channel manager - just kidding) would mark territories to help them find their way back home, or to certain locations like an area of water or food.

And I am sure some of you could think of more, but the interesting discussion I had (with Luca, my 10-year-old son) is about what comes next. What developments will humans make to these technologies and communication tools to improve what we have now?

For me, I'd see speed and power as the best areas for improvement. Speed: faster processors, faster access to websites, download speeds and the overall experience are areas that are constantly worked on but could get better.

As for power: well, it's 10am and I'm typing this on my phone (I won't say what it is, but its name is shared with a popular fruit. No, not BlackBerry). And my battery is already down to 29%. And I haven't really done much except respond to emails and upload the odd Instagram picture (marazzi73 #followforfollow etc.). I think the leader in smartphones will be the one who works out how to make batteries last much longer.

If you have a smartphone, you will have at least one app that uses data centres globally. Even if you never install a single app onto your phone, all of the updates for that phone will come from any number of data centres in the world. And if you have a Facebook account, it will be stored in around seven separate locations in the world.

So if speed and power are dramatically improved, I would say this will drive the next generation of data centres, storage and need for rack level power management. I think people would add more to their smartphones, use them for photos, updates and communicating much more, and this will drive an increase in processing power and storage in data centres, which will drive the need to understand more about what power is being consumed in each cabinet and pick up early if core application services may be at risk. 

We (humans) have less and less patience with how long it takes us to get what we want. Can you imagine having to read a map and plan your journey? Are you prepared to take the amount of time that took?

Would we be prepared to scratch our Facebook status into a wall, and DRAW the photos we take so easily nowadays? Not a chance.

As humans, we do not suffer 'tools' gladly. Any phone app or service that fails to provide us what we need, we delete. "Oh, I can't access your welcome page?" Gone. "Your servers are down - try again later?" Bye. We just don't care, because there will be another app that can do pretty much the same thing.

So, as a consumer, you will reap the benefits that fickleness brings. For the rest of us that have to deliver that service to you in some form- good luck to us all. 

If you need me, I'll be in a cave scrawling this post somewhere ;) 

Marc Marazzi

 

 

Server Technology will be exhibiting their Intelligent PDUs at Data Centre World in the UK on March 11th and 12th at Stand C65A, so come on by and say hello.

If you have a good example of something old that you can relate to something new (Sony Walkman - iPod, for example), send it in to us at Servertech. Each one will receive a USB key shaped like one of our HDOT PDUs! Trust me, they are cool.

Tags:  datacenter  Data Centre  DCIM  pdu 

 

Would you buy a pair of shoes that were made in a factory or from a ‘cobbler’?

Posted By Paul Russell, Vertiv, 13 February 2015
https://www.youtube.com/watch?v=KUGKndg87m8

  Have You Ever Considered a Modular Data Center?

 

At a recent internal meeting I posed the question: would you buy a pair of shoes that were made in a factory or from a ‘cobbler’ (i.e. an independent craftsman)? Needless to say, this caused some hilarity amongst my colleagues, but the question has been applied to all forms of consumer goods ever since the 1850s, from baked beans to cars. The answer is: it depends, and it could go either way based on the need for customization or a preference for hand-made goods. When goods are made by a group of experts carrying out repetitive tasks in a controlled environment, they are usually less expensive to produce and the product quality is consistent. Hand-crafted goods, your shoes for example, might have slight variations and may be more expensive because of the time and personal attention needed to hand-craft them. This analogy reflects the choices CIOs have when deciding whether to build a data center from scratch or to choose a modular solution.

In both cases learning, training and experience are always necessary.

Advantages of choosing modular

In fact, data centres have been made in a modular format for a number of years, from a number of vendors, as there are many attractive advantages. But let’s define modular first: “A manufacturing construction method to enable quick assembly of a complete structure, complete with all its services, in sections, within a controlled environment, that is then relocated to its permanent location”.

Normally the fabric for technical buildings is steel, but many other materials can be used.

So why modular?

1. Build Speed – maybe this is the most attractive feature for clients. The modules can be designed, fabricated, assembled, fitted out and wired (both electrically and with communications cabling) while the foundations are being constructed on the client’s site. Air conditioning and electrical systems are all included and wired. Modules can even be fitted with toilets and simply bolted together on site. Think about it! – a project build that is no longer affected by weather or dependent on gangs of tradespeople all working together in a small space to achieve the end goal.

2. Quality – no longer is the quality of a project dependent on gangs of people who have never worked together or co-ordinated their functions before.

3. Fixed Price – Once the project is defined and agreed upon, the price can be agreed. The components are known, the build time is defined, transport costs are calculated and the client has a fixed price, a big advantage over a traditional build where many factors can affect the timeline of a build and therefore the final costs.

In fact there is another cost advantage with a modular build if you are unsure of the size or capacity of your prospective data centre. With the ‘add-on’ approach and the appropriate design, you can add modules to an existing build as your demand grows over time. It is therefore common for each module to be equipped with its own electrical distribution, which “plugs in” to the main system, and independent standalone cooling systems, all with redundancy built in.

Each module can be equipped with the latest security features so that each module can be managed or staffed by independent organisations. Providing the cooling and electrical systems are identical, the site maintenance provider can service and “fault find” outside of the data space, if that is the design, on an individual module with ease.

Of course, vendors such as Emerson know their products and can incorporate all the latest technological techniques into a build, and with one of the world’s largest teams of technical personnel, any client's special requirements can be designed and delivered. So telephone exchanges, substations, combined UPS and generator enclosures, solar power transfer stations, cable distribution hubs, temporary data processing modules and even portable buildings are all possible.

Tier structures and PUE (or other metrics) can easily be designed into a modular system, and it is much easier to modify or change parameters to amend PUE in a small module than in a large hall. Using the modular build approach, almost anything is possible. So if you need a generator within the build, or a DC supply for your solar farm, or a telephone exchange, and then want to combine this with a concrete render or a terracotta-tile exterior, a flat roof, tiled roof or metal roof, any combination is achievable. Even workshops and offices can be incorporated, complete with chairs, desks and coffee machines!

But maybe the biggest benefit of modular construction is the fact that it can be built at the factory, then tested and signed off by the client before the foundations are even completed! Remember that this is with all the racks wired (in the case of a data centre) and all the cooling or fresh air units working. Offsite testing must be the biggest selling point: you see what you get, and prove it, before it leaves the factory.

However, it should be noted that modular cannot fit every situation. The main constraints are transport costs, together with transport size and weight limits. The cheapest form of transport (in the EU) is the standard 24-tonne three-axle trailer forming the 40-tonne articulated truck. Within the EU, directive 96/53/EC provides the relevant data for height and width constraints on the EU road network. As soon as you design a modular section that exceeds these constraints the cost increases, as special trucks are required, with special teams to supervise movement. It is also important to obtain the best value for transport money by designing the best weight and size ratio into a truck load. So if an area within the modular build is empty, the best design might be to flat-pack the walls, roof and floors, stack them on a truck and assemble them on site, rather than ship them as a rigid construction of four walls and a floor.

Remember the shoes? Well just think – you try them on in the shop before taking them home! Just like an Emerson modular construction technical building!

Learn more about modular data centers by watching the video of the T-Systems project.


Tags:  Datacentre  Data Centre  efficiency  Modular  planning  power 

 

Airedale International announces impending production launch of AireFlow™

Posted By Airedale International Air Conditioning, 07 November 2014
https://www.youtube.com/watch?v=J0PRuXclP5M

Airedale International announces impending production launch of AireFlow™ indirect fresh air free-cooling adiabatic air handling unit (AHU)


Leading British manufacturer of precision air conditioning systems, chillers, IT and comfort cooling solutions, Airedale International, will shortly be launching its production range of indirect fresh air free-cooling adiabatic air handling units (AHUs).

The AireFlow™ AHU is the result of close collaboration between the product development and engineering teams of Airedale and Barkell Ltd, a UK market-leading manufacturer of custom-built air handling units which became part of the Airedale group earlier in 2014.

With production launch set for autumn 2014, the AireFlow™ offers huge free-cooling potential and, being an indirect system, eliminates the risk of contaminated air entering the data centre. The use of fresh air as the predominant cooling source significantly reduces operational costs for users. In contrast with direct air handling units, indirect cooling solutions also reduce the dependency on back-up mechanical cooling required to prevent contaminated ambient air permeating the data centre. This ensures there is no requirement for internal air conditioning units, therefore maximising IT footprint.

The AireFlow™ will be available in five footprints between 100 and 440kW, each with two separate case sizes depending on whether roof- or wall-mounted connection is required.

High efficiency, electronically commutated (EC) centrifugal backward-curved fans draw return air from the data centre through the heat exchanger. Cooler air from the outside ambient is drawn through a separate air path within the heat exchanger, also by EC plug fans. This temperature difference drives heat exchange, with the supply temperature being managed through modulation of the ambient air flow rate. EC fan technology delivers improved efficiency, full modulation and reduced power consumption compared with AC fan equivalents, particularly at reduced running speeds.

At any point in the year, as climatic conditions dictate, moisture is added to the warm outdoor air which has the effect of lowering the dry bulb temperature. A typical UK peak summer day for example may have a dry bulb of 35°C with a wet bulb temperature of 21°C. By fully saturating the air, the dry bulb temperature can be reduced to 21°C. This lower air temperature is then used as a cooling medium and, based on London, UK ambient temperatures, could achieve ASHRAE recommended conditions using 100% free-cooling.
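The worked example above can be sketched with the standard evaporative-cooling effectiveness relation, under which the supply temperature approaches the wet-bulb temperature as effectiveness approaches 1 (the effectiveness values are illustrative; the text's example corresponds to full saturation):

```python
# Sketch of the adiabatic (evaporative) cooling effect described above:
# adding moisture to the airstream lowers its dry-bulb temperature
# towards its wet-bulb temperature. 'effectiveness' is the fraction of
# the wet-bulb depression actually achieved (1.0 = full saturation).

def adiabatic_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=1.0):
    """Return the cooled air temperature after evaporative cooling."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# The UK peak-summer example from the text: 35C dry bulb, 21C wet bulb.
print(adiabatic_supply_temp(35.0, 21.0))        # full saturation -> 21.0
print(adiabatic_supply_temp(35.0, 21.0, 0.9))   # 90% effective -> 22.4
```

Even at 90% effectiveness the cooling medium stays close to the wet-bulb temperature, which is why the unit can meet ASHRAE recommended conditions on free cooling alone in a London climate.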

In more challenging environments, an optional mechanical cooling module will ‘top-up’ the cooling capacity with a partial DX supplementary cooling section.

An optional integrated fresh air inlet unit provides added installation benefits and a reduced footprint compared with other makes of air handling unit, in addition to maintaining room pressure and air quality. Air flow and pressure monitoring also allows filter and fan performance to be managed.

Other features of the AireFlow™ include: G3 filtration (return air), G4 filtration (ambient air intake), F7 filtration (fresh air inlet), optional contaminant filtration - NO₂, SO₂, H₂S (fresh air inlet), N+1 redundancy on EC fans and a highly intuitive touchscreen colour user display.

A fully working demonstration unit will be available on Airedale’s Stand No 312 at this year’s DatacenterDynamics, ExCel London ICC, 19 & 20 November 2014.

Watch the attached animation, which highlights the different operational modes of the AireFlow™ at varying ambient temperatures within a data centre application.

Tags:  air management  Cooling  Data Centre  efficiency 

 

Reminder - Anti-Contamination Group Workshop

Posted By Kelly Edmond, 24 July 2014
Hi all, 

This is to remind you of the DCA Anti-Contamination Group Workshop, which will be held next week on Thursday 31st July. If you would like to find out more or attend this workshop, all details can be found on the Event Calendar here.

This is the annual review meeting for this specialist steering group. The group's aim is to discuss, advise and recommend practical solutions to the members of the Data Centre Alliance on the control of dust, dirt and contamination in the data centre - in particular, preventing damage to equipment and loss of data, and conserving energy. This workshop is to gather your views on what collaborative action(s) are required by the DCA. All members are welcome to contribute.

Please feel free to contact me if you have any questions.

Best regards

Kelly Edmond
Membership Executive

kellye@datacentrealliance.org 
0845 8734587

Tags:  anti contamination  Data Centre  docklands  dust  planning  university of east london 

 

Disaster Recovery Q & A with Patrick Doyle, Director at LDeX

Posted By Anne-Marie Lavelle, LDeX Group, 08 July 2014

Patrick Doyle, COO at London-based colocation and network connectivity provider LDeX, shares his thoughts on why organisations should opt for colocation over keeping their disaster recovery in-house for business continuity purposes.

First of all, what do you see as the main reasons for organisations opting for colocation over having an in-house facility which caters for its disaster recovery needs?

There are many reasons for choosing colocation over keeping disaster recovery in-house:

1)    Scalability

Businesses opt for colocation-based services due to the scalability benefits of having a datacentre facility which caters to the client’s expanding customer base. Datacentres have extensive capacity to cope with large increases in the number of servers, infrastructure and systems which need to be installed, connected and maintained at a moment’s notice. This is something which is not always feasible in an in-house facility.

2)    Stronger bargaining power to negotiate energy pricing

Datacentres have stronger bargaining power to negotiate energy pricing for power, cooling and critical load protection. LDeX, for instance, has recently fixed prices for a three-year period so that customers are able to obtain colocation without the heavy price tag which would usually go hand in hand with data storage.

3)    Security and 24×7 support

Having your systems stored on site poses a tremendous risk to the organisation: if the systems go down or there is a fire in the building, the business may be offline for a substantial period of time, leading to decreasing revenues and profit margins. By storing your systems offsite in a safe, robust and secure location with perimeter control, away from flight paths and flood plains, companies can be assured that there will be 100% uptime, and that datacentre engineers and security measures such as facial recognition monitors and mantraps are in place to ensure the company’s online presence is protected at all times. LDeX owns its own datacentre facility, situated at Staples Corner beside the M1, giving customers access to the site without being in the centre of London.

4)    On hand expertise

Datacentre specialists are on hand 24x7 to provide support when needed, within a timely period, so that in the unlikely event that something does happen, it is dealt with by engineering experts. At LDeX, the data centre, network engineering and senior management teams are on hand on a 24x7x365 basis, giving clients reassurance that their business is our priority.

These services are critical, and clients like to have that extra support and advice on how to mitigate potential network attacks such as DDoS attacks. Having specialist expertise at the right moment is priceless when you compare it to how much your company would lose if something happened that affected your revenue stream.

5)    Full control over your systems

Colocation offers the business a way of installing its equipment in a resilient datacentre while retaining full control over its IT systems. It means that the business can focus on what it does best while the datacentre provider looks after the backend and its business continuity.

6)    Minimal risk of construction charges

Finally, locating your infrastructure within a carrier-neutral facility such as LDeX means that the business has an array of network providers to choose from, which minimises the risk of excess construction charges.

With over 20 years’ experience in the industry, what do you see as the specific considerations for choosing a colocation provider for disaster recovery?

Access to the support team

Having a support team with the experience and knowledge to cope with any situation which may arise, in a calm and reasonable way, is paramount to being able to deliver on tight SLAs. When disaster recovery is required, the onsite data centre support team should be able to react promptly on behalf of the client. Clients often appreciate the advice we offer on how a problem should be dealt with, and the updates we give on what work is being done as and when it happens.

As mentioned, datacentre operators have rigorous physical and network security using a multi-tiered approach, in order to mitigate breaches which might otherwise arise at an onsite location. Datacentre facilities are not only monitored on a 24x7 basis via an offsite security company; security staff on hand also have set procedures to follow to alert police automatically. Facial recognition monitors, tags, mantraps and laser trip wires are just some of the additional measures in place, acting as a deterrent to intruders.

Proximity to the client’s site

Clients often have different preferences and requirements with regard to how close they would like to be to the datacentre which they use for their business services. Some clients prefer to be a reasonable distance away from their own site in order to gain protection from a major disaster; others like to be close to the colocation facility for ease of access.

Are there any hidden costs associated with colocation that users need to be aware of?

The norm is that all colocation-related costs which arise post-contract are well documented. From time to time, some services arise on an ad hoc basis, such as remote hands or scheduled engineering and copper and fibre cabling work, which need to be called upon when the client cannot get to the facility within a specific time period. Although this is charged on top of the standard service, the client is always given options in these instances.

Is there a certain point where enterprises are basically too big to use a colocation provider for disaster recovery?

At the moment, there is a lot of consolidation taking place in the marketplace, with large global companies consolidating their regional disaster recovery from multiple data centres into one large datacentre. LDeX has recently signed contracts with two US-based clients which are currently doing this.

Do colocation providers offer visibility into systems remotely? Or is that just something you have to set up on your own, or assign staff to work at the remote location?

Datacentre providers typically offer visibility into power utilisation per power feed, so a client can see how much power they are using at any given time. The client's IT systems are usually managed by the client, which is one of the main unique selling points of colocation.

Patrick Doyle, COO at LDeX Group

Tags:  connectivity  Datacentre  Data Centre  disaster recovery  LDeX Group  London 

 

Multi skilling in the Data Centre Industry

Posted By Anne-Marie Lavelle, LDeX Group, 12 June 2014


The ‘skills gap’ has unfortunately become synonymous with the technology industry, with 2.5 million IT jobs expected to go unfilled by 2020, according to a recent infographic from IT recruitment firm SmartSource. For clients dependent on having low-latency, resilient connectivity to their servers, infrastructure and systems at all times, not to mention reliable data centre and network support, this is a particularly worrying fact.

Staff need to have the adequate skills, knowledge and expertise to deal with any client query that may arise, and to be able to cope with and resolve problems promptly in the unlikely event of a power outage. As 75% of data centre downtime is caused by some sort of human error, I felt it necessary to put this blog together.

Lack of experience

Many technicians who I see looking to work in the data centre environment don’t possess the experience with servers, electrical systems and operating systems that one would expect, or else they are not familiar with the different types of equipment which need to be installed and worked on. Frankly, this is not acceptable, and it makes the search for high-calibre engineers all the more difficult.

The necessity to up-skill and retrain

In the current climate, there is a reliance on existing staff to up-skill and attend additional courses in line with the latest developments. Technology has moved on at a rapid pace, outpacing many of those working in the industry and signalling the need to develop existing skill sets. Whether it is a short course in data centre design or an MA in a specialised area such as wireless networking, keeping up to date is paramount in order to progress to a management position in the data centre. I find that staff attending and passing these courses demonstrates to their employer a strong aptitude and enthusiasm for self-development and career progression. What is promising to see is that 57% of IT industry firms intend to train existing staff in order to address these requirements.

Combining technical aptitude with relevant experience

Another positive development is that the UK government has taken note of the situation, allocating much-needed investment in recent years to programmes and courses focused on Science, Technology, Engineering and Maths (STEM), giving students the knowledge and aptitude needed to work in a high-tech environment. These courses, coupled with the increasing number of internship programmes regularly on offer, will make candidates more marketable, equipping them with everything they need to succeed.

The LDeX viewpoint

At LDeX Group, the team is regularly trained to cope with the increasing demands placed on the company. On the data centre side, technicians will either have completed or be working towards technical courses such as CompTIA, MTA, CCNA, CCNP, and Electrical and Mechanical Engineering qualifications. On the finance side, staff are completing qualifications from the Association of Accounting Technicians and ACCA. Since joining the company, I have been given the opportunity to study electrical engineering; I will attain my qualification next year and hope to go on to further courses to stay up to date.

Practical training, coupled with the right aptitude and enthusiasm, is essential for coping with tasks outside one's comfort zone and meeting client demands. The knowledge base and skill set that sufficed a decade ago now needs to be supplemented with additional training to keep up with what's happening in the industry. Although there is still a long way to go, strong inroads are being made by the government and by employers in the industry – watch this space!

Jesse South – Data Centre Supervisor at LDeX

Tags:  connectivity  Datacentre  Date Centre  disaster recovery  infrastructure  LDeX Group  Location  Skills Development 

 

Call for Papers- Deadline approaching for the DCA Programme Track at Data Centre EXPO

Posted By Louise Fairley, 20 May 2014
Call for Papers- Deadline approaching for the DCA Programme Track at Data Centre EXPO

The DCA is hosting its own virtual track program at Data Centre EXPO and this 'Call for Papers' is aimed at DCA members to get involved and work with us. 

The deadline is fast approaching and all interest should be submitted here by Thursday the 29th May 2014.

We are asking for thought-leadership content on the industry: we will be looking for DCA representation on why you're involved in the DCA, why it's important to have the association, the industry challenges that need addressing, and how the DCA and its members are working together to address them.

The PEDCA Consortium will act as conference committee and will review and confirm selected talks on Friday 30th May.

To submit your application to speak please apply here by Thursday the 29th May 2014.

Thanks for your support

Tags:  Call for papers  datacentre Expo  Date Centre  speaking opportunities 

 

DCA Anti-Contamination Workshop Reminder

Posted By Kelly Edmond, 15 April 2014
Here's a reminder to all who may be interested in attending the DCA Anti-Contamination Workshop on Thursday 24th April 2014.

This is the annual review meeting for this specialist steering group. The group's aim is to discuss, advise and recommend practical solutions to the members of the Data Centre Alliance on the control of dust, dirt and contamination in the data centre: in particular, preventing damage to equipment, preventing loss of data and conserving energy. This workshop is to gather your views on what collaborative actions are required by the DCA. All members are welcome to contribute.

All details of the workshop are in the Event Calendar. Don't forget to RSVP! 



Tags:  air management  anti contamination  data centre cleaning  Date Centre  docklands  university of east london 

 

Access to Data Centres: more than just an afterthought in times of crisis

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

Access to Data Centres: more than just an afterthought in times of crisis


We are all hoping that the severe floods experienced recently in parts of the UK are now behind us. However damaging they may have been to the public and businesses alike, they should have acted as a wake-up call for business planners and those responsible for running critical infrastructure to revisit their approach to business continuity and disaster recovery planning.

As media reports have highlighted, having the right disaster recovery plans in place in the event of a natural disaster is even more crucial given the ever-changing, unpredictable weather patterns we now seem to face.

The recent floods should also have prompted a further consideration for business planners: where to locate their data centre resource, and how accessible it will be during a natural disaster.

 

The floods were especially severe along large swathes of the Thames Valley region, including Slough and other locations along the M4 corridor, which are popular data centre locations. Staines and Egham for example were particularly hard hit because of their proximity to the River Thames, and were at times completely inaccessible.

Physical access to data centres is therefore a vital element that is often overlooked or ignored by business planners. If you cannot reach your data centre physically, how can engineers keep it operational, repair faults, or even top up the diesel for the backup power generators? Diesel generators can only run for as long as they have fuel. If a server running mission-critical applications for your business went down due to a fault, what would happen if one of your engineers tried to visit the data centre to replace the faulty machine and simply could not get to the facility because flooding had closed the roads?

This last point was starkly illustrated when Hurricane Sandy hit New York in late 2012: many websites were knocked offline as local data centres flooded or lost power. One data centre, Peer 1 in Lower Manhattan, managed to remain operational because physical access was still possible, allowing volunteers to carry diesel fuel in buckets after flooding had shorted out a basement fuel pump.

Physical access is thus a vitally important consideration for business planners if data centres are to remain operational in times of crisis; it is not just a question of having the appropriate disaster recovery plans and flood defences.

Business planners choosing a data centre location need to consider the following factors. Is the facility on, or near, a flood plain? Are there major roads nearby that are likely to remain open? What are the public transport links like? Not all data centres are created equal in this regard: some have much better accessibility than others. Many data centre operators will state that their facility is not on a flood plain, and this may be the case, but business planners should look more carefully at the facility's proximity to areas prone to flooding that could affect access to the data centre they are hosted in.

LDeX, for example, has a 22,000 sq ft carrier-neutral colocation facility at Staples Corner (North West London). The facility is not on a flood plain and is elevated 65 metres above sea level, one of the highest points in London. Accessibility is excellent because it sits at the convergence of a major road network (the M1 motorway and the A406 London ring road) and in close proximity to the A40, M40, M25 and M4. It is also well within the boundaries of the M25, meaning that public transport links are plentiful.

Business planners therefore need to reassess their disaster recovery plans following the recent floods, and factor in the physical accessibility of their chosen data centre in any future plans.

Data centre operators, whether corporate or multi-tenanted, should also consider these issues when planning investment and carrying out feasibility studies for their next data centre build.

After all, it is worth remembering that, yes, network connectivity is very important and always-on power is paramount, but physical accessibility is equally crucial and should not be ignored.

Arpen Tucker, Group Commercial and Strategic Director at LDeX Group

 

Tags:  Datacentre  Date Centre  disaster recovery  Flooding  LDeX Group  London 

 

A guide to data centre metrics and standards for start-ups and SMBs

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

A guide to data centre metrics and standards for start-ups and SMBs

Having made the choice to colocate your organisation's servers and infrastructure with a trusted data centre provider, you need to understand the key metrics and standards with which to evaluate and benchmark each data centre operator. With so many terms to get to grips with, we felt it necessary to address the ones most relevant to data centres.

The Green Grid has developed a series of metrics to encourage greater energy efficiency within the data centre. Here are the top seven terms which we think you'll find most useful.

PUE: The most common metric used to show how efficiently data centres use their energy is Power Usage Effectiveness. Essentially, it is the ratio of the total energy used to run the data centre as a whole (incorporating things like UPS systems, cooling systems, chillers, computer-room HVAC, air handlers and data centre lighting) to the energy used to run the IT equipment itself: the servers, storage, switches and so on.

Ideally a data centre's PUE would be 1.0, which would mean 100% of the energy is used by the computing devices in the data centre and none on things like lighting and cooling. LDeX, for instance, operates below 1.35, which means that for every watt of energy used by the servers, less than 0.35 of a watt is being used for cooling, lighting and power conversion.
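To make the ratio concrete, here is a minimal sketch in Python of the PUE calculation described above; the meter readings are hypothetical figures chosen to illustrate a 1.35 ratio, not real LDeX data:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual meter readings in kWh: 1,000,000 kWh of IT load
# plus 350,000 kWh of cooling, lighting and power-conversion overhead.
print(pue(1_350_000, 1_000_000))  # 1.35
```

A lower total relative to the same IT load moves the result towards the ideal of 1.0.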

CUE: Carbon Usage Effectiveness, also developed by The Green Grid, complements PUE and looks at the carbon emissions associated with operating a data centre. It is calculated by taking the total carbon emissions due to the energy consumption of the data centre and dividing them by the energy consumption of the data centre's servers and IT equipment. The metric is expressed in kilograms of carbon dioxide equivalent (kgCO2eq) per kilowatt-hour (kWh); a data centre powered 100% by clean energy has a CUE of zero. It provides a good way of identifying how to improve a data centre's sustainability, and of tracking how operators improve designs and processes over time. LDeX runs on 100% renewable electricity from Scottish Power.

WUE: Water Usage Effectiveness calculates how efficiently a data centre uses water within its facilities. WUE is the ratio of annual water usage to the energy consumed by the IT equipment and servers, expressed in litres per kilowatt-hour (L/kWh). As with CUE, the ideal value of WUE is zero, meaning no water was used to operate the data centre. LDeX does not operate chilled-water cooling, meaning we do not use water to run our data centre facility.
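CUE and WUE follow the same division-by-IT-energy pattern as PUE; a short sketch with hypothetical figures:

```python
def cue(total_co2_kg: float, it_equipment_kwh: float) -> float:
    """Carbon Usage Effectiveness, in kgCO2eq per kWh of IT energy."""
    return total_co2_kg / it_equipment_kwh

def wue(annual_water_litres: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness, in litres per kWh of IT energy."""
    return annual_water_litres / it_equipment_kwh

# A facility on 100% clean power emits no carbon, so its CUE is zero;
# a facility with no water-based cooling likewise scores a WUE of zero.
print(cue(0, 1_000_000), wue(0, 1_000_000))  # 0.0 0.0
```

In both cases the denominator is the IT load alone, so two facilities of different sizes can still be compared directly.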

Power SLAs: A power Service Level Agreement sets out the compensation offered in the unlikely event that power provided by the data centre operator to a client is lost and service is interrupted, affecting your company's business. The last thing your business wants is for people to be unable to access your company's website, so if power to your rack is ever cut for any reason, make sure you have measures in place.

Data centres refer to the Uptime Institute for guidance on standards for downtime. The difference between 99.671%, 99.741%, 99.982% and 99.995%, while seemingly nominal, can be significant depending on the application. Whilst no downtime is the ideal, the tier system allows the durations below for services to be unavailable within one year (525,600 minutes):

  • Tier 1 (99.671%) status would allow 1729.224 minutes
  • Tier 2 (99.741%) status would allow 1361.304 minutes
  • Tier 3 (99.982%) status would allow 94.608 minutes
  • Tier 4 (99.995%) status would allow 26.28 minutes
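The tier figures above fall straight out of the availability percentages; a quick sketch reproducing them:

```python
MINUTES_PER_YEAR = 525_600  # 365 days x 24 hours x 60 minutes

def allowed_downtime(availability_pct: float) -> float:
    """Minutes of downtime permitted per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier} ({pct}%): {allowed_downtime(pct):.3f} minutes")
```

Running this reproduces the four durations listed above, e.g. 94.608 minutes for Tier 3.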

LDeX has infrastructure resilience rated at Tier 3, offering customers peace of mind in the unlikely event of an outage and therefore protecting your business. We operate closed-control cooling systems in our facilities, enabling us to offer tight environmental SLA parameters: a cold-aisle temperature of 23°C ±3°C and relative humidity (RH) of 35% – 60%.

Some data centres run fresh-air cooling systems, which make it hard to regulate RH; quite often their RH parameters are 20% – 80% or beyond. High humidity in the data hall has on occasion resulted in rust on server components, while low RH can produce static electricity within the data hall. Make sure you look into this and ask about it.
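As a rough illustration of why the tighter bounds matter, a check of a sensor reading against the cold-aisle parameters quoted above might look like this (the readings are hypothetical, and real monitoring would of course be continuous and alarmed):

```python
def within_sla(temp_c: float, rh_pct: float,
               temp_setpoint: float = 23.0, temp_tol: float = 3.0,
               rh_low: float = 35.0, rh_high: float = 60.0) -> bool:
    """True if a reading sits inside the cold-aisle temperature and RH SLA."""
    temp_ok = abs(temp_c - temp_setpoint) <= temp_tol
    rh_ok = rh_low <= rh_pct <= rh_high
    return temp_ok and rh_ok

print(within_sla(24.5, 48.0))  # True: within 23 C +/- 3 C and 35-60% RH
print(within_sla(22.0, 72.0))  # False: humidity above the 60% ceiling
```

A fresh-air facility running 20% – 80% RH would pass a check like this far more often while still exposing equipment to the rust and static risks described above.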

Understand the ISO standards that matter to your business

ISO 50001 – Energy management

Using energy efficiently helps organisations save money as well as helping to conserve resources and tackle climate change. ISO 50001 supports organisations in all sectors to use energy more efficiently, through the development of an energy management system (EnMS).

ISO 50001:2011 provides a framework of requirements for organizations to:

  • Develop a policy for more efficient use of energy
  • Fix targets and objectives to meet the policy
  • Use data to better understand and make decisions about energy use
  • Measure the results
  • Review how well the policy works, and
  • Continually improve energy management

ISO 27001 – Information Security Management

Keeping your company's intellectual property secure should be a top priority for your business, and ensuring that your data centre provider offers this sort of resilience is imperative. The ISO 27000 family of standards helps organizations keep information assets secure.

Using this will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.

ISO/IEC 27001 is the best-known standard in the family providing requirements for an information security management system (ISMS). An ISMS is a systematic approach to managing sensitive company information so that it remains secure. It includes people, processes and IT systems by applying a risk management process.

It can help small, medium and large businesses in any sector keep information assets secure.

Like other ISO management system standards, certification to ISO/IEC 27001 is possible but not obligatory. Some organizations choose to implement the standard in order to benefit from the best practice it contains while others decide they also want to get certified to reassure customers and clients that its recommendations have been followed. ISO does not perform certification.

PCI DSS – Banks and businesses alike conduct a great many transactions over the internet. With this in mind, the PCI Security Standards Council (SSC) developed a set of international security standards to ensure that service providers and merchants protect payments, whether made by debit, credit or company purchasing card. As of 1st January 2015, PCI DSS 3.0 becomes mandatory. It is broken down into 12 requirements, ranging from vulnerability assessments to encrypting data. Make sure to ask whether your data centre operator meets this standard.

With the increased stakeholder scrutiny placed on data centres, steps need to be taken to make sure that the data centre operator you are considering is aligning its strategy not only with some of the metrics and standards mentioned here, but also with the other security, environmental and governmental regulations that have been brought in.

Working for a successful data centre and network services provider like LDeX has enabled me, as a relative newcomer to the data centre industry, to get to grips with these terms and to help clients understand where LDeX sits in comparison with our competitors.

Anne-Marie Lavelle, Group Marketing Executive at LDeX Group

Tags:  connectivity  Cooling  CUE  data centre  Datacentre  Date Centre  efficiency  ISO standards  operational best practice  PCI DSS  PUE  WUE 

 

The LDeX guide to streaming satellite feeds over IP

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

The LDeX guide to streaming satellite feeds over IP


Since launching our first data centre company, UK Grid, back in 2004, the management team at LDeX has worked with colocation clients that stream satellite feeds over IP. Although the Manchester-based company was acquired in 2011, the management team went on to create LDeX, bringing with them a clear vision of maintaining and enhancing the relationships formed with media clients looking to stream their services using our facilities and low latency connectivity.

In the early days, the system was based entirely on coax cabling and daisy-chained multiswitches, which were quite difficult to manage and scale. Since then, the team at LDeX1, our new data centre facility in London, has developed a streaming platform that addresses those early pitfalls. The platform is working very well and we are attracting a global client base into LDeX thanks to our experience and knowledge of this infrastructure. Below I discuss the elements that make our platform unique and attractive to the streaming world.

Optical Distribution for scale

Our satellite system uses optical LNBs: the LNB is connected into our satellite distribution racks via a simplex fibre optic patch. The LNB takes the four polarities, stacks them together and converts them to light, rather like a network trunk carrying multiple VLANs. The fibre patch is ruggedised, as part of it is exposed to the elements. It then connects into an optical splitter, which can split the light up to 32 ways without amplification. From this point, the light from the splitter is converted back into an electrical signal via another fibre patch and a Quatro converter.

On the output side of the converter we can split the signal again, and from this point we use coax to connect into 32-way multiswitches. We can then hand off feeds to our clients via coax cable running through our cable management system. Each individual coax feed carries the entire frequency range of the given satellite. The system is highly scalable: using a single LNB, it enables us to deliver 2,048 satellite feeds to clients. More than this may be possible; I'll let you know when we get there.

All of this means we can hand off a single coax feed to a client; we can hand off the four bands via four coax feeds so they can install their own multiswitches; or we can simply hand off an optical feed and they can build their own distribution system. A client can also site their own satellite dish within our compound on our gantry systems. We can go as far as offering N+1 satellite reception, with two dishes sited in different locations connecting into a common multiswitch.

Low Latency Bandwidth for TV

Once our clients have converted their feeds into IP, they need a low latency, multi-homed network to get the content to users. We provide this on our network platform by peering with the largest IP networks in the world and on the largest Internet Exchange Points in Europe. This means we can connect our clients directly with the major ISPs globally, removing network hops and latency and giving a consistent, fast, direct connection. The network needs to be resilient, so we have built a dual-core network architecture especially for streaming, with two major network PoPs in LDeX1 and Telehouse North giving the capability to burst up to 10 Gbps.

Hand holding

Our support team here at LDeX1 has extensive knowledge of the physical infrastructure required to deliver satellite feeds, and we have testing equipment on site to troubleshoot any issues with signal strength. We can deliver a satellite feed to a rack in LDeX1 on the same day it is requested. The majority of our overseas clients ship equipment to us, and our in-house team installs, configures and cables it for them.

We also know that some of our clients don't necessarily understand IP networks, BGP, routing and so on; they don't need to. Our experienced team of network engineers does, and we can provide a full network wrap-around for any client. Our network team is qualified and highly experienced in Cisco and Juniper, from CCNA up to CCIE.

LDeX is also developing a media ecosystem within LDeX1, which we have named ‘the London Media Platform’. Our clients are already making full use of the platform by doing business with each other: the ecosystem enables them to offer additional services to one another within the confines of LDeX.

To give an example of how it works, we have a media client in LDeX1 that requires channels from a satellite which cannot be received in the UK. This client has been able to liaise with another of our partners within the platform, which operates in the geographical area of the required satellite footprint. That partner streams the satellite back into LDeX and on to the media client, who then streams the channels to his end users.

So in summary, our streaming platform is comprised of three core elements.

1. A scalable distribution system
2. A scalable and resilient low latency network
3. A highly skilled team of Data Centre technicians and Network engineers.

Hopefully you can see how LDeX is rapidly becoming the media streaming platform of choice for streaming customers – we are certainly cooking up a stream!

Patrick Doyle, Operations Director and Co-owner at LDeX Group


Tags:  connectivity  Date Centre  IP  latency  optical distribution for scale  satellite systems 

 

Senior Management Team key to datacentre success

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014

Senior Management Team key to data centre success 

I am often asked what the key ingredients are for running a successful company, particularly one which operates world-class carrier-neutral data centres in the UK. My answer is simple: having a well-oiled Senior Management Team (SMT).

Having seen data centre operators come and go over the last two decades, those that stick around, or have been acquired, are generally those with a solid SMT: a leadership function that Winston Churchill would be proud of.

Since the financial crisis of 2008, increasing pressure has been put on FDs and CFOs to ensure that supply chains are as robust on the inside as they appear on the outside. Credit scoring and financial due diligence have become par for the course, and general procurement pessimism is at its peak.

This “procurement pessimism” is felt throughout the industry, with protracted sales cycles and longer-than-ever lead times on contract completions following months of negotiation over “what if…” scenarios and clauses. Everyone is cautious not to expose their company to unnecessary liability, and prudent not to put their job at risk, knowing all too well that jobs are as rare as rocking horse manure in times of austerity.

This is why it is imperative at this crucial time to have your ducks in a row. A quality SMT comprised of professional, dedicated and highly experienced individuals is essential to ensure that all aspects of a well-oiled machine remain well-oiled and regularly serviced. The highly geared, asset-rich nature of a data centre operator requires a keen balance between cash flow and debt. One doesn't want to starve the business of the cash needed to begin the next phase of construction, but nor does one want to expose the company to mountains of unserviceable debt. One also needs to keep a keen eye on the company's order book to ensure that, whichever route the CFO takes to fund the next phase, there is sufficient technical space available to furnish new orders on time. This is a prime example of where the COO and CFO need to work in harmony.

Finally, a well-oiled SMT needs to know how to delegate. There isn't much point in being well-oiled if the SMT are the ones applying the oil, and for this reason delegation is another key skill crucial to a successful SMT and the successful operation of a data centre. The ability to communicate key objectives, and to articulate the company's ethos and mission statement down the chain, is an important attribute. Your staff will look to you for leadership, and how the SMT conducts itself in its day-to-day roles will rub off on them. Remember, whatever message you put out to your team will eventually filter all the way down to your customers, so make it a good one!

It takes years to gain the experience within an SMT needed to orchestrate the fine balancing act required of a data centre operator. Procurement managers and CFOs will no doubt look to the balance sheet for comfort that their preferred supplier of data centre services will be here not just today but tomorrow as well. At the same time, the SMT will be interrogated on its experience, its ability to gel as a leadership function, and whether it shares the same company direction, ethos and ultimate goals. They will look to the SMT for a team of sensible, responsible individuals who have the company's best interests at heart and who can steer the ship through bumpy waters and come out the other side relatively unscathed.

What does all this mean? Well, get it right and it could mean more golf, which is good for all of us. Get it wrong and the consequences could be disastrous.

Rob Garbutt, CEO at LDeX Group

Tags:  Datacentre  Date Centre  LDeX Group  London 

 

A Day in the Life of a Data Centre Supervisor

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

A Day in the Life of a Data Centre Supervisor 

If you're reading this blog, you are probably somebody who follows the data centre industry closely and are used to reading about data centre trends and developments. With that in mind, I thought I would give you a high-level insight into a day in the life of a data centre supervisor, along with some of the best practices we follow in our facility.

I am responsible for the day-to-day operations at our North West London facility, which includes managing the team of data centre technicians, supporting our clients, maintaining high operational standards and following our PPM plan.

As the data centre supervisor I have a number of tasks every morning, as well as a checklist that our on-site data centre technicians carry out. I have listed the four key points from my checklist below:

1. Data Centre Security

When my day starts at 07:00, I make sure all of our security cameras are running correctly and check the overnight security report, which can be obtained from on-site security or from our third-party monitoring centre.

2. Data Centre Cooling

My morning checklist also includes checking the facility's temperature and humidity graphs, which are drawn from a number of probes placed within the data halls. I have to make sure we stay within our data centre service level agreement (SLA); a number of alarms are in place to alert us 24x7 to any fluctuations.

3. Data Centre Operations

As you know, the data centre needs 100% uptime. First I check our on-site power generation, then move on to our three-string UPS system to check for any unusual activity.

4. Data Centre Cleanliness

I believe a clean data centre is a happy data centre. My team of data centre technicians are always making sure the data halls are clean, dust-free and well presented.

If you are looking for more in-depth information on the above, click here.

As a data centre supervisor, I believe everything is your business and this is exactly why I go that extra mile to make sure all our customers are always fully supported and comfortable with every aspect of our service.

Jesse South - LDeX Group Data Centre Supervisor 

Tags:  data centre cleaning  Date Centre  London  operational best practice 

 

Osborne Gives Data Centres A Green Bonus

Posted By Arpen Tucker, 09 December 2013

Osborne Gives Data Centres A Green Bonus

The data centre industry gets green tax relief in George Osborne’s Autumn Statement

The Chancellor, George Osborne, has granted a concession to Britain's data centre industry, allowing it to step outside other green taxes and form its own agreement to reduce emissions while remaining competitive with the industry in other nations.

The climate change agreement (CCA) will reduce taxes on the sector in exchange for meeting agreed efficiency targets. The reasoning is that high taxes would have made it harder for Britain's data centres to compete against foreign rivals. The British tech industry body, techUK, welcomed the decision, having fought for a CCA for the industry for some time.


"We are delighted that the Chancellor has recognised the importance of technology with the introduction of a climate change agreement (CCA) for data centres,” said Emma Fryer, associate director of climate change programmes at techUK. "This comes after our sustained campaign to ask government to apply a more intelligent approach to improving energy efficiency without penalising growth in this important sector.”

Climate Change Agreements are designed for industry sectors, such as manufacturing, which use large amounts of energy and are subject to competition from abroad. Applying taxes would make them uncompetitive, Fryer has argued, and would have no impact on emissions: instead of making British data centres more efficient, the higher costs would drive business abroad, where it would probably be equally inefficient.

Under the CCA, the sector agrees efficiency targets at levels where the cost won't make businesses uncompetitive. There are around 50 CCAs in operation, but the data centre industry's is the first in a sector which does not manufacture physical objects, techUK pointed out today. The argument took a long time, even though to techUK it has been obvious that data centres face even greater competition, since “data is the most mobile commodity on earth”.

More attractive location

In answer to fears that the CCA might be a soft option compared to green taxes, techUK’s statement claims that the targets are more beneficial to the environment: "These targets are sector-specific so they can be focused exactly where they can deliver the most benefit,” said techUK, adding that CCAs have delivered greater energy savings than conventional policy measures. 

"Without a CCA we would have seen the UK become an increasingly unattractive location to operate data centres,” said Fryer. "The UK would have lost investment, expertise and capability to other countries at significant economic cost.  

The CCA will also drive cloud computing, said Fryer: "A climate change agreement will encourage the right behaviour among data centre operators and drive the market away from a distributed IT model that is less energy efficient towards one in which computing activity is consolidated into efficient, purpose-built facilities. This CCA supports the whole migration to Cloud-based computing. By creating much needed stability and predictability it will also enable data centres and associated businesses to plan their long term investment and growth strategies.”

Data centres are not just fancy sheds full of computing equipment, she said. They enable and power service economies in the way that heavy industry used to power manufacturing economies; they are the agents of growth for the knowledge economy. This is because a single data centre supports multiple layers of economic activity.

Others have raised questions about the long-term benefit of entering a new relationship with government. "I know that some operators find the CRC and Climate Change Levy costs to be a substantial part of their operating margin, but we need to consider the downsides as well," Liam Newcombe of data centre energy efficiency firm Romonet told TechWeekEurope earlier in techUK's campaign. "If a CCA is granted then data centres will become recognised by government as a sector which will then be measured and reported on."

Tags:  Carbon Tax  CRC  Date Centre  energy-efficient computing 


Data Centre Anti-Contamination Guide Released

Posted By Administration, 15 April 2013

DCA Anti-Contamination Guideline Released.

Following collaboration among DCA members, we have released a guideline paper designed to assist all owners and operators of data centres, large and small, in guarding against outages and maintaining energy efficiency by tackling the problems caused by contaminants, dust and dirt in the data centre.

The DCA would like to thank all who helped and assisted in the development of this paper and especially:

Dr Ian Bitterlin, Gary Hall, Dr Jon Summers, David McLenachan, Spencer North and Alan Fisher.

The guideline is free and publicly available from HERE

Tags:  anti contamination  data centre cleaning  Date Centre 


Component failure vs human error - which is more likely?

Posted By Steve Hone, 01 August 2012

I recall some time ago Gartner stating that 90% of all data centre outages can be attributed to some form of human error. We all get fixated with resilience when it comes to the physical building blocks of a data centre, but surely equal consideration needs to be invested in operational processes, procedures and training as well, if Gartner's findings are to be believed?

Tags:  central  data  Date Centre  operational best practice 


Reduce the risk of dust and dirt entering your Data Centre

Posted By Steve Hone, 01 August 2012

Studies have shown that 75% of hardware failures are caused by dirt and dust, and that 80% of the dust and dirt entering your critical area does so on the soles of feet (source: 3M).

Dust can cause irreversible harm to the data centre and its systems and introduce a major risk of fire hazard. Data centres need an effective strategy to prevent dust and particle contamination.

The traditional approach for data centre operators looking to reduce this contamination risk has been carpeted mats, but tests have shown these are ineffective at stopping dirt and dust being transferred into the IT room. Alternatives such as sticky/tacky mats, although effective initially, have a limited life and can quickly look unsightly; they need to be replaced manually, with regular checks, to remain effective.

A number of data centre operators I have spoken to have now installed a possible solution to this issue, which prevents over 99% of dust and dirt entering their data halls, looks great and is guaranteed for 3 years.

Polymeric flooring material has traditionally been used in the pharmaceutical and microchip industries, where a clean room environment is essential for production and quality control. However, the substrate is also ideal for today's data centre environment.

To find out more, I suggest visiting http://www.colofinder.co.uk/dc-anti-contamination.php

Tags:  anti contamination  central  data  data centre cleaning  Date Centre  dirt  dust 
