
DCA Member's Blog
Please post data centre industry comments, experiences, ideas, questions, queries and goings-on here. To stay informed of updates, make sure you are subscribed by clicking the "subscribe" button.

 


Disaster Recovery Q & A with Patrick Doyle, Director at LDeX

Posted By Anne-Marie Lavelle, LDeX Group, 08 July 2014

Patrick Doyle, COO at London-based colocation and network connectivity provider LDeX, shares his thoughts on why organisations should opt for colocation rather than keeping their disaster recovery in-house for business continuity purposes.

First of all, what do you see as the main reasons for organisations opting for colocation over having an in-house facility which caters for its disaster recovery needs?

There are many reasons for choosing colocation over keeping disaster recovery in-house:

1)    Scalability

Businesses opt for colocation-based services because of the scalability benefits of a datacentre facility that can grow with the client's expanding customer base. Datacentres have the capacity to cope with large increases in the number of servers, infrastructure and systems that need to be installed, connected and maintained at a moment's notice, something that is not always feasible in an in-house facility.

2)    Stronger bargaining power to negotiate energy pricing

Datacentres have stronger bargaining power when negotiating energy pricing for power, cooling and critical load protection. LDeX, for instance, has recently fixed prices for a three-year period so that customers can obtain colocation without the heavy price tag that usually goes hand in hand with data storage.

3)    Security and 24×7 support

Having your systems stored on site poses a tremendous risk to the organisation: if the systems go down or there is a fire in the building, the business may be offline for a substantial period of time, leading to declining revenues and profit margins. By storing your systems offsite in a safe, robust and secure location with perimeter control, away from flight paths and flood plains, companies can be assured of 100% uptime, with datacentre engineers on site and security measures such as facial recognition monitors and mantraps protecting the company's online presence at all times. LDeX owns its own datacentre facility at Staples Corner, beside the M1, giving customers access to the site without having to travel into central London.

4)    On hand expertise

Datacentre specialists are on hand 24x7 to provide timely support, so that in the unlikely event that something does happen, it is dealt with by engineering experts. At LDeX, the data centre, network engineering and senior management teams are available on a 24x7x365 basis, giving clients reassurance that their business is our priority.

These services are critical, and clients value the extra support and advice on how to mitigate potential network attacks such as DDoS. Having specialist expertise available at the right moment is priceless compared with how much your company would lose if an incident affected your revenue stream.

5)    Full control over your systems

Colocation offers the business a way of installing its equipment into a resilient datacentre with the ability to have full control over its IT systems. It means that the business can focus on what it does best while the datacentre provider looks after the backend and its business continuity.

6)    Minimal risk of construction charges

Finally, locating your infrastructure within a carrier-neutral facility such as LDeX means that the business has an array of network providers to choose from, which minimises the risk of excess construction charges.

With over 20 years’ experience in the industry, what do you see as the specific considerations for choosing a colocation provider for disaster recovery?

Access to the support team

Having a support team with the experience and knowledge to cope calmly and sensibly with any situation that may arise is paramount to delivering on tight SLAs. When disaster recovery is required, the onsite data centre support team should be able to react promptly on behalf of the client. Clients often appreciate the advice we offer on how a problem should be dealt with and the updates we give on what work is being done as it happens.

As mentioned, datacentre operators take a rigorous, multi-tiered approach to physical and network security in order to mitigate breaches that might otherwise arise at an on-site location. Facilities are not only monitored on a 24x7 basis by an offsite security company, but on-site security staff also have set procedures to follow to alert the police automatically. Facial recognition monitors, tags, mantraps and laser trip wires are just some of the additional measures in place to deter intruders.

Proximity to the client’s site

Clients have different preferences and requirements regarding how close they would like to be to the datacentre they use for their business services. Some prefer the facility to be a reasonable distance away from their own site to provide protection from a major disaster, while others like to be close to the colocation facility for ease of access.

Are there any hidden costs associated with colocation that users need to be aware of?

As a rule, all colocation-related costs that arise post-contract are well documented. From time to time, some services arise on an ad hoc basis, such as remote hands or scheduled engineering and copper and fibre cabling work, which are needed when the client cannot get to the facility within a specific time period. Although these are charged on top of the standard service, the client is always given options in such instances.

Is there a certain point where enterprises are basically too big to use a colocation provider for disaster recovery?

At the moment, there is a lot of consolidation taking place in the marketplace, with large global companies consolidating their regional disaster recovery from multiple data centres into one large datacentre. LDeX has recently signed contracts with two US-based clients which are currently doing this.

Do colocation providers offer visibility into systems remotely? Or is that just something you have to set up on your own, or assign staff to work at the remote location?

Datacentre providers typically offer visibility into power utilisation per power feed, so a client can see how much power they are using at any given time. The client's IT systems are usually managed by the client themselves, which is one of the main unique selling points of colocation.

Patrick Doyle, COO at LDeX Group

Tags:  connectivity  Datacentre  Data Centre  disaster recovery  LDeX Group  London


Multi skilling in the Data Centre Industry

Posted By Anne-Marie Lavelle, LDeX Group, 12 June 2014

The ‘skills gap’ has unfortunately become synonymous with the technology industry, with 2.5 million IT jobs expected to go unfilled by 2020, according to a recent infographic from IT recruitment firm SmartSource. For clients who depend on low-latency, resilient connectivity to their servers, infrastructure and systems at all times, not to mention reliable data centre and network support, this is a particularly worrying statistic.

Staff need the skills, knowledge and expertise to deal with any client query that may arise and to resolve problems promptly in the unlikely event of a power outage. With 75% of data centre downtime caused by some form of human error, I felt it necessary to put this blog together.

Lack of experience

Many technicians I see looking to work in the data centre environment don’t possess the experience with servers, electrical systems and operating systems that one would expect, or they are not familiar with the different types of equipment that need to be installed and worked on. Frankly, this is not acceptable, and it makes the search for high-calibre engineers all the more difficult.

The necessity to up-skill and retrain

In the current climate, there is a reliance on existing staff to up-skill and attend additional courses in line with the latest developments. Technology has moved on at a rapid pace, outpacing many of those working in the industry and signalling the need to develop existing skill sets. Whether it is a short course in data centre design or an MA in a specialised area such as wireless networking, keeping up to date is paramount for progressing to a management position in the data centre. I find that staff who go on to attend and pass these courses demonstrate to their employer a strong aptitude and enthusiasm for self-development and career progression. It is promising to see that 57% of IT industry firms intend to train existing staff to address these requirements.

Combining technical aptitude with relevant experience

Another positive development is that the UK government has taken note of the situation, allocating much-needed investment in recent years to programmes and courses focused on Science, Technology, Engineering and Maths (STEM), giving students the knowledge and aptitude to work in a high-tech environment. These courses, coupled with the increasing number of internship programmes regularly on offer, will make candidates more marketable, equipped with everything they need to succeed.

The LDeX viewpoint

At LDeX Group, the team is regularly trained to cope with the increasing demands placed on the company. On the data centre side, technicians have either completed or are completing technical courses such as CompTIA, MTA, CCNA, CCNP and electrical and mechanical engineering. On the finance side, staff are completing qualifications from the Association of Accounting Technicians and ACCA. Since joining the company, I have been given the opportunity to study electrical engineering; I will attain my qualification next year and hope to go on to do more courses to keep up to date.

Practical training, coupled with the right aptitude and enthusiasm, is essential for coping with tasks outside one’s comfort zone and meeting client demands. The knowledge base and skill set required a decade ago now needs to be supplemented with additional training to keep up with what is happening in the industry. Although there is still a long way to go, strong inroads are being made by the government and by employers in the industry – watch this space!

Jesse South – Data Centre Supervisor at LDeX

Tags:  connectivity  Datacentre  Data Centre  disaster recovery  infrastructure  LDeX Group  Location  Skills Development


It’s all about the network – data centre connectivity

Posted By Tanya Passi, Geo Networks, 30 April 2014
Data centres cannot operate in isolation, so it is surprising that many operators continue to invest so much time and effort in the physical building, with little thought about the connectivity requirements of their target customers. To maximise investment, connectivity must go hand in hand with the infrastructure build, rather than being bolted on as an afterthought.

As demand for data centre space continues to grow rapidly, spurred by the increasing move to cloud computing and big data initiatives, huge levels of investment are being poured into this sector. New facilities are springing up in the capital and beyond, as enterprise customers realise that they can store less critical information slightly further afield. Significant focus is placed on the construction and interior of data centres to ensure maximum levels of security, power and operational efficiency, but what about connectivity? Whilst SMEs may be satisfied with only one carrier at a data centre site, large enterprise customers are looking for diversity.

An open access connectivity model can enhance the data centre operator’s proposition by offering its customers a choice of telecom service providers. Open access involves deploying fibre to the data centre and making the underlying infrastructure available to any other network operator or end user so that they can connect directly from the site to the network, or networks, of their choice. This approach is a long term investment rather than a short term remedy, requiring data centre owners and network operators to work in harmony, offering control to the site owners and choice to the end user.

Well-connected data centres present an attractive proposition to end users because, as well as choice, they ensure a strong level of competition between different network providers, guaranteeing that connectivity is competitively priced. For data centre operators, there is no need to make multiple investments with different network operators, paying each to build a bespoke link to the site, when it is possible to contract with a single provider and achieve the same goal. It ensures optimum delivery of fibre to the site, so that customers can access their data anywhere once connected, and the best return on investment for owners.

An open access model, if undertaken correctly, has the potential to benefit all: the data centre operator, the network provider and, most crucially, the end user. Connectivity can no longer be an added extra. For the most successful, innovative operators it must be a fundamental part of the data centre proposition from the get-go.

Tags:  connectivity  data centre  infrastructure 


From Fantasy to Reality

Posted By Michelle Martin, Geo Networks, 24 April 2014

Adventures in Subterranean London - a guest blog by Ian Livingstone CBE

In East London overlooking the Olympic Park I am waiting in line for my turn to venture down the dark tunnels of an underground infrastructure that has been in place for over a hundred and fifty years.

As I watch my fellow fortune seekers take their turn to climb slowly down the iron ladder which runs the length of the brick-lined shaft of this eerie Victorian structure, I feel somewhat unprepared for a challenging quest with no sword or shield or even potions of strength or healing to protect me. I am convinced a horde of Zombies is lying in wait in the murky depths below.

As I approach the ladder I take a couple of deep breaths of fresh air, unsure of what the air will be like in the subterranean tunnels. I descend slowly and inhale my first lungful. It’s not as bad as I had imagined. I reach the bottom of the ladder where it disappears into slow-moving brownish water. The air is dank and mildly unpleasant, but at least there are no Zombies. I step down into the foul water, thankful for my long Wellington Boots, feeling them sink into the silt-covered floor. For a moment I worry my boots are not high enough to keep the dark liquid at bay.

My eyes soon adjust to the semi-darkness. Peering into the gloom, I imagine a hideous creature from my own Deathtrap Dungeon leaping out of the shadows. Around the corner in a narrow section of the tunnel, I half expect a Bloodbeast to rise up from the foetid waters to attack me with its stinging tail.

Luckily for me, nothing that dramatic happens on my tour of the Victorian sewers. My footwear was leak-proof and there were no man-eating creatures stalking the tunnels.

I began this adventure as a result of speaking at a conference on the subject of broadband. I referred to the genius and forethought of Sir Joseph Bazalgette who in the 1860s ignored all his critics when building London’s sewers. He insisted on making the sewers six times bigger than required for the anticipated demand. I likened his visionary decision about sewage pipes to those that must be made today about building super high-speed broadband pipes for the digital world to ensure there is enough future capacity for the exponential growth in data consumption.

What I didn’t realise when I was giving my talk was that there was one organisation coincidentally using Bazalgette’s 150 year-old Victorian infrastructure to house its own future-proofed fibre optic network. After my talk I was invited by Geo Networks to see for myself how these two impressive feats combined; one in construction engineering and the other in high technology.

I’ve been privileged to work in the games industry for nearly 40 years. I co-founded Games Workshop in 1975 with an old school friend, Steve Jackson, with whom I also co-created the Fighting Fantasy gamebooks in the 1980s. I made the leap into video games in the early 1990s, and as Chairman of Eidos launched Tomb Raider in 1995. These days my business interests lie in helping new digital games developers such as Playdemic and Midoki to become the world’s best games makers. The video games industry is big business: annual global revenues from software are $50 billion and forecast to rise to $90 billion by 2016. With console games, PC games and mobile games, there is something for everybody to play these days, men and women, young and old. Games have become part of mainstream culture and are as socially, culturally and economically important as music and film.

The games industry is in constant transition driven by constant changes in technology. As the business moves away from boxed products sold at retail towards digital services online, one thing most new games have in common today is the need for big bandwidth.

The cloud computing phenomenon taking place right now is certainly helping the growth and profitability of the video games industry. Super high-speed broadband is a must-have requirement. Cloud gaming is the streaming of gameplay, whereby players download a small client to gain access to a game running on a separate server. Server-based games also help solve the piracy issues which were so prevalent in the boxed-product era of video games. With cloud gaming it is vital that bandwidth and latency issues are resolved so that games run at optimum speeds and players stay happy. Bandwidth affects the quality of play, and big bandwidth is needed so that people can enjoy the full experience. Also on hand are data compression solutions, such as the technology developed by Tangentix Ltd, which compresses games roughly threefold. Bandwidth and connectivity should be a given, so that game developers can concentrate on creating the best gameplay experience rather than worrying about latency issues.

After surviving my fascinating tour of London’s sewers, I exited the impressive Victorian infrastructure via the same ladder that I used to begin my quest. As I stepped back out into the sunlight there was no crowd of cheering people awaiting my return to the outside world, nor the infamous Baron Sukumvit of Fang there to hand me a purse of 20,000 gold pieces for surviving his cruel dungeon. Although, let’s be honest, it was a safe bet that under the watchful eye of the brilliant Thames Water team I was always going to make it out alive and well, Bloodbeast or no Bloodbeast!

Follow Ian's adventures on Twitter @ian_livingstone

Tags:  connectivity  infrastructure  London  network 


A guide to data centre metrics and standards for start-ups and SMBs

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

Having made the choice to colocate your organisation’s servers and infrastructure with a trusted data centre provider, you need to understand the key metrics and standards against which to evaluate and benchmark each data centre operator. With so many terms to get to grips with, we felt it necessary to address the most prevalent ones for data centres.

The Green Grid has developed a series of metrics to encourage greater energy efficiency within the data centre. Here are the seven terms and standards we think you’ll find most useful.

PUE: The most common metric used to show how efficiently data centres are using their energy would have to be Power Usage Effectiveness. Essentially, it is the ratio of the total energy used to run the overall data centre – incorporating things like UPS systems, cooling systems, chillers, HVAC for the computer room, air handlers, data centre lighting and workstations – to the energy used by the IT equipment itself, such as the servers, storage and network switches.

Ideally a data centre’s PUE would be 1.0, which would mean 100% of the energy is used by the computing devices in the data centre and none on things like lighting, cooling or workstations. LDeX, for instance, operates below 1.35, which means that for every watt of energy used by the servers, less than 0.35 of a watt is used for data centre cooling and lighting, so relatively little of its energy goes on cooling and power conversion.
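
To make the arithmetic concrete, here is a minimal Python sketch of the PUE calculation; the meter readings are invented for illustration and are not LDeX figures:

  # Illustrative PUE calculation (hypothetical meter readings)
  def pue(total_facility_kwh, it_equipment_kwh):
      """Power Usage Effectiveness = total facility energy / IT equipment energy."""
      return total_facility_kwh / it_equipment_kwh

  total_kwh = 1350.0  # everything the site draws: IT load plus cooling, UPS losses, lighting
  it_kwh = 1000.0     # energy delivered to servers, storage and network equipment

  print(round(pue(total_kwh, it_kwh), 2))      # 1.35
  print(round(pue(total_kwh, it_kwh) - 1, 2))  # 0.35 watt of overhead per IT watt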

CUE: Carbon Usage Effectiveness, also developed by The Green Grid, complements PUE and looks at the carbon emissions associated with operating a data centre. To calculate it, you take the total carbon emissions due to the energy consumption of the data centre and divide them by the energy consumption of the data centre’s servers and IT equipment. The metric is expressed in kilograms of carbon dioxide (kgCO2eq) per kilowatt-hour (kWh), and a data centre powered 100% by clean energy will have a CUE of zero. It provides a useful way of identifying how to improve a data centre’s sustainability and how operators are improving designs and processes over time. LDeX runs on 100% renewable electricity from Scottish Power.

WUE: Water Usage Effectiveness calculates how efficiently a data centre uses water within its facility. WUE is the ratio of annual water usage to the energy consumed by the IT equipment and servers, expressed in litres per kilowatt-hour (L/kWh). Like CUE, the ideal value of WUE is zero, meaning no water is used to operate the data centre. LDeX does not operate chilled-water cooling, meaning we do not use water to run our data centre facility.
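
CUE and WUE follow the same pattern as PUE. A minimal sketch, again with invented figures chosen to mirror the zero-carbon and zero-water examples above:

  # Illustrative CUE and WUE calculations (all figures hypothetical)
  def cue(total_co2_kg, it_equipment_kwh):
      """Carbon Usage Effectiveness in kgCO2eq per kWh of IT energy."""
      return total_co2_kg / it_equipment_kwh

  def wue(annual_water_litres, it_equipment_kwh):
      """Water Usage Effectiveness in litres per kWh of IT energy."""
      return annual_water_litres / it_equipment_kwh

  it_kwh = 1000000.0   # annual IT energy consumption
  site_co2_kg = 0.0    # zero for a site running on 100% renewable electricity
  water_litres = 0.0   # zero where no chilled-water cooling is used

  print(cue(site_co2_kg, it_kwh))   # 0.0 kgCO2eq/kWh
  print(wue(water_litres, it_kwh))  # 0.0 L/kWh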

Power SLAs: A power Service Level Agreement sets out the compensation offered in the unlikely event that power provided by the data centre operator to a client under the agreement is lost and service is interrupted, affecting your company’s business. The last thing your business wants is people being unable to access your company’s website, so if power to your rack is cut for some reason, make sure you have measures in place.

Data centres refer to the Uptime Institute for guidance with regard to downtime standards. The difference between 99.671%, 99.741%, 99.982% and 99.995%, while seemingly nominal, can be significant depending on the application. Whilst no downtime is ideal, the tier system allows the durations below for services to be unavailable within one year (525,600 minutes); the short calculation after the list shows how these figures are derived:

  • Tier 1 (99.671%) status would allow 1729.224 minutes
  • Tier 2 (99.741%) status would allow 1361.304 minutes
  • Tier 3 (99.982%) status would allow 94.608 minutes
  • Tier 4 (99.995%) status would allow 26.28 minutes
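
The durations above follow directly from the availability percentages, as this short calculation reproduces:

  # Reproducing the allowed-downtime figures from the tier availability percentages
  MINUTES_PER_YEAR = 525600

  tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}

  for tier, availability in tiers.items():
      downtime = (1 - availability / 100) * MINUTES_PER_YEAR
      print(f"{tier} ({availability}%): {downtime:.3f} minutes per year")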

LDeX has infrastructure resilience rated at Tier 3 status, offering customers peace of mind in the unlikely event of an outage and therefore protecting your business. We operate closed-control cooling systems in our facilities, enabling us to offer tight environmental parameter SLAs. LDeX operates SLA cold aisle temperature parameters of 23°C ±3°C and relative humidity (RH) of 35% – 60%.

Some data centres run fresh-air cooling systems, which make it hard to regulate RH, and quite often their RH parameters are 20% – 80% or beyond. High humidity in the data hall has on occasion resulted in rust on server components, while low RH can produce static electricity within the data hall. Make sure you look into this and ask about it.
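
As a rough sketch of what those environmental SLA parameters mean in practice (the thresholds come from the LDeX figures quoted above; the sample readings are invented):

  # Illustrative check of a sensor reading against the cold aisle SLA above
  TEMP_TARGET_C, TEMP_TOLERANCE_C = 23.0, 3.0  # 23°C ±3°C
  RH_MIN, RH_MAX = 35.0, 60.0                  # relative humidity 35% - 60%

  def within_sla(temp_c, rh_percent):
      temp_ok = abs(temp_c - TEMP_TARGET_C) <= TEMP_TOLERANCE_C
      rh_ok = RH_MIN <= rh_percent <= RH_MAX
      return temp_ok and rh_ok

  print(within_sla(24.5, 48.0))  # True  - inside both bands
  print(within_sla(21.0, 72.0))  # False - humidity outside the 35% - 60% band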

Understand the ISO standards that matter to your business

ISO 50001 – Energy management

Using energy efficiently helps organisations save money as well as helping to conserve resources and tackle climate change. ISO 50001 supports organisations in all sectors to use energy more efficiently, through the development of an energy management system (EnMS).

ISO 50001:2011 provides a framework of requirements for organizations to:

  • Develop a policy for more efficient use of energy
  • Fix targets and objectives to meet the policy
  • Use data to better understand and make decisions about energy use
  • Measure the results
  • Review how well the policy works, and
  • Continually improve energy management

ISO 27001 – Information Security Management

Keeping your company’s intellectual property secure should be a top priority for your business, and ensuring that your data centre provider offers this sort of resilience is imperative. The ISO 27000 family of standards helps organizations keep information assets secure.

Using this will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.

ISO/IEC 27001 is the best-known standard in the family providing requirements for an information security management system (ISMS). An ISMS is a systematic approach to managing sensitive company information so that it remains secure. It includes people, processes and IT systems by applying a risk management process.

It can help small, medium and large businesses in any sector keep information assets secure.

Like other ISO management system standards, certification to ISO/IEC 27001 is possible but not obligatory. Some organizations choose to implement the standard in order to benefit from the best practice it contains while others decide they also want to get certified to reassure customers and clients that its recommendations have been followed. ISO does not perform certification.

PCI DSS – Banks and businesses alike conduct a lot of transactions over the internet. With this in mind, the PCI Security Standards Council (SSC) developed a set of international security standards to ensure that service providers and merchants protect payment data, whether from a debit, credit or company purchasing card. As of 1st January 2015, PCI DSS 3.0 becomes mandatory. The standard is broken down into 12 requirements ranging from vulnerability assessments to encrypting data. Make sure to ask whether your data centre operator holds this standard.

With the increased stakeholder scrutiny placed on data centres, steps need to be taken to make sure that the data centre operator you are looking to choose is aligning its strategy not only with the metrics and standards mentioned here, but also with the other security, environmental and governmental regulations that have been brought in.

Working for a successful data centre and network services provider like LDeX has enabled me, as a relative newcomer to the data centre industry, to get to grips with these terms and to help clients understand where LDeX sits in comparison with our competitors.

Anne-Marie Lavelle, Group Marketing Executive at LDeX Group

Tags:  connectivity  Cooling  CUE  data centre  Datacentre  efficiency  ISO standards  operational best practice  PCI DSS  PUE  WUE


The LDeX guide to streaming satellite feeds over IP

Posted By Anne-Marie Lavelle, London Data Exchange, 27 March 2014
Updated: 27 March 2014

Since launching our first data centre company, UK Grid, back in 2004, the management team behind LDeX has been involved with colocation clients that stream satellite feeds over IP. Although the Manchester-based company was acquired in 2011, the management team went on to create LDeX, bringing with them a clear vision of maintaining and enhancing the relationships formed with media clients looking to stream their services using our facilities and low-latency connectivity.

In the early days, the system was based entirely on coax cabling and daisy-chained multiswitches, which were quite difficult to manage and scale. Since then, the team at LDeX1, our data centre facility in London, has developed a streaming platform that addresses those early pitfalls. The platform is working very well, and we are attracting a global client base into LDeX thanks to our experience and knowledge of this infrastructure. Below, I discuss the elements that make our platform unique and attractive to the streaming world.

Optical Distribution for scale

Our satellite system uses optical LNBs, meaning the LNB is connected into our satellite distribution racks via a simplex fibre optic patch. The LNB takes the four polarities, stacks them together and converts them to light, rather like a network trunk carrying multiple VLANs. The fibre patch is ruggedised, as part of it is exposed to the outside elements. It then connects into an optical splitter, which can split the light up to 32 times without amplification. From this point, the light from the splitter is converted back into an electrical signal via another fibre patch using a Quatro convertor.

On the output side of the convertor, we can split the signal again, and from this point we use coax to connect into 32-way multiswitches. We can then hand off feeds to our clients via coax cable running through our cable management system. Each individual coax feed carries the entire frequency range of the given satellite. The system is highly scalable: using a single LNB, it enables us to deliver 2048 satellite feeds to clients. More than this could be possible – I’ll let you know when we get there.
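
As a back-of-the-envelope check on that 2048 figure, here is one way the numbers could stack up; the exact split factors are my assumption based on the description above, not a published LDeX design:

  # Back-of-the-envelope feed count from a single LNB; the split factors are an
  # assumed breakdown inferred from the description above, not a published design
  optical_splits = 32       # optical splitter: up to 32 ways without amplification
  electrical_splits = 2     # assumed 2-way split on the output of each convertor
  multiswitch_outputs = 32  # 32-way multiswitches feeding client coax

  print(optical_splits * electrical_splits * multiswitch_outputs)  # 2048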

All of this means we can hand off a single coax to a client; we can hand off the four bands via four coax feeds so they can install their own multiswitches; or we can simply hand off an optical feed and they can build their own distribution system. There is also the option for a client to site their own satellite dish within our compound on our gantry systems. We can go as far as offering N+1 satellite reception, with two satellite dishes sited in different locations connecting into a common multiswitch.

Low Latency Bandwidth for TV

Once our clients have converted their feeds into IP, they need a low-latency, multi-homed network to get the content to users. We provide this on our network platform by peering with the largest IP networks in the world and at the largest Internet Exchange Points in Europe. This means we can connect our clients directly with the major ISPs globally, removing network hops and latency and giving a consistent, fast, direct connection. The network also needs to be resilient, so we have built a dual-core network architecture especially for streaming, with two major network POPs in LDeX1 and Telehouse North and the capability to burst up to 10 Gbps.

Hand holding

Our support team here at LDeX1 has extensive knowledge of the physical infrastructure required to deliver satellite feeds, and we also have the testing equipment on site to troubleshoot any issues with signal strength. We can deliver a satellite feed to a rack in LDeX1 on the same day it is requested. The majority of our overseas clients ship equipment to us and we install it, configure it and cable it for them using our in-house team.

We also know that some of our clients don’t necessarily understand IP networks, BGP, routing and so on – they don’t need to. Our experienced team of network engineers does, and we can provide the full network wrap-around for any client; our network team is qualified and highly experienced in Cisco and Juniper, from CCNA up to CCIE.

LDeX is also developing a media ecosystem within LDeX1, which we have named ‘the London Media Platform’. Our clients are already making full use of the platform by doing business with each other. This ecosystem enables our clients to offer additional services to each other within the confines of LDeX.

To give an example of how it works, we have a media client in LDeX1 that requires channels from a satellite which can’t be picked up in the UK. This client has been able to liaise with another of our partners on the platform, which operates within the geographical area of the required satellite footprint. That partner streams the satellite back into LDeX and on to the media client, who then streams the channels to their end users.

So in summary, our streaming platform is comprised of three core elements.

1. A scalable distribution system
2. A scalable and resilient low latency network
3. A highly skilled team of Data Centre technicians and Network engineers.

Hopefully you can see how LDeX is rapidly becoming the media streaming platform of choice for streaming customers – we are certainly cooking up a stream!

Patrick Doyle, Operations Director and Co-owner at LDeX Group


Tags:  connectivity  Data Centre  IP  latency  optical distribution for scale  satellite systems
