It’s no secret that bandwidth demand is skyrocketing as our world becomes ever more connected and data-intensive applications such as artificial intelligence mature. In response, the data center industry has had to adapt and grow. Today’s data centers must be capable of storing and managing massive amounts of information – while remaining reliable and flexible enough to support customer needs 24/7 – and they must be up and running as quickly as possible.
The data centers we see today range in scale from small edge and private enterprise facilities to multitenant sites and huge hyperscale campuses. Data center interconnect has emerged as an important and fast-growing application segment supporting these large-scale deployments – which can often require 10,000 or more fibers connecting large data halls across a campus.
The traditional approach in the outside plant environment is to splice these fibers. At counts of 1728, 3456 – or even higher – it can take weeks to complete the necessary preparation, connect that volume of fiber and bring it inside the building. We’ve also seen a shift inside the data center to higher-speed, higher-density network infrastructure, moving from a three-tier architecture to leaf-spine networks that support increased east-west traffic.
Compounding this challenge, the industry across Europe is also experiencing a shortage of skilled workers. Even with a skilled labor force, high splice counts are demanding work – it’s all too easy to misalign a splice or to splice the wrong ribbon.
When it comes to large-scale data center projects, time is money – and operators know they can ill afford to let delays in network installation stand in the way of meeting ever-increasing bandwidth demands. How can operators solve this?