Cindy Ryborz, Marketing Manager DC EMEA, Corning Optical Communications
Data Centre Dilemmas: What Hurdles are Operators Facing in 2024?
To no one’s surprise, in 2023 data consumption took another significant stride forward. According to research from JLL, focused on EMEA, the first half of the year saw the most data centre uptake on record across the tier-one European markets of Frankfurt, London, Amsterdam, Paris and Dublin – a jump of 65% compared to the same point in 2022.
In the MENA region, a recent Knight Frank report notes a combined uplift of 20.7% in live IT capacity, primarily driven by demand from cloud deployments. In Europe, a similar Knight Frank report notes growth in emerging markets like Bucharest, Istanbul and Manchester. The latter destination's growth is in part driven by the likes of long-time Corning customer KAO Data, which last year announced plans to open a £350M advanced data centre in the region.
In many ways the drivers for this sharp rise in data centre demand are the same as they have been for the last decade: a need for more and more bandwidth as new, data-intensive technologies and applications mature and are adopted more widely.
In recent years, a few factors in particular have sent this data consumption into overdrive. First, the pandemic drove a surge in applications such as streaming services and virtual conferencing. Second, bandwidth-hungry technologies like machine and deep learning are being adopted more widely, and now a breakout year for artificial intelligence (AI) looks set to take this to even greater heights.
Building new data centres to meet this demand is costly, and factors such as local planning permissions and power availability can add further hurdles to the process. While colocation data centres do provide something of a middle ground for securing more capacity, for many data centre operators and businesses the best option is to find ways to upgrade and “future-proof” existing resources – but how?
The drive for more density
While 400G Ethernet optical transceivers are used predominantly in hyperscale data centres, and many enterprise businesses are currently operating on 40G or 100G, data centre connectivity is already moving towards 800G and beyond. We can expect this to accelerate as IoT and artificial intelligence really take off in the enterprise.
As a result, there’s a growing list of considerations when it comes to upgrading infrastructure, not least of which is whether to base it on a Base-8 or Base-16 solution – the former is currently more flexible, while the latter offers greater port density and an alternative path to 1.6T. Seeking cabling solutions that can handle the extensive GPU clusters needed to support generative AI – whether they comprise 16K or 24K GPUs – will also be key for some operators.
Perhaps the most universal consideration for data centre operators in 2024, however, is how to maximise space. Requirements for increasing bandwidth in network expansions often conflict with a lack of space for additional racks and frames, and simply adding more fibre optic interconnects is an unsustainable strategy given land and power constraints.
Usefully, the latest network switches used to interconnect AI servers in a data centre are well equipped to support 800G interconnects. Often, the transceiver ports on these switches operate in breakout mode, where the 800G circuit is broken into two 400G or multiple 100G circuits. This enables data centre operators to increase the connectivity capability of the switch and interconnect more servers.
This technology is continuing to advance. Take a typical multi-mode 400G SR8 optical transceiver, for example. It has 16 fibre connections and is a good choice for short-reach applications. However, advancements in optical technology allow more data to be carried per fibre and wavelength: 400G SR4 optical transceivers are entering the market that cut the number of fibres to eight. These and other new optical transceivers go a long way toward helping data centres meet the rising demand for data.
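The arithmetic behind breakout mode and fibre counts can be sketched in a few lines. This is an illustrative calculation only, assuming the generic figures above (an 800G port, and parallel optics using one fibre per direction per lane); it is not a specification for any particular product.

```python
# Illustrative sketch of breakout-mode and fibre-count arithmetic.
# Figures are generic examples from the text, not vendor specifications.

def breakout_options(port_gbps: int, lane_speeds: list[int]) -> dict[int, int]:
    """Map each lane speed that divides the port rate evenly to the number
    of breakout circuits it yields (e.g. an 800G port -> 2x400G or 8x100G)."""
    return {speed: port_gbps // speed
            for speed in lane_speeds if port_gbps % speed == 0}

def fibre_count(lanes: int) -> int:
    """Parallel optics use one transmit and one receive fibre per lane,
    so an 8-lane SR8 needs 16 fibres, while a 4-lane SR4 needs only 8."""
    return lanes * 2

print(breakout_options(800, [400, 200, 100]))  # {400: 2, 200: 4, 100: 8}
print(fibre_count(8))  # 16 fibres for an SR8-style transceiver
print(fibre_count(4))  # 8 fibres for an SR4-style transceiver
```

This makes the density gain concrete: halving the lane count per transceiver halves the fibres pulled through trays and frames for the same 400G of capacity.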
Cooling will be key
In addition to its massive bandwidth demands, AI also creates an even greater need for power and cooling efficiency in the data centre. For an industry that’s already notoriously energy-hungry – and with many businesses now committed to ambitious sustainability targets – this is a growing challenge. Some experts are even predicting a global water crisis as a result of “thirsty” applications like ChatGPT.
For those with the resources, a clever choice of location can be one solution to cooling challenges – Meta (Facebook) even has multiple data centres in Luleå, in northern Sweden, which utilise the region’s sub-zero air and sea temperatures.
There are of course a number of smaller, more accessible approaches data centre operators can take to support cooling – smart cabling choices, for example. Given the huge demands of AI, however, it’s likely that these incremental changes will barely scratch the surface.
Set to make a greater impact are a variety of cooling techniques being adopted in the market today – including air cooling, which utilises raised floor plenums and overhead ducts to distribute cool air to equipment. In-row cooling, in which multiple cooling units are placed directly in a row of server racks or above the cold aisles, is an efficient approach for these systems.
More emerging techniques include liquid immersion cooling, which involves submerging IT equipment (directly to the chip in some cases) in a dielectric fluid – avoiding the risk of consuming too much water. This method provides efficient cooling directly to components, minimising the need for air circulation. This type of cooling will, however, bring additional challenges when it comes to connectivity components, which will need to be resilient to the coolant.
Applications at the edge
2024 will see many companies build networks to support the development of their own large language models (LLMs). This requires the development of new inference networks where predictions are made by analysing new data sets. These networks can require higher throughput and lower latency and many operators will be looking to expand their infrastructure to support edge computing capabilities, bringing computation closer to the source of data.
Beyond this specific use case, edge computing is particularly valuable in scenarios where local analytics and rapid response times are needed, such as in a manufacturing environment that relies on AI, and also helps reduce networking costs. Looking forward, 5G will play a major role in maximising the capabilities of edge data centres, ensuring the incredibly low latency that is needed for the most demanding applications and use cases.
Colocation providers and hyperscalers are working together to enable edge computing, delivering services that will support the required response times. Certainly, colocation will be key, as these data centres can be positioned closer to users and offer adaptive infrastructure that provides much-needed flexibility and scalability in the face of unexpected events. It also alleviates the need for skilled labour on the end-user’s side.
Configuring and optimising these edge data centres, again, means a drive for ever greater fibre density, as well as modularity to allow for easier moves, adds and changes as data requirements grow.
The road ahead
For enterprises that are looking to deploy or grow their AI capabilities, there will be a number of key decisions to make within the next 12 months. Much like the initial transition to the cloud, one of the primary considerations will be what proportion of their AI workload will be managed on-premise and what will be offloaded to an external cloud environment.
Regardless of these choices, for the wider data centre industry, there’s a lot of work to be done to build and maintain resilient infrastructure that can support AI and other technologies not even conceived of yet.
These developments will continue to outpace data centre capacity, and at an even faster rate as AI becomes more widely adopted. The priority for data centre operators – of all shapes and sizes – in 2024 will be to make the necessary adaptations to their infrastructure to stay agile and ready for whatever the future brings.
Let’s connect. Our experts are here to support you every step of the way.