Introduction
A quote commonly found on the internet reads, “Knowledge is knowing that a tomato is a fruit. Wisdom is not putting it in a fruit salad.” Machine Learning (ML) would lead to knowing that a tomato is a fruit, while Artificial Intelligence (AI) would suggest not putting it in the fruit salad. Jokes aside, ML would eventually learn that lesson too. There is far more to AI and ML than meets the eye, from language translation to more accurate diagnosis of complex diseases. To give you an idea of how much computing power AI and ML need: in 2017, training one of Baidu’s Chinese speech recognition models required not only four terabytes of training data, but also 20 exaflops of compute, or roughly 20 billion billion math operations, across the entire training cycle.
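To put that compute figure in perspective, the rough sketch below converts it into wall-clock time. The sustained throughput of 100 teraflops per device is an illustrative assumption, not a measured figure from the Baidu work.

```python
# Back-of-the-envelope: what does 20 exaflops of training compute mean?
# Assumption (illustrative only): a hypothetical accelerator sustaining
# 100 teraflops (1e14 math operations per second) on this workload.

TOTAL_OPS = 20e18             # 20 exaflops = 20 billion billion operations
SUSTAINED_OPS_PER_S = 100e12  # assumed sustained throughput of one device

seconds = TOTAL_OPS / SUSTAINED_OPS_PER_S
days = seconds / 86_400

print(f"{seconds:,.0f} seconds, or about {days:.1f} days, on a single device")
# Roughly 200,000 seconds (about 2.3 days) of pure arithmetic, ignoring
# I/O and memory bottlenecks, which is why training is spread across many
# accelerators in a data center.
```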
The balance that providers need to strike with AI and ML is delivering the highest quality of service at the lowest cost. Quality of service comes from reducing latency and handling the bandwidth demands of future applications. Latency can be reduced by shortening the physical distance that data must travel between the device and the processor, as the rough calculation below illustrates. Overall, these latency demands are driving smaller data centers closer to where data is created and consumed, which optimizes both transmission costs and quality of service.

The other half of the balance is cost. In the past, architectures increased costs with the amount of data moved and the distance, or number of “hops,” it traveled. AI and ML have dramatically increased the amount of data being transferred, which results in greater transport costs. Edge data centers are increasingly the answer because of their proximity to where the data is created, and multitenant data centers (MTDCs) are where a good portion of the edge will be housed. MTDCs offer the lowest-risk way to deploy a local data center while also providing the fastest speed to revenue.
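To make the distance argument concrete, here is a minimal sketch of propagation delay alone, assuming signals travel through fiber at roughly 200,000 km/s; the distances are hypothetical and all switching and processing overheads are ignored.

```python
# Rough illustration of why physical distance drives latency.
# Assumption: signals in optical fiber travel at roughly 200,000 km/s
# (about two-thirds of the speed of light in a vacuum). The distances
# are hypothetical; switching, queuing and processing delays are ignored.

SPEED_IN_FIBER_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000

for distance_km in (1_000, 100, 10):  # regional, metro, edge
    print(f"{distance_km:>5} km away -> ~{round_trip_ms(distance_km):.2f} ms round trip")
# A data center 1,000 km away adds ~10 ms of propagation delay alone;
# an edge site 10 km away adds ~0.10 ms, before any processing is done.
```

Even this idealized figure shows why moving compute from a distant regional facility to a nearby edge site lowers the latency floor by roughly two orders of magnitude.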