For an industry so obsessed with decoupling, layering and software-ization, it seems ironic that many of these conversations end up at the very bottom of the stack: the silicon. Yes, the cloud is changing everything, but the speed of that impact is tied, ultimately and directly, back to network capabilities. So in telecom, how should we calibrate the dependency between software and hardware? Where exactly do we draw the line? And how do we justify that decision?
On the whole, telecom has been fairly resistant to the idea that software can replace sophisticated, dedicated hardware – or has, at best, accepted it only grudgingly. Software is what engineers let their IT colleagues PoC away at, while they get on with the – as it were – heavy lifting of telecom.
But a wary truce is breaking out. Software-ization is an inexorable trend, for two reasons. First, software can unlock the capabilities of hardware/silicon. But even more fundamentally, some tasks are still most efficiently handled in silicon. The industry is starting to accept that it can be OK to retain targeted islands of hardware where hardware is simply far better at handling the load. And that at the silicon level, an understanding of what else is going on higher up the stack might be a really useful (and efficient) way to do something that's both fast and smart.
"Service-Aware Silicon" is a good summary of this development, and one niche offers a tangible example. It's a position reflected in the launch of Nokia's new FP5 chip, which we analyse in our latest research report.
Our recommendation is that CSPs and other buyers of network kit think broadly about their choice of silicon – beyond simple throughput metrics. Sure, the world is moving ever faster, but speed isn't the only game in telecom today.