x86, ASICs, and Relativity
“Someone pointed out I had two different-colored socks on. I said, ‘To me they’re the same; I go by thickness.’”
– Steven Wright
F5’s Lori MacVittie made an interesting post recently arguing for the necessity of ASICs (as opposed to x86) in networking infrastructure. In short, she argues that latency is a bad thing in networks and that eliminating latency requires ASICs.
Below are some relevant thoughts from our CTO, Robert Bays. But first, I’m really glad Lori made this post, because it pulls forward one of the most critical things to understand when a technology market goes through an evolution: the distinction between technology absolutes and relative customer requirements.
1. x86 and RELATIVE performance
Lori is definitely correct that there is always room at the peak of performance requirements for specialized hardware. This is true in compute, storage, and networking. If you’re pushing the limits, off-the-shelf components may not work. But that’s not where the majority of customer demand lies. Example: in 2011 alone, nearly $8B will be spent on traditional midrange secure routing products. And thanks to Intel’s recent advancements, x86 cuts through those products like a hot knife through butter. And that Intel train isn’t stopping anytime soon. This recently published white paper projects the speed of Vyatta-on-Intel to increase 10X with the next iteration of Xeon, enabled in part by Intel-sponsored software tools. With this much horsepower available, x86 will continue to be able to address the majority of the RELATIVE market requirements.
2. Network virtual machines and RELATIVE performance
The value of network VMs comes from their immense operational flexibility and from the fact that they perform roles for smaller subsets of a multi-tenant datacenter. Need a datacenter network fabric that runs at 40Gb/s? Today, buy specialized hardware. Need to connect and secure various application VMs within a group of servers at speeds that meet their independent requirements? Spin up network VMs; they’re more than fast enough.
‘Nuff said from me. Here’s the view from Robert:
There are definitely use cases where ASICs, or similar, are called for. High-port-density, low-complexity devices where the underlying protocols are well defined are a good fit; traditional switches are an excellent example. I don’t expect standard x86 to be able to compete in that market anytime soon. However, ASICs are not a panacea. Setting aside for a moment the high development, maintenance, and support costs associated with ASICs, one can’t expect purpose-built silicon to handle the increasingly complex requirements in the forwarding plane of today’s converged devices, especially not without an expensive respin of the hardware. In more and more modern network appliances, at least a portion of the forwarding pipeline is being pulled into a general-purpose computing environment.
Fortunately, variability in packet latency is not due to general-purpose hardware per se (in this case, x86) but to the software architecture running on it. Eliminating multiple layers of schedulers and locks goes a long way toward creating a deterministic forwarding path and therefore reducing or eliminating system-induced jitter.
Intel has made great advances in treating a generic x86 core as if it were a task-specific network processor. At IDF they announced the Data Plane Development Kit, which acts like a Multi-Core Executive Environment and provides guaranteed latency at an order of magnitude greater throughput than existing software architectures. This, combined with further advances in Intel’s hardware architecture, is proving that x86 is competitive with purpose-built network processors on performance, latency, and cost, even when scaling to 40G+. Products based on this technology are in development now.
For products shipping today, the jitter induced by any one well-behaved software stack on a general-purpose CPU is usually negligible compared to the variability and latency of a packet’s entire path through the Internet. Admittedly, there are high-traffic-load scenarios where software on a general-purpose CPU creates a bottleneck. But similar bottlenecks exist at many points along the path, not just in software network stacks. Fortunately for everyone, the Internet usually continues to work transparently, packet drops, jitter, and all. End users first need to ask whether their traffic load will ever approach the limits imposed by existing software-stack-based products. If not, then the cost and reduced complexity of a software solution may work well for them. There are numerous use cases where this proves to be true. Software-based networking is a single tool in a very big toolkit.
Looking forward, requirements are increasingly forcing networking functions into the purely software environment of the hypervisor and cloud. The organization that develops a software network stack that is reliable, consistent, and full-featured in spite of the limitations imposed by that environment will define the networks of tomorrow.