A few short years ago, the buzz around Infiniband for low-latency algorithmic trading was deafening. If you absolutely needed the lowest latency possible, and were willing to write custom adapters to specialized APIs, Infiniband’s eye-popping latency numbers were very appealing compared with software running over Gigabit Ethernet.

Sure, it was expensive and proprietary, and you had only a few vendors to choose from, but if you had to move bits and bytes from place to place as fast as possible in 2006, Infiniband was the bomb. Every serious performance-minded algorithmic trading architect was talking about it. (Although, even then, only a few went beyond playing with it in their labs…)

Today, though, Ethernet is on a serious roll and poor Infiniband is under assault from all sides. In the past year, momentum has been growing in the following areas:

  • 10 GigE equipment is available and abundant.
  • Cut-through switches are pushing Ethernet switching latencies below a microsecond, into the nanoseconds.
  • Efforts at QoS over Ethernet are improving reliability and reducing jitter.
  • A wealth of TCP offload engines (TOEs) has emerged to handle network processing in hardware, similar to what a host channel adapter does for Infiniband but at lower cost (see the sketch after this list).
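
To make that last point concrete, here is a minimal sketch of an ordinary POSIX TCP sender with Nagle’s algorithm turned off. It is an illustration of mine, not from the original post, and the host feed.example.com and port 9000 are placeholders. The application is written against plain sockets, so whatever checksum, segmentation or TCP processing the Ethernet NIC offloads happens transparently beneath this code; an Infiniband HCA, by contrast, has to be programmed through its own API.

```c
/*
 * Illustrative sketch only (not from the original post): a plain POSIX TCP
 * sender with Nagle's algorithm disabled via TCP_NODELAY. The application
 * sees nothing but standard sockets; any protocol offload the Ethernet NIC
 * performs happens underneath this code.
 * "feed.example.com" and port "9000" are placeholders.
 */
#include <netdb.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo("feed.example.com", "9000", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("socket/connect");
        freeaddrinfo(res);
        return 1;
    }

    /* Push small messages onto the wire immediately instead of batching. */
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

    const char order[] = "BUY 100 XYZ @ 42.17";
    send(fd, order, sizeof(order) - 1, 0);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```

On Linux this builds with a plain C compiler; no special libraries or drivers are involved at the application level.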

Thanks to these increasingly affordable technologies, you can now get performance approaching Infiniband’s out of Ethernet. That means you don’t have to deal with the considerable downsides of Infiniband, such as:

  • It’s a separate, parallel network just for low-latency HPC, so if you want to share data between anything on your TCP network and your Infiniband applications, you have to bridge the gap between the networks.
  • Having two distinct networks built on different technologies means you have two systems to manage, two teams to staff up and keep trained, etc.
  • There’s very little commercial software support for Infiniband, and most enterprises don’t have the expertise to write to Infiniband APIs and tune their own drivers down at the microsecond level (see the sketch after this list).
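
To give a flavor of what “writing to Infiniband APIs” means in practice, here is a minimal sketch, assuming the open-source libibverbs library, of just the opening move: enumerating devices and opening a host channel adapter. Everything that follows in a real application, such as protection domains, memory registration and queue pairs, is similarly low-level plumbing.

```c
/*
 * Illustrative sketch only (not from the original post): the very first
 * step of coding against the libibverbs API, enumerating Infiniband
 * devices and opening a host channel adapter. A real application would
 * still have to allocate a protection domain, register memory regions,
 * create completion queues and queue pairs, and exchange addressing
 * information out of band before it could move a single byte.
 * Link with -libverbs.
 */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (devices == NULL || num_devices == 0) {
        fprintf(stderr, "no Infiniband devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devices[0]);
    if (ctx == NULL) {
        fprintf(stderr, "failed to open %s\n",
                ibv_get_device_name(devices[0]));
        ibv_free_device_list(devices);
        return 1;
    }

    printf("opened HCA %s\n", ibv_get_device_name(devices[0]));

    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}
```

Compare that with the sockets sketch above, where the operating system and the NIC take care of all of this on the application’s behalf.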

If that’s not enough, when you take a step back from low-latency financial use cases, Data Center Ethernet is emerging with the right message for the times: “Get rid of the capex and complexity of your parallel LAN, SAN and HPC networks and converge everything onto a single uber-fast Ethernet-based interconnect.”

Finally, there’s around 1,000 times as much money being poured into Ethernet innovation as there is being spent advancing Infiniband, and obviously a much larger set of solution providers. With that many providers and that much innovation, Ethernet will only get faster and more affordable over time.

Game, set, match.

That’s why even the performance-minded architects who would soapbox for Infiniband two years ago now bring it up only to ask whether you agree that Ethernet has made Infiniband irrelevant.

Larry Neumann

From 2005 to 2017, Mr. Neumann was responsible for all aspects of Solace’s strategic, corporate, product and vertical marketing. Before Solace, he held executive marketing positions with TIBCO and Oracle, and co-founded an internet software company called inCommon, which was acquired by TIBCO. During his tenure at TIBCO, Mr. Neumann played a key role in planning the company’s strategic direction relating to target markets and candidate acquisitions.