Over the past few decades we’ve seen network technology evolve from 10 Mbit Ethernet to 10 Gbit, with 40 GigE on the horizon and 100 GigE already being discussed. Is the network outpacing the capabilities of the computer? In many ways, yes.

One of the dirty little secrets of today’s operating systems is that even in multi-CPU hosts, all the interrupts from the network have to be processed on a single CPU. This means a single 1 GigE card can overwhelm even the fastest commercial processors available today. So what will 10x the throughput do for your servers if the CPU/OS is the bottleneck? In many cases, the answer is: not very much. I think many administrators and architects may be prematurely falling in love with 10 GigE when it won’t really solve as many of their problems as they hope.

Don’t get me wrong, there are lots of applications and environments out there that will benefit greatly from 10 GigE, and a whole slew of cool new applications, services, and capabilities that it will make possible. I just think it’s important that people recognize that 10 GigE isn’t a silver bullet that will open up a new world of performance in every situation.

10 GigE and Software-based Messaging

Many 10 GigE network cards are more than simple NICs; they are actually TCP Offload Engines (TOEs). TOEs don’t just perform low-level network functions, they do all TCP/IP processing, taking it over from the OS. In some cases, this reduces interrupts and context switches on the CPU. Unfortunately, most TOE functions are geared toward web server workloads, where lots and lots of connections are negotiated as users connect, request a page, and immediately disconnect. If you are doing messaging through your server and clients are continuously connected, many of the standard TOE features have little to no benefit.

Both NICs and TOEs support interrupt coalescence, which reduces the number of interrupts delivered to the server CPU and improves the performance of the server. This improves aggregate throughput and lets you push past the ceiling the host CPU imposes at around 1 GigE. Unfortunately, interrupt coalescence introduces latency, which in messaging scenarios is not cool.
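Some back-of-the-envelope arithmetic makes the trade-off concrete. The numbers below are illustrative assumptions, not measurements from any particular NIC: full-size 1500-byte Ethernet frames, 38 bytes of per-frame overhead (preamble, header, CRC, inter-frame gap), one interrupt per packet without coalescing, and a coalescing factor of 32.

```python
# Rough interrupt-load arithmetic for a 1 GigE link (illustrative assumptions,
# not vendor data): 1500-byte frames, 38 bytes of framing overhead per frame.

LINK_BPS = 1_000_000_000          # 1 GigE line rate
FRAME_BITS = (1500 + 38) * 8      # payload + preamble/header/CRC/inter-frame gap

packets_per_sec = LINK_BPS / FRAME_BITS
print(f"~{packets_per_sec:,.0f} packets/s at line rate")     # roughly 81,000

# Uncoalesced: one interrupt per packet lands on a single CPU.
# Coalescing to one interrupt per 32 packets cuts the interrupt rate 32x...
coalesced = packets_per_sec / 32
print(f"~{coalesced:,.0f} interrupts/s with 32-packet coalescence")

# ...but the first packet in each batch can sit waiting for the batch to fill,
# so the worst-case added latency at line rate is roughly the batch fill time.
# (Real NICs also bound this wait with a coalescing timer.)
batch_fill_us = 32 * FRAME_BITS / LINK_BPS * 1e6
print(f"worst-case added latency ~{batch_fill_us:.0f} microseconds")
```

The batch fill time only shrinks as traffic approaches line rate; at lower message rates the coalescing timer dominates, and every microsecond of it is latency a messaging application pays for directly.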

Busses and Brokers

20 years ago, Scott McNealy said “The network is the computer.” He was ahead of his time, but distributed computing is now the norm, thanks to messaging. There are basically two ways to do messaging: through a broker via TCP, or over a bus with UDP multicast. In either model, 10 GigE presents some problems.

Obviously, not all messaging participants will migrate to 10 GigE right away, and adding 10 GigE to only some of the consumers or producers in a multicast environment will increase the frequency and severity of multicast storms caused by speed mismatches.

You’d think that in broker-based messaging systems, adding 10 GigE to the broker would allow more aggregate throughput and therefore more clients per broker, which could reduce the number of servers required. But the gating factors for software-based message brokers are the OS, disk speed, and memory access performance. It’s because of these limitations that message brokers have trouble exceeding 50,000 messages per second with 1 KB messages (a load easily handled by 1 GigE). With larger messages the message rate goes down, but the bit rate on the wire increases. In this case 10 GigE will show some improvement, since with larger messages and limited consumer fan-out, the outbound bit rate for a good software message broker is below 3 Gbit/s.

The aforementioned numbers are for reliable messaging. When you’re talking about guaranteed messaging with failsafe storage, the best software-based message brokers can only handle about 5,000 1 KB messages per second. Again, the bottleneck is the operating system and disk performance, not the 1 GigE network. With larger messages, 1 MByte for example, performance drops to around 20 messages per second, still well within the realm of 1 GigE.
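The wire-rate arithmetic behind both sets of numbers is easy to check. This sketch counts payload bits only, ignoring protocol framing and consumer fan-out, which only strengthens the point: the broker, not the 1 GigE link, is the bottleneck.

```python
# Payload-only wire rates for the broker scenarios above
# (ignores protocol framing and consumer fan-out).

def gbps(msgs_per_sec: int, msg_bytes: int) -> float:
    """Payload bit rate in Gbit/s for a given message rate and size."""
    return msgs_per_sec * msg_bytes * 8 / 1e9

# Reliable messaging: 50,000 msgs/s at 1 KB
print(f"reliable, 1 KB:   {gbps(50_000, 1024):.2f} Gbit/s")   # ~0.41, well inside 1 GigE

# Guaranteed messaging: 5,000 msgs/s at 1 KB
print(f"guaranteed, 1 KB: {gbps(5_000, 1024):.3f} Gbit/s")    # ~0.041

# Guaranteed messaging, large messages: 20 msgs/s at 1 MB
print(f"guaranteed, 1 MB: {gbps(20, 1024 * 1024):.3f} Gbit/s")  # ~0.168
```

Even the busiest case uses less than half of a 1 GigE pipe, so a 10 GigE upgrade leaves the software broker exactly where it was.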

How can 10 GigE help a hardware-based messaging system?

A truly hardware-based message broker, like Solace’s, has no operating system or CPU in the data path, so all those concerns about interrupts and throughput through the OS are a complete non-factor. In fact, Solace’s 10 GigE interface processes all seven layers of the OSI model in hardware, not just the lower four like a TOE. By performing all message processing in hardware, we can route 5,000,000 messages a second and take full advantage of 10 GigE bandwidth.

The 10 GigE interfaces in the Solace router also come in a dual-interface configuration to allow high availability. The hardware architecture allows full real-time monitoring of statistics and threshold events, even at 10 GigE wire rates, without affecting message throughput or latency in the slightest. With a software-based router, monitoring the connections decreases performance, and many statistics-gathering features are disabled in production to improve performance. Messaging clients benefit, too: the Solace router uses VRRP between the 10 GigE interfaces in the HA router pair, so on fail-over to a different router the client does not need to do anything; it is all transparent.


While 10 GigE will help certain client applications, it generally won’t help much when you’re talking about messaging servers. It will help on servers where latency reduction is a key requirement, such as in HPC or ultra-low latency algorithmic trading applications. For hardware-based messaging brokers, which do not suffer from the same limitations as software, it is a different story with a much happier ending.

Larry Neumann

From 2005 to 2017, Mr. Neumann was responsible for all aspects of strategic, corporate, product, and vertical marketing. Before Solace, he held executive marketing positions with TIBCO and Oracle, and co-founded an internet software company called inCommon, which was acquired by TIBCO. During his tenure at TIBCO, Mr. Neumann played a key role in planning the company’s strategic direction relating to target markets and candidate acquisitions.