In November 2008, we released a set of benchmarks for ultra-low latency market data distribution. The tests, run on an all-hardware architecture consisting of 1 GigE technology from Solace, Arista and NetEffect, showed the fastest real-world messaging benchmark performance ever seen — until today.
In conjunction with the launch of the new 10 GigE version of our Network Acceleration Blade, we repeated these tests in a 10 GigE environment. In addition to a 10 GigE-equipped Solace 3260 Content Router, we used NetEffect 10 GigE adapters to perform TCP offload in the clients and servers, with an Arista 10 GigE cut-through switch as the layer 2 switch.
To provide an apples-to-apples comparison with the previous benchmarks, we reran the test that simulated all US equities traffic (500K msgs/sec) and the test that simulated peak OPRA market data (1 million msgs/sec). Then we added a 2M msgs/sec test to the mix to show how performance changes as data rates continue to climb.
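If you want to approximate this kind of load in your own environment, the sketch below shows one way to pace a publisher at a fixed message rate. It is illustrative only, not our actual test harness: the class name, the target rate constant and the placeholder send() hook are all assumptions standing in for whatever messaging API you use.

```java
import java.util.concurrent.TimeUnit;

/**
 * Minimal sketch of a rate-paced load generator. The send() hook is a
 * placeholder for a real messaging API call; this is not the Solace
 * benchmark harness, just an illustration of pacing at a target rate.
 */
public class PacedPublisher {

    static final int TARGET_MSGS_PER_SEC = 500_000; // e.g. simulated US equities load
    static final int PAYLOAD_BYTES = 100;            // realistic market data message size

    public static void main(String[] args) {
        byte[] payload = new byte[PAYLOAD_BYTES];
        long intervalNanos = TimeUnit.SECONDS.toNanos(1) / TARGET_MSGS_PER_SEC;
        long next = System.nanoTime();

        while (true) {
            // Busy-wait until the next send slot; sleep() is far too coarse at these rates.
            while (System.nanoTime() < next) {
                Thread.onSpinWait();
            }
            send(payload);
            next += intervalNanos;
        }
    }

    // Placeholder for the real publish call to the content router.
    static void send(byte[] payload) {
        // publish payload here
    }
}
```

At 2M msgs/sec the send interval is only 500 ns, so a single-threaded loop like this can only approximate the target; real load generators typically spread the rate across multiple publisher threads or hosts.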
Once again, the performance was superb, and the latency distribution nice and tight. The system demonstrated latency 20-25% lower than the previous 1 GigE numbers, improving upon our already best-in-class performance. Here’s a summary of the results:
Check out complete details about the performance results, hardware configuration and test parameters.
On Transparency and Tuning
Speaking of complete details about the tests, when we published our benchmarks in November we received a lot of positive feedback for publishing all of the significant details rather than cherry-picking the best bits of data. We’ve always been committed to the idea that to have any value at all, benchmarks must reflect real-world conditions and be reproducible by customers. Frankly, we think it’s dishonest to publish performance numbers based on tests that use tricks like message packing, throwing away a large chunk of the outliers, and using unrealistic message sizes.
That’s why when we run a market data distribution test like this, we don’t use message packing, we publish our 99.9th percentile latency numbers, and we use message sizes very much like those seen in real-world market data distribution environments — 100-byte messages with 12-byte subjects.
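To make the percentile reporting concrete, here is a minimal sketch of how a 99.9th percentile can be computed from recorded one-way latency samples. The sample values and class are hypothetical; a real run would record millions of measurements.

```java
import java.util.Arrays;

/**
 * Minimal sketch of percentile reporting over recorded latency samples.
 * Illustrative only; this is not the actual benchmark tooling.
 */
public class LatencyPercentiles {

    /** Returns the latency at or below which the given fraction of samples fall. */
    static long percentile(long[] latenciesNanos, double fraction) {
        long[] sorted = latenciesNanos.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(fraction * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        // Hypothetical samples in nanoseconds.
        long[] samples = {41_000, 45_000, 39_000, 52_000, 47_000, 120_000};

        System.out.printf("median : %d ns%n", percentile(samples, 0.50));
        System.out.printf("99.9th : %d ns%n", percentile(samples, 0.999));
    }
}
```

Reporting the 99.9th percentile keeps the slowest one-in-a-thousand messages in view, rather than quietly discarding the outliers that matter most in trading systems.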
It’s also important to note that these tests are run on an off-the-shelf configuration of the hardware mentioned above. The last time we published benchmarks, many people assumed we had tweaked and tuned our environment endlessly to achieve such great numbers, but since then many customers and prospects have reproduced our results in their own environments in less than a day. If performance at high volumes is important to you, we invite you to do the same.
We believe our transparent and reproducible benchmarks have contributed to the accelerated acceptance of hardware-based messaging. Six months ago, many people were still asking “Will hardware messaging replace software messaging?” Today, that question is settled among serious messaging performance junkies — now they’re talking about strategies and timing of deployment and migration.