In the high-speed messaging world, performance is measured in messages per second. We’ve had questions via the website and in customer meetings asking us to clarify how we count msgs/sec at Solace. There are two key numbers on the website: 130,000 msgs/sec and 10 million msgs/sec. For full context on what they mean, read on.
On the surface, it seems pretty simple. If an application sends 100 messages to another application and it takes a second, that’s 100 msgs/sec. Virtually all messaging benchmarks work this way: some number of messages pass from one application to another through a messaging system. But not all real-world applications work this way.
When considering guaranteed messaging applications, one in and one out is the typical use case. An order comes in, gets routed and goes out. Sometimes a single message in could result in multiple messages going out, maybe to the end application and a compliance database, but that is not the majority scenario.
But the story changes for reliable messaging. The most common scenario for very high-rate reliable messaging is market data delivery, where most deployments are publish-subscribe setups whose fanout delivers many more outbound messages than inbound. How do you count those messages?
If you run a benchmark with 5 million msgs/sec inbound and 5 million msgs/sec outbound (same number of clients and servers), how many msgs/sec is that? Is it 5 million msgs/sec? Or 10?
What about a scenario where a publisher sends 1 million msgs/sec in, but thanks to fanout 9 million msgs/sec are delivered to subscribers? How many msgs/sec is that? Is it 1 million msgs/sec? Or 9? Maybe 10?
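As a rough sketch of the counting question, here is how the two scenarios above tally up if you simply sum inbound and outbound rates (one common convention; the scenario names are just labels for the examples above):

```python
# Hypothetical rates from the two scenarios above, in msgs/sec.
scenarios = {
    "symmetric": {"inbound": 5_000_000, "outbound": 5_000_000},
    "fanout":    {"inbound": 1_000_000, "outbound": 9_000_000},
}

for name, rates in scenarios.items():
    # Summing both directions reflects the total work the messaging
    # system does: every inbound message received plus every outbound
    # message delivered.
    total = rates["inbound"] + rates["outbound"]
    print(f"{name}: {total:,} msgs/sec (in + out)")
```

Under that convention both scenarios come out to 10 million msgs/sec, even though the traffic shapes are very different.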
If you followed all that, here’s how we do it at Solace. These practices represent the way most vendors quote performance in each area, and the way our customers have told us makes the most sense given what they’re looking to do with the different kinds of messaging we enable.
To tie this into 10 GigE Week, let’s do the math on how many reliable messages you should be able to send through a single 10 GigE network link. Assume 100-byte messages of user payload and that the maximum effective throughput on a 10 GigE network is about 9.2 gigabits per second.
Now in addition to application payload there is messaging/topic overhead, plus TCP/IP and Ethernet overhead per frame, and the total overhead depends on your topic structure and how many messages you pack into each frame. With reasonable assumptions about message overhead, and with messages packed into frames for maximum rate (a tradeoff that makes sense when you are less sensitive to latency), you can achieve about 9.5 million msgs/sec within the 9.2 Gbps of usable bandwidth in one direction.
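To sanity-check that figure, here is a back-of-the-envelope calculation. The 21 bytes of per-message overhead is an illustrative assumption chosen to be consistent with the quoted rate, not a published Solace number; it is plausible when many messages are packed into each frame so the TCP/IP and Ethernet headers are amortized across them.

```python
# Back-of-the-envelope check of the ~9.5M msgs/sec figure.
usable_bandwidth_bps = 9.2e9  # effective throughput of a 10 GigE link
payload_bytes = 100           # user payload per message
overhead_bytes = 21           # ASSUMED per-message overhead (messaging/topic
                              # plus amortized TCP/IP and Ethernet headers)

bytes_per_msg = payload_bytes + overhead_bytes
msgs_per_sec = usable_bandwidth_bps / 8 / bytes_per_msg
print(f"{msgs_per_sec / 1e6:.1f}M msgs/sec")  # prints "9.5M msgs/sec"
```

Shrink the per-message overhead or the payload and the achievable rate climbs; grow either and it falls, which is why the closing caveat about smaller messages matters.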
In practical applications, no one would push this close to the maximum bandwidth, so de-rate that to 9 million msgs/sec out while simultaneously supporting more than 1 million msgs/sec in, and that is why we quote “more than 10M msgs/sec” as our maximum rate. Throughput beyond 10M msgs/sec would require multiple NABs and multiple 10 GigE network links – or alternatively, smaller messages.
So there you have it, an exhaustive explanation of what we mean when we say “messages per second.”