After all, we’re talking about connecting over the public internet, where the best you can hope for is about 20-30 milliseconds of round trip time just for the network. So who cares if the infrastructure distributing data to the RIAs adds a handful of milliseconds? That’s a small percentage of total latency, right?
RIAs Not Just for the Public Internet
RIAs may have been created with the public internet in mind, but they’re so feature-rich that companies are now using them as a single application/interface for people connecting over many other kinds of networks as well.
Suddenly, that RIA infrastructure latency that seemed like noise in the context of an internet round trip becomes significant on a much faster network.
The graph at right shows the latency of typical software-based web streaming infrastructure in the context of network latency. Over the internet, web streaming latencies look small, but on a sub-millisecond internal network, that same streaming server introduces orders of magnitude more overhead than the network itself. Most web streaming infrastructures are implemented in software, many in Java, and introduce latencies of 10 or more milliseconds (when loaded with thousands of connections at 100 messages per second each).
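The shift described above is easy to see with some back-of-the-envelope arithmetic. The sketch below uses illustrative figures taken from the text (roughly 25 ms internet round trip, a 0.5 ms internal network, and ~10 ms of loaded software streaming latency); these are assumptions for illustration, not measurements:

```python
# Illustrative figures based on the text (assumptions, not measurements):
INTERNET_RTT_MS = 25.0          # typical public internet round trip
LAN_RTT_MS = 0.5                # sub-millisecond internal network
SOFTWARE_STREAMING_MS = 10.0    # loaded software web streaming server

def overhead_ratio(network_ms, infra_ms):
    """Infrastructure latency expressed as a multiple of the network latency."""
    return infra_ms / network_ms

# Over the internet the streaming server is a fraction of the network delay;
# on a fast LAN it dwarfs the network delay.
print(f"Internet: {overhead_ratio(INTERNET_RTT_MS, SOFTWARE_STREAMING_MS):.1f}x network latency")
print(f"LAN:      {overhead_ratio(LAN_RTT_MS, SOFTWARE_STREAMING_MS):.0f}x network latency")
```

With these numbers, the same 10 ms server goes from well under half the network delay on the internet to twenty times the network delay on an internal network.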
Hardware Brings True Low Latency to Web Streaming and RIAs
With thousands of connections each generating hundreds of messages per second, Solace’s web messaging infrastructure introduces about 50 microseconds of latency. That’s in the ballpark of some of the best low latency trading applications, and about 1,000 times faster than competing software web streaming platforms. The chart at right shows the same data as above, but adds Solace web streaming latencies beside the software web streaming numbers. Note that the Solace latency bars are exaggerated (I had to make them about 10 times actual size) in order to make them visible on the chart.
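Another way to look at these figures is to ask what fraction of total delivery latency the streaming infrastructure contributes on a fast internal network. The sketch below again uses the illustrative numbers from the text (a 0.5 ms LAN, ~10 ms for loaded software streaming, ~50 microseconds for the hardware approach); treat them as assumptions for the arithmetic, not benchmark results:

```python
# Illustrative figures based on the text (assumptions, not measurements):
LAN_RTT_MS = 0.5       # sub-millisecond internal network
SOFTWARE_MS = 10.0     # loaded software web streaming
HARDWARE_MS = 0.05     # ~50 microseconds, hardware web messaging

def infra_share(network_ms, infra_ms):
    """Fraction of total delivery latency contributed by the infrastructure."""
    return infra_ms / (network_ms + infra_ms)

# With software streaming, the infrastructure dominates end-to-end latency;
# with hardware, the network is once again the dominant term.
print(f"software: {infra_share(LAN_RTT_MS, SOFTWARE_MS):.0%} of total latency")
print(f"hardware: {infra_share(LAN_RTT_MS, HARDWARE_MS):.0%} of total latency")
```

On these assumptions, software streaming accounts for roughly 95% of total latency on a LAN, while the hardware approach drops the infrastructure back below 10% of the total.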
Stay Flexible My Friends
If you’re building trading applications and you can choose to save 5-10 milliseconds in a loaded web streaming environment, regardless of the underlying network, why wouldn’t you? On a high speed LAN, you’d be crazy to introduce that much latency, and over the internet, it’s still a very meaningful reduction. As a bonus, that same environment is more reliable, has a smaller datacenter footprint and lower cost, and is pre-integrated with inside-the-firewall messaging. RIAs are likely to find their way into many parts of the enterprise, inside and out, so why not build on a platform that gives you the lowest possible latency and leaves you ready for anything?