A recent article in Wall Street & Technology summarized the consensus of experts attending a conference on low latency on Wall Street. The main premise was this: “The weakest link in the low-latency value chain is older software or poorly written code, not market data feeds, lack of ultra-fast processors or older networks.”
I’ve experienced this myself in previous jobs, and the observation is largely accurate. The problem arises when people run with it and conclude they should fix the software side of the equation while turning a blind eye to a full end-to-end solution that spans every technology in the stack. It’s a dangerous leap of logic, exemplified by the article’s subtitle: “Panel at Accelerating Wall Street conference says improving older or poorly written code has greater potential to lower latency than new hardware.”
Don’t get me wrong – optimizing your applications is a critical piece of the puzzle, and speeding up your app might be the single biggest performance gain you can realize. I also agree that throwing faster Intel boxes at the problem and running the same poor codebase on newer hardware is a losing strategy. But as I said in the title, latency is everywhere. If you don’t continuously explore the game-changing potential of new technologies (such as 10GigE, cut-through switches, TCP offload engines, hardware-based messaging, in-memory databases, and caching technology), you will forever lag behind competitors who take a holistic approach to latency and invest the time and energy to incorporate important new technologies into their systems.
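To make the point concrete, here is a minimal sketch (in Java, with a hypothetical host, port, and buffer sizes) of the kind of transport-level tuning that sits entirely outside your application logic. No amount of refactoring the codebase will recover the time lost to a misconfigured network stack.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative sketch only: the host, port, and buffer sizes below are
// made up. The point is that these knobs live in the network layer,
// not in the application code the panel was talking about.
public class LowLatencySocket {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();

        // Disable Nagle's algorithm so small messages go out immediately
        // instead of being coalesced -- a classic source of added latency
        // that application-level tuning alone cannot remove.
        socket.setTcpNoDelay(true);

        // Keep kernel socket buffers modest so queuing delay stays bounded.
        socket.setSendBufferSize(64 * 1024);
        socket.setReceiveBufferSize(64 * 1024);

        socket.connect(new InetSocketAddress("feed.example.com", 9000));

        // ... exchange market data / orders here ...

        socket.close();
    }
}
```

The same end-to-end mindset applies to every layer: the fastest application code in the world still waits on the slowest hop beneath it.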
Today, you have to explore every piece of technology to achieve the lowest possible latency. There are simply too many exciting developments happening in networking, middleware, new disk technologies, and beyond.