Rik Turner of CBR recently wrote an interesting story about Intel trying to break into the low-latency financial services space by courting FPGA chip manufacturers, and the solution providers who leverage FPGAs, to partner with Intel as it launches its Nehalem technology, with its faster QuickPath interconnect (which replaces the front-side bus) and FPGA co-processing capabilities.
Of course, the idea of hosting FPGA co-processing is not new; AMD has been offering a version of this approach for over two years, and Intel is clearly playing catch-up here. It's also not surprising that Intel would be concerned about any key market moving processing to specialized hardware that can outperform software on Intel processors by 10, 20 or even 50 times, especially if one box of non-Intel special-purpose hardware can replace the work of 10 to 30 Intel boxes running software.
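To make that consolidation arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the box count, speedup factor, and cost figures are illustrative assumptions, not numbers from the article or from any vendor.

```python
# Back-of-the-envelope consolidation math (all figures are illustrative
# assumptions). If throughput scales roughly linearly across boxes, an
# appliance with an N-times speedup displaces about N software boxes.

software_boxes = 30             # general-purpose servers doing the job today
hardware_speedup = 30           # assumed per-box speedup of the FPGA appliance
cost_per_software_box = 10_000  # hypothetical fully loaded cost per server
cost_per_appliance = 100_000    # hypothetical cost of one appliance

appliances_needed = max(1, round(software_boxes / hardware_speedup))
savings = (software_boxes * cost_per_software_box
           - appliances_needed * cost_per_appliance)
print(f"{appliances_needed} appliance(s) replace {software_boxes} servers; "
      f"net hardware delta: ${savings:,}")
```

Under these made-up numbers, one appliance replaces thirty servers and saves $200,000 in hardware alone, which is exactly the kind of math that would worry Intel.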
Intel's partnering push is a classic case of "if you can't beat 'em, join 'em," which has been a successful strategy for Intel in the past. The question is: who exactly will they be joining?
For FPGA manufacturers, this is a mixed blessing. On the positive side, it could mean more FPGA chips sold. On the negative side, it levels the playing field and pushes the technology toward low-margin commodity status faster than custom platforms would. It will be harder for FPGA manufacturers to differentiate their products in a plug-and-play world where they are subservient to Intel's processors.
Companies that rely on FPGA technology to deliver differentiated products (like Solace) could benefit in some cases. While custom-designed boards are likely to always be faster, the lower cost of commodity parts could let hardware companies extend their lines to more tiers of offerings at more attractive hardware price points, while re-leveraging the same FPGA code. If software running on general-purpose processors delivers performance X, FPGA co-processing on an Intel or AMD motherboard might deliver 3-5X, and a custom hardware solution 10-50X. That could be intriguing to firms looking to take high-end products downmarket at lower price points. But the co-processor architecture will not displace the very high end. Co-processing implies a blended software/hardware solution, which introduces context switching between the CPU and the FPGA, and in non-CPU-intensive applications (feed handlers, messaging, etc.) that handoff overhead is the primary source of bottlenecks today.
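A rough latency-budget model makes the context-switching point concrete. The Python sketch below is a toy model under stated assumptions: the per-message compute time, per-handoff cost, and speedup factors are made up for illustration, not measurements.

```python
# Toy per-message latency model (all numbers are illustrative assumptions).
# For I/O-bound workloads like feed handlers, per-message compute is tiny,
# so bus crossings to and from a co-processor can dominate the budget.

def path_latency_us(compute_us: float, speedup: float,
                    handoffs: int, handoff_cost_us: float) -> float:
    """Per-message latency: accelerated compute plus handoff overhead."""
    return compute_us / speedup + handoffs * handoff_cost_us

COMPUTE_US = 2.0   # assumed per-message compute in pure software
HANDOFF_US = 1.5   # assumed cost of one CPU<->FPGA bus crossing

software  = path_latency_us(COMPUTE_US, speedup=1.0,  handoffs=0, handoff_cost_us=0.0)
coproc    = path_latency_us(COMPUTE_US, speedup=5.0,  handoffs=2, handoff_cost_us=HANDOFF_US)
inline_hw = path_latency_us(COMPUTE_US, speedup=50.0, handoffs=0, handoff_cost_us=0.0)

for name, lat in [("pure software", software),
                  ("co-processor", coproc),
                  ("inline hardware", inline_hw)]:
    print(f"{name:16s} {lat:5.2f} us/message")
```

With these assumptions the co-processor path (3.40 us) actually loses to pure software (2.00 us): when each message is cheap to compute, the bus crossings, not the compute, set the latency floor, which is why co-processing favors CPU-heavy workloads rather than feed handlers and messaging.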
Who will NOT be coming along for the ride is most software ISVs and enterprise customers. Designing hardware solutions is not for the faint of heart; it is a significant long-term commitment that is expensive and time consuming. There may be one FPGA designer available for every 1,000, or maybe even 10,000, software developers. That scarcity makes the skill risky to staff within IT departments, and even within most software vendors. It leaves only the most performance-sensitive, highly motivated suppliers to commit to the FPGA path. By definition, those are the suppliers that need all the performance juice they can get, not a halfway, lower-cost solution.
There are niche applications, hardcore CPU crunchers, that may jump on board. Simulation and quant engines are candidates. Specialized algorithmic trading applications, where the algorithms are sufficiently long-lived to be worth coding in FPGAs, may be another. But the target in financial services is not broad; it's a couple of niche cases within the already narrow low-latency market.
This awkward mismatch of motivations among Intel, the FPGA providers and the low-latency specialist vendors leaves Intel with ambitions that will be hard to convert into meaningful market share. It's telling that AMD has been at this for a while and has not made noteworthy progress.
Please comment if you have opinions on how this may play out for Intel (or AMD).