Apple does it, Facebook does it, Google does it. Why not Goldman, JP Morgan and UBS?
No, I’m not talking about deploying open source software or embracing cloud computing whole hog; I’m talking about choosing datacenter locations based on cost instead of convenience.
Yahoo!, Google, Microsoft and Dell have all built massive datacenters in Quincy, Washington, along the Columbia River to take advantage of cheap, clean hydro power from the river and local tax incentives. Apple, Google and Facebook followed the same motivations to rural North Carolina to bring down the costs of their online services. Meanwhile, if you ask the CIO of any of the big banks, the costs of building and operating datacenters in New York, New Jersey or London are among their top concerns. What’s stopping them from employing the same strategy?
Much has been written about how high-speed trading is all about proximity to exchanges, and that logic has led to an explosion in datacenter growth in New Jersey. But how many applications within a large investment bank really need to be across the street from the exchange? Fewer than 5%?
Imagine for a minute if front-office operations remained colocated with exchanges in New York or London in a dramatically scaled-down datacenter, while less latency-sensitive middle- and back-office applications were moved somewhere more affordable. It doesn’t matter whether that is Iceland, North Carolina or Timbuktu; the key is to follow the logical economic blueprint laid out by the internet companies.
Sure, a hedge fund or boutique trading firm would not have sufficient scale of operations to benefit from such an approach, but the global investment banks run huge datacenter operations measured in the tens of thousands of servers, very much like the internet companies. The people cost of operations would be much lower as well, since the cost of living in a rural outpost like Wales or North Carolina is a fraction of that in the New York or London areas.
It’s not even a huge architectural shift: front-, middle- and back-office applications are already decoupled by message-passing architectures. It’s really about parceling out the pieces of the private cloud differently and modeling the data flows to make sure no hotspot exceeds what the physics of the distance allows. The supporting technologies, such as high-throughput wide-area links, in-memory data grids, data synchronization, high-rate messaging, and WAN optimization, are available today to make this a reality.
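To make that "physics of the distance" modeling concrete, here is a minimal back-of-envelope sketch in Python. All the numbers and the example workload (an end-of-day risk feed, its message size, link speed, latency budget and distance) are illustrative assumptions of mine, not figures from any bank; the point is simply that checking whether a given flow can tolerate a remote site is a small calculation, not a leap of faith.

```python
# Back-of-envelope check: can a middle/back-office message flow tolerate the
# extra WAN distance to a remote, low-cost datacenter? All constants and the
# example workload below are illustrative assumptions.

SPEED_OF_LIGHT_IN_FIBER_KM_PER_MS = 200.0   # roughly 2/3 of c, a common rule of thumb
ROUTING_OVERHEAD_FACTOR = 1.5               # assumed fiber path length vs. straight-line distance


def one_way_latency_ms(distance_km: float) -> float:
    """Propagation delay over fiber for a given straight-line distance."""
    fiber_km = distance_km * ROUTING_OVERHEAD_FACTOR
    return fiber_km / SPEED_OF_LIGHT_IN_FIBER_KM_PER_MS


def transfer_time_ms(message_mb: float, link_gbps: float) -> float:
    """Serialization time for a message of message_mb megabytes on the link."""
    # MB -> megabits; a link of link_gbps Gbps moves link_gbps megabits per ms.
    return (message_mb * 8.0) / link_gbps


def flow_fits(distance_km: float, message_mb: float, link_gbps: float,
              budget_ms: float, round_trips: int = 1) -> bool:
    """True if the flow's latency budget survives the move to the remote site."""
    per_trip = 2 * one_way_latency_ms(distance_km) + transfer_time_ms(message_mb, link_gbps)
    return round_trips * per_trip <= budget_ms


# Hypothetical example: a 50 MB end-of-day risk batch sent from New Jersey to a
# site ~1000 km away over a 10 Gbps link, needing 3 round trips within 500 ms.
print(flow_fits(distance_km=1000, message_mb=50, link_gbps=10,
                budget_ms=500, round_trips=3))   # prints True under these assumptions
```

Run this kind of model across an application portfolio and the handful of genuinely latency-bound front-office flows stand out quickly; everything else is a candidate for the cheaper site.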