One of the biggest wealth management banks in the United States, with over a trillion dollars under management and an additional trillion under custody, prides itself on being a leader on the technology front. They pioneered the use of computers to process financial statements back in the 1950s, and today offer trading platforms popular with hedge funds, FX traders, and other financial institutions.

The 3 Business Goals of Their Investment in IT

As part of their continuous investment in tech, the bank regularly reviews their tech stack and looks for opportunities to excise or upgrade outdated solutions that prevent them from meeting three clear business objectives:

  • Agility to compete through rapid innovation: These days, success in the financial services space means not just outflanking longstanding competitors, but also fending off aggressive financial technology (FinTech) firms looking to grab market share.
  • Offer an unparalleled customer experience: High-net-worth individuals and institutions demand the ability to access information and transact in real time, from anywhere. That means the bank needs to make all of its data and services accessible via mobile devices, externally accessible APIs, and IoT devices, all with the security and stability that’s led their customers to trust them in the first place.
  • Ensure compliance with data policy regulations: Like all banks, they need to adhere not just to general-purpose data privacy regulations like GDPR, but to a litany of regulatory requirements that dictate how and where financial information is hosted and distributed. As such, it’s essential that they’re able to consistently apply frequently changing controls across their enterprise.

Evolving Infrastructure to Achieve Business Objectives

The bank is currently implementing a data-driven architecture that aggregates all of their information in one place so it can be used across platforms for a variety of services. They’ve just made a major investment in digitizing their business, which includes transforming batch processes into real-time ones backed by both synchronous and asynchronous APIs.

  • Enabling real-time information flow across their enterprise: Like that of most enterprises that have grown through M&A, the bank’s infrastructure included a myriad of technologies and environments. The best decisions come from correlating information across divisions, domains, and geographies, so they wanted to establish a “single source of truth” drawing from diverse systems that speak different languages and live in different places. Doing so requires documenting APIs and their dependencies, such as schemas and application actors, along with their versions and environments.
  • Securely exposing event-driven APIs: Partnerships are important revenue drivers for this bank, so they wanted to make it easy for external consumers to access data. But since not all event-driven APIs can be publicly exposed, they needed to make them available on a case-by-case basis and let partners test-drive an API before building it into their projects. As such, they needed a way to share and test the functionality of APIs without disrupting existing integrations.
  • Conserving, reusing, and securing real-time resources: Developers are quick to consume resources on servers and event brokers, but cleaning up after themselves isn’t their mission. The bank needed to clear out unneeded artifacts that were slowing down servers, making it hard to enforce security policies, and proving difficult to manage, especially in light of a new chargeback mechanism that required identifying each and every resource.

Their Technical Landscape

Based on those business and technical objectives, their architects focused on improving how the bank moved real-time information between applications within the firm and to partners. They were using both synchronous and asynchronous interaction patterns, but there was a stark contrast between the governance of the two.

Synchronous APIs Were Well Managed

Thanks to Layer 7 API management, their synchronous APIs had decent governance, and stakeholders understood who owned the data and what data was available. When a synchronous API changed, there was a well-defined process for making sure that dependent applications had the chance to modify code.

Asynchronous APIs…Not so Much

Their asynchronous APIs, on the other hand, suffered in several key areas:

  • Broken dependencies: In an event-driven architecture, a change to one event can impact many consumers, and the decoupled nature of asynchronous communication makes it hard to track down each of those consumers, so changes routinely broke downstream applications (see the sketch after this list).
  • Underutilized data: Without a centralized repository of all events available, they weren’t exploiting events to their fullest.
  • Duplication of effort: Since developers across the company didn’t know which events already existed, they’d frequently waste time re-solving problems that had already been solved elsewhere.
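
To make the first of those problems concrete, here’s a contrived sketch (hypothetical event fields and consumer names, plain Python, no real broker) of how a seemingly harmless change to one event can quietly break every consumer that still depends on the old shape, without the producing team ever knowing who those consumers are:

```python
import json

def publish_trade_v1() -> str:
    return json.dumps({"accountId": "A-123", "notional": 1_000_000})

def publish_trade_v2() -> str:
    # The producing team "cleans up" a field name without knowing who consumes the event.
    return json.dumps({"account_id": "A-123", "notional": 1_000_000})

def risk_consumer(raw: str) -> str:
    return f"risk check for account {json.loads(raw)['accountId']}"

def billing_consumer(raw: str) -> str:
    return f"charge account {json.loads(raw)['accountId']}"

# Version 1 works for everyone.
print(risk_consumer(publish_trade_v1()))
print(billing_consumer(publish_trade_v1()))

# Version 2 breaks every consumer that still expects the old field name,
# and nothing tells the producer which consumers those are.
for consume in (risk_consumer, billing_consumer):
    try:
        consume(publish_trade_v2())
    except KeyError as missing:
        print(f"{consume.__name__} broke: missing field {missing}")
```

Swap these two local functions for dozens of decoupled consumers hidden behind a broker and you have exactly the dependency-tracking problem the bank was describing.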

One of their data architects once said, “My job is playing data Whac-A-Mole – I don’t know where anything is, where it’s going, or how it’s used.”

A big part of the challenge was the mix of technologies they used for asynchronous communication, not all of which play nicely together: IBM MQ message brokers, Solace event brokers, and Kafka event streaming.

Cleaning Up their Kafka Problem with PubSub+ Event Portal

They knew they needed to improve their governance and implementation of asynchronous APIs, but how? Their existing API management software wouldn’t do the trick, because asynchronous APIs don’t follow the simple one-server, one-client model. What they needed was a tool that could track one-to-many dependencies and efficiently manage the complex characteristics of events, like topic strings, queues, and consumer groups.

That’s when they discovered our PubSub+ Event Portal, a first-of-its-kind product that lets architects and developers collaborate on the creation, management, governance, and reuse of event streams. It would:

  • Work with all of their event brokers, forming an architectural layer that encompassed them all.
  • Show, in visual form, how events flow between producers and consumers, making it easier to understand complex event interactions.
  • Let them scan and import configurations from Kafka clusters, Solace brokers, and soon IBM MQ, capturing details like topic strings and consumer groups in a central repository, enabling better event reuse.
  • Enable them to expose event-driven APIs to external partners in a governed and secure way.

Scanning their Kafka Environment

They thought Kafka was the least of their event-driven concerns since it was a newcomer to their scene, unlike IBM MQ and Solace PubSub+ event brokers. With that in mind, they set out to scan their Kafka environment first before moving on to the main event of Solace and IBM MQ.

The scan agent went to work, detecting the applications, consumer groups, and topics on their Kafka cluster. A few moments later, the architects gathered around to get a glimpse of their environment. They saw over six thousand topics, including 552 that weren’t being used by any applications, and many that contained circular dependencies!
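
For a sense of what such a discovery scan boils down to, here’s a rough sketch, not the Event Portal agent itself, that uses the open source kafka-python admin client to list every topic and consumer group and flag topics with no committed consumer offsets (one crude proxy for “unused”); the broker address and the cleanup heuristic are illustrative assumptions:

```python
# Rough sketch of a Kafka discovery scan (NOT the Event Portal scan agent):
# list every topic, see which topics consumer groups have committed offsets for,
# and flag the rest as cleanup candidates. Broker address is a placeholder.
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="kafka.example.internal:9092")

all_topics = set(admin.list_topics())                      # every topic in the cluster
consumed = set()

for group_id, _protocol in admin.list_consumer_groups():   # (group_id, protocol_type) pairs
    offsets = admin.list_consumer_group_offsets(group_id)  # {TopicPartition: OffsetAndMetadata}
    consumed.update(tp.topic for tp in offsets)

candidates = sorted(all_topics - consumed - {"__consumer_offsets"})
print(f"{len(all_topics)} topics, {len(candidates)} with no committed consumer offsets")
for topic in candidates:
    print("  -", topic)

admin.close()
```

The actual scan agent captures applications, consumer groups, and topics into a central repository, but even this simple cross-reference of topics against consumer group offsets hints at how quickly unused resources surface once the data is all in one place.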

Optimizing Kafka Events Across their Organization

That raw data, and the relationships it revealed, gave their architects what they needed to reach out across the enterprise and show subject matter experts what was going on, and going wrong, with their events. This spurred developers across the company to clean up unused topics and better define how applications interact. With that yeoman’s work done, they’re now discussing at a more strategic level how events should be structured, and improving their change management process.

Conclusion

This bank’s event-driven journey shows how PubSub+ Event Portal can help large enterprises efficiently build an event-driven system that spans their organization and is easy to manage, secure, and scale.

For more on how Solace helps financial services build a better digital experience with PubSub+, check out our financial services solutions page.

Jesse Menning

As an architect in Solace’s Office of the CTO, Jesse helps organizations of all kinds design integration systems that take advantage of event-driven architecture and microservices to deliver amazing performance, robustness, and scalability. Prior to his tenure with Solace, Jesse was an independent consultant who helped companies design application infrastructure and middleware systems around IBM products like MQ, WebSphere, DataPower Gateway, Application Connect Enterprise and Transformation Extender.

Jesse holds a BA from Hope College and a master’s from the University of Michigan, and has achieved certification with both Boomi and MuleSoft technologies. When he’s not designing the fastest, most robust, most scalable enterprise computing systems in the world, Jesse enjoys playing hockey, skiing, and swimming.