    Introduction

    If your company is like most, the applications that power your business run in diverse environments, including on-premises datacenters, field offices, manufacturing floors, and retail stores – to name a few.

    The lack of compatibility and connectivity between these environments forces your applications to interact via tightly-coupled synchronous request/reply interactions, individually customized batch-oriented ETL processes, or even bespoke integration. These interactions result in slow responses and stale data. Tight coupling hampers agility with your ever-changing business needs and real-time demands.

    For an enterprise to be real-time, events must be streamed between the applications and microservices that process them so insights can be gleaned and decisions can be made – fast. To become more responsive and to take advantage of new technologies (cloud, IoT, microservices, etc.), your enterprise architecture needs to support real-time, event-driven interactions.

    Every business process is basically a series of events. An “event” can broadly be described as a change notification. These changes can have a variety of forms, but all have the common structure of an action that has occurred on an object. Event-driven architecture is just a way of building enterprise IT systems that lets loosely coupled applications and microservices produce and consume these events.
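
    To make that concrete, an event can be modeled as a small, self-describing record. The sketch below is purely illustrative – the field names are ours, not any standard schema:

        # A minimal illustration (not any specific product's schema): an event pairs
        # the business object that changed (the noun) with the action that occurred
        # (the verb), plus enough context for consumers to act on it.
        from dataclasses import dataclass, field
        from datetime import datetime, timezone
        from typing import Any

        @dataclass
        class Event:
            noun: str                    # the business object, e.g. "order" or "price"
            verb: str                    # what happened to it, e.g. "created" or "changed"
            payload: dict[str, Any]      # the details of the change
            occurred_at: datetime = field(
                default_factory=lambda: datetime.now(timezone.utc))

        price_changed = Event(
            noun="price",
            verb="changed",
            payload={"sku": "RUM-RAISIN-473ML", "new_price": 12.90, "currency": "SGD"},
        )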

    Implementing event-driven architecture is a journey, and like all journeys it begins with a single step. To get started down this path, you need to have a good understanding of your data, but more importantly, you need to adopt an event-first mindset.

    Sumeet Puri, chief technology solutions officer, will explain a surefire strategy for how your enterprise can implement event-driven architecture, with the appropriate tools to overcome challenges you may run into along the way. With his field-proven six-step process, you will be on your way to winning key stakeholder engagement and transforming your entire organization.

    The Event-First Mindset

    Event-driven architecture is not new; GUIs and capital markets trading platforms have always been built this way. The reality is that it’s just becoming more mainstream now. This is primarily because service-oriented architecture (SOA); extract, transform and load (ETL); and batch-based approaches need to evolve to meet real-time needs.

    But before you begin this journey to transform your enterprise architecture into a more responsive, agile, and real-time architecture, you must come to think of everything that happens in your business as a digital event, and think of those digital events as first-class citizens in your IT infrastructure.

    With event-driven architecture, applications and microservices talk through events or event adapters. Events are routed among these applications in a publish/subscribe manner, according to subscriptions that express each application’s interest in particular topics.

    In other words, rather than a batch process or an Enterprise Service Bus (ESB) orchestrating a flow, business flows are dynamically choreographed based on each business logic component’s interest and capability. This makes it much easier to add new applications, as they can tap into the event stream without affecting any other system, do their thing, and add value.

    So how do you do that? What should your strategy be?

    At a high level, if your organization wants to become an event-driven enterprise, you need to do these three things:

    1. Event-enable your existing systems
    2. Modernize your platform to support streaming across environments
    3. Alert & inform internal and external stakeholders

    So the next question is: how do you put these steps into practice? The following six steps build on the above strategy and have been proven to make the journey to event-driven architecture faster, smoother, and less risky in many real-world implementations.

    Six Steps to Be Event-Driven

    Step 1: Culture, Awareness, and Intent

    Are you ready and aware, and do you have the intent to implement event-driven architecture?

    I’m sure if you’re reading this, then the answer is “yes!” However, most people in IT were trained in school to think procedurally. Whether you started with Fortran or C like me, or Java or Node, most IT training and experience has been around function calls, RPC calls, or web services to APIs – all synchronous. There are exceptions to the rule – capital markets front office systems have always been event-driven because they had to be real-time from the start! But more often than not, IT needs a little culture change.

    Microservices don’t need to be calling each other, creating a distributed monolith. As an architect, as long as you know which applications/microservices consume and produce which events, you can just choreograph using a publish/subscribe event broker – or a distributed network of brokers (also known as an event mesh) – rather than orchestrate with an ESB.

    It’s important to ponder a little, read up, educate yourself, and educate stakeholders about the benefits of EDA: agility, responsiveness, and better customer experiences. Build support, strategize, and ensure that the next project – the next transformation, the next microservice, the next API – will be done the event-driven way.

    You will need to think about which use cases can be candidates, look at them through the event-driven lens, and articulate a go-to approach to realize the benefits.

    Think real-time. Think event-driven.

    Step 2: Identify Candidates for Real-Time

    Not all systems need to change, or can be changed, to real-time. But the majority will benefit from an event-driven approach.

    You’ve probably already thought about a bunch of projects, APIs, or candidates that would benefit from being real-time.

    What are the real-time candidates in your enterprise? The troublesome order management system? The next generation payments platform that you are building? Would it make better sense to push master data (such as price updates or PLM recipe changes) to downstream applications, instead of polling for it? Are you driven by the possibility of a real-time airline loyalty points upgrade upon the scanning of a boarding pass?

    Or real-time airport ground operations optimization?

    There will be multiple candidates with different priorities and challenges, so look for projects which will:

    • Remove pain (i.e., brittleness, performance issues, difficulty adding new functionality)
    • Cause a medium to high business impact
    • Serve as a quick win that will deliver business value and breathe optimism into your team

    By starting with one small project, you can begin to find the quirks unique to your organization that will inevitably be present at a larger scale. Keep in mind that not every data source is a candidate for event-driven architecture. Find a system that has robust data generation capabilities and can be easily modified to submit messages. These messages will be harnessed as your events later in the journey.

    Make a shortlist of real-time candidates for your event-driven journey. It’s also important to bring in the stakeholders at this early stage because events are often business-driven, so not all your stakeholders will be technical. Consider which teams will be impacted by the transformation, and which specific people will need to be involved and buy in to make the project a success from all sides.

    “As businesses become more real-time and focus more on the customer experience, application architecture needs to be upgraded to meet these needs, and event-driven architecture is a great paradigm.”

    Sumeet Puri,
    Chief Technology Solutions Officer

    Step 3: Build Your Eventing Foundation

    And now, the tooling. Once the first project is identified, it’s time to think about architecture and the tools to support it.

    An event-driven architecture will depend on decomposing flows into microservices, and putting in place a runtime fabric that lets the microservices talk to each other in a publish/subscribe, one-to-many, distributed fashion.

    It’s also important to get design time right, with tooling that ensures events can be described and cataloged, and their relationships visualized.

    You’ll want to have an architecture that’s modular enough to meet all of your use cases, and flexible enough to receive data from everywhere you have data deployed (on-premises, private cloud, public cloud, etc.). This is where the eventing platform comes in, with the event mesh as its runtime.

    Having an eventing platform in place (even just its most basic pieces) allows your first project to leverage it as microservices and events start to come online, so they communicate with each other through events rather than via REST calls that would result in a distributed monolith.

    The following runtime and design-time pieces are essential:

    Event Broker

    An event broker is the fundamental runtime component for routing events in a publish/subscribe, low-latency, guaranteed-delivery manner. Applications and microservices are decoupled from each other and communicate via the event broker. Events are published to the broker on topics that follow a topic hierarchy (or taxonomy), and are correspondingly subscribed to by one or more applications, microservices, analytics engines, or data lakes.

    An ideal event broker uses open protocols and APIs to avoid vendor lock-in. With open standards, you have the flexibility of choosing the appropriate event broker provider over time. Think about the independence TCP/IP gave to customers choosing networking gear – open standards made the internet happen. By leveraging the open source community, it’s easier to create on-the-fly changes, and you’re not stuck having to consult closed documentation or sit in a support queue.

    Lastly, an ideal event broker is simple. It offers simplicity in deployment, event governance, and scalability, which gives you the freedom to focus on what matters: your events.
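
    To make this concrete, here is a minimal sketch of publishing and subscribing through such a broker over MQTT, one of the open protocols brokers like this commonly support. It assumes a broker listening on localhost:1883 and the paho-mqtt 1.x client API (pip install paho-mqtt); note that MQTT wildcards are ‘+’ and ‘#’, whereas the taxonomy examples later in this guide use the ‘*’ and ‘>’ style.

        # Minimal pub/sub sketch over the open MQTT protocol, assuming a broker on
        # localhost:1883 and the paho-mqtt 1.x client API.
        import paho.mqtt.client as mqtt

        def on_message(client, userdata, msg):
            # Called for every event whose topic matches our subscription.
            print(f"received {msg.topic}: {msg.payload.decode()}")

        client = mqtt.Client()        # paho-mqtt 2.x also needs a CallbackAPIVersion argument
        client.on_message = on_message
        client.connect("localhost", 1883)

        # Subscribe to the whole order stream; '#' is MQTT's multi-level wildcard.
        client.subscribe("order/#", qos=1)

        # Publish a new-order event onto a hierarchical topic.
        client.publish("order/init/1.1/icecream/udders/rumraisin/sg/lazada",
                       payload='{"orderId": "1234"}', qos=1)

        client.loop_forever()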

    Event Mesh

    While you might start with a single event broker in a single location, modern applications are often distributed. Whether it’s on-premises, in multiple clouds, or in factories, branch offices, and retail stores – applications, microservices, and insight capabilities are distributed.

    An event originating in a retail store may have to go to the local store’s systems, the centralized ERP, the cloud-based data lake, and to an external partner. As such, event distribution should be transparent to producers and consumers – they should be connecting to their local event broker, just like you or I would connect to our home WiFi router to access all websites, no matter where they are hosted.

    Event-driven architecture is a constantly evolving system, so if you identify event sources on-premises today, you may find that those events live in the cloud tomorrow.

    An event mesh is a network of event brokers that dynamically routes events between applications no matter where they are deployed – on-premises, in any cloud, or even at IoT edge locations. Just like the internet is made possible by routers converging routing tables, the event mesh converges topic subscriptions with various optimizations for distributed event streaming.

    There are multiple ways an event mesh supports your application architecture:

    • Connects and choreographs microservices using publish/subscribe, topic filtering, and guaranteed delivery over a distributed network
    • Pushes events from on-premises to cloud services and applications
    • Enables digital transformations for Internet of Things (IoT)
    • Enables Data as a Service (DaaS) across lines of business for insights, analytics, ML, and more
    • Gives you a much more reactive and responsive way to aggregate events

    Of course, how you deploy your event mesh will help determine the kind of event broker you’re looking for.

    Event Portal

    An event portal – just like an API portal – is your design and runtime view into your event mesh. An event portal gives architects an easy GUI-based tool to define and design events in a governed manner, and offers them for use by publishers and subscribers. Once you have defined events, you can design microservices or applications in the event portal and choose which events they will consume and produce by browsing and searching the event catalog.

    As your events get defined, they are enlisted in an event catalog for discovery. An event catalog gives you visibility into your events. Although it’s more of a documentation step, this will help to visualize and describe the events that you’re able to process. When building out the system for more event sources and different ways to consume them, the catalog is a great reference to know what’s already been built so you can avoid duplication of effort.

    Event Taxonomy

    Topic routing is the lifeblood of an event-driven architecture. Topics are metadata of events: tags of the form a/b/c – just like an HTTP URL or a file path – that describe the event. The event broker understands topics and can route events based on who subscribed to them, including wildcard subscriptions.

    As you start your event-driven architecture journey, it’s important to pay attention to topic taxonomy – setting up a topic naming convention early on and governing it. A solid taxonomy is probably the most important design investment you will make, so it’s best not to take shortcuts here. Having a good taxonomy will significantly help with event routing, and the conventions will be obvious to application developers. For more on topic hierarchy best practices, visit: bit.ly/topic-hierarchy
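
    If it helps, the convention can even be captured in a small helper so every publisher builds topics the same way. The levels below simply mirror the Udders example that follows; they are illustrative, not prescriptive:

        # Illustrative only: encode the agreed topic convention in one place so every
        # publisher builds topics identically. The levels mirror the example below:
        # order/<verb>/<version>/<category>/<brand>/<product>/<region>/<channel>
        def order_topic(verb: str, version: str, category: str, brand: str,
                        product: str, region: str, channel: str) -> str:
            return "/".join(["order", verb, version, category, brand, product, region, channel])

        topic = order_topic("init", "1.1", "icecream", "udders", "rumraisin", "sg", "lazada")
        # -> "order/init/1.1/icecream/udders/rumraisin/sg/lazada"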


    Udders Ice Cream Example: Event Taxonomy

    Let’s look at an order example for Udders Ice Cream. An order for the flavor rum & raisin has come in from the Lazada ecommerce website in Singapore:

    Event              Topic
    New Order          order/init/1.1/icecream/udders/rumraisin/sg/lazada
    Validated Order    order/valid/1.1/icecream/udders/rumraisin/sg/lazada
    Order Shipment     order/shipped/1.1/icecream/udders/rumraisin/sg/lazada

    Because events from the publisher (microservice, application, legacy-to-event adapter) carry topics as metadata, consumers can use them to subscribe to events.

    Event              Publisher                        Subscriber
    New Order          Lazada ecommerce store site      Order validation microservice
    Validated Order    Order validation microservice    Order processor for ice creams in Singapore
    Order Shipment     Any upstream                     Data lake or AI/ML ingestor

    Topic subscriptions would be:

    • Consume and validate all orders of version 1.1 and publish the order valid message: order/init/1.1/>
    • Consume and process all valid orders for ice cream originating in Singapore: order/valid/*/icecream/*/*/sg/*
    • All orders no matter what stage, category, location: order/>
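
    To make the wildcard semantics concrete, here is a toy matcher for the subscriptions above. It treats ‘*’ as matching exactly one whole level and ‘>’ as matching one or more trailing levels – a simplification of what a real event broker does natively (and far more efficiently):

        # Toy matcher for illustration only; real brokers implement this natively and
        # their '*' can also match partial level names. Here '*' matches exactly one
        # whole level and '>' matches one or more remaining levels.
        def matches(subscription: str, topic: str) -> bool:
            sub_levels = subscription.split("/")
            topic_levels = topic.split("/")
            for i, sub in enumerate(sub_levels):
                if sub == ">":
                    return len(topic_levels) > i     # at least one more level must exist
                if i >= len(topic_levels):
                    return False
                if sub != "*" and sub != topic_levels[i]:
                    return False
            return len(sub_levels) == len(topic_levels)

        assert matches("order/init/1.1/>", "order/init/1.1/icecream/udders/rumraisin/sg/lazada")
        assert matches("order/valid/*/icecream/*/*/sg/*", "order/valid/1.1/icecream/udders/rumraisin/sg/lazada")
        assert matches("order/>", "order/shipped/1.1/icecream/udders/rumraisin/sg/lazada")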

    Organizational Alignment

    Event-driven architecture starts small and grows, but over time, the organization must evolve to event-first thinking. This requires some thought leadership to get buy-in. The ESB team needs to start thinking about choreography rather than orchestration. The API teams need to start thinking about event-driven APIs rather than just request/reply.

    Step 4: Pick a Pilot Application

    The next step for implementing your event-driven architecture is to determine which event flow to get started with first.

    Essentially, an event flow is a business process that’s translated into a technical process. The event flow is the way an event is generated, sent through your broker, and eventually consumed.

    Getting started with a pilot project is the best way to learn through experimentation before you scale.

    With the pilot, as you identify events and the event flow, the event catalog will also automatically start taking shape. Whether you maintain the catalog in a simple spreadsheet or in an event portal, the initial event catalog also serves as a starting point of reusable events – which applications in the future will be able to consume off the event mesh.

    You want to choose a flow that makes the most sense for your current state modernization or pain reduction. An inflight project or an upcoming transformation makes an ideal candidate, whether it’s for innovation or technical debt reduction via performance, robustness, or cloud adoption. The goal is for it to become a reference for future implementations. Pick a flow that can be your quick win.

    Consider this product price change flow as an example:

    The price change itself is considered the event – ‘price’ being the noun, and ‘change’ being the verb.

    In an existing architecture that is not event-enabled, the price change event is:

    • Not pushed – a Request/Reply API is required, or probably even a batch process
    • Not real-time – downstream applications don’t know about the price change until they ask
    • Not cost effective to scale
    • Bursty – consuming applications continuously poll the API, which can cause bursts of load

    By event-enabling the price change and implementing event-driven architecture with an event mesh, downstream apps subscribe to the price change events and receive event notifications as they happen in real-time. The event mesh filters and only pushes the events that the downstream applications have subscribed to (in accordance with the topic taxonomy).

    Events are queued and throttled to reduce load, making it easier to scale. They are also delivered in a lossless, guaranteed manner.
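
    Conceptually, the downstream side then looks something like the sketch below: events accumulate in a queue bound to the application’s subscription and are drained at the consumer’s own pace. The broker-side persistence and acknowledgements are abstracted away here, and the SKUs are made up:

        # Conceptual sketch only: the broker persists events on a queue bound to the
        # downstream app's subscription; the consumer drains them at its own rate and
        # acknowledges each one, so publisher bursts never overwhelm it.
        import queue
        import time

        price_change_queue: "queue.Queue[dict]" = queue.Queue()

        # A burst of price changes (in reality these arrive via the event mesh).
        for sku in ("RUM-RAISIN-473ML", "MAO-SHAN-WANG-473ML", "LYCHEE-MARTINI-473ML"):
            price_change_queue.put({"sku": sku, "new_price": 12.90})

        def consume_throttled(max_per_second: float = 2.0) -> None:
            while not price_change_queue.empty():
                event = price_change_queue.get()
                print("applying price change:", event)   # downstream business logic
                price_change_queue.task_done()           # stands in for an acknowledgement
                time.sleep(1.0 / max_per_second)         # consume at our own pace

        consume_throttled()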

    Publish Event to the Event Mesh

    Step 5: Decompose the Event Flow Into Asynchronous, Event-Driven Microservices

    Once pilot flows have been identified and an initial event catalog is taking shape, the next step is to begin event-driven design by decomposing the business flow into event-driven microservices and identifying events in the process.

    Decomposing an event flow into microservices reduces the total effort required to ingest event sources, as each microservice will handle a single aspect of the total event flow. New business logic can be built using microservices, while existing applications – SAP, mainframe, custom apps – can be event-enabled with adapters.

    Microservices Orchestration vs. Choreography

    There are two ways to manage your microservices in your event flows: orchestration and choreography. With orchestration, your microservices work in a call-and-response (request/reply) fashion and are tightly coupled, i.e., highly dependent on and wired into each other. With event-routing choreography, microservices are reactive (responding to events as they happen) and loosely coupled – which means that if one application fails, business services not dependent on it can keep working while the issue is resolved.
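
    To make the distinction concrete, here is a tiny in-memory sketch of choreography – no real broker, and the service names are illustrative. Each microservice registers interest in a topic and reacts when a matching event is published; nothing calls the services in sequence:

        # Illustrative, in-memory choreography: the "broker" below is just a dict of
        # subscriptions, and the flow emerges from what each service subscribes to.
        from collections import defaultdict

        subscribers = defaultdict(list)          # topic prefix -> handlers

        def subscribe(prefix, handler):
            subscribers[prefix].append(handler)

        def publish(topic, event):
            # Deliver the event to every handler whose subscription matches the topic.
            for prefix, handlers in list(subscribers.items()):
                if topic.startswith(prefix):
                    for handler in handlers:
                        handler(topic, event)

        def order_validator(topic, event):
            # Reacts to new orders and publishes the next event in the flow.
            publish("order/valid", {**event, "valid": True})

        def credit_check(topic, event):
            # Reacts to validated orders; nothing invoked it directly.
            print("running credit check for order", event["orderId"])

        subscribe("order/init", order_validator)
        subscribe("order/valid", credit_check)

        publish("order/init", {"orderId": "1234"})   # the flow choreographs itself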

    In this step, you’ll need to identify which steps have to occur synchronously and which ones can be asynchronous. Synchronous steps are the ones that need to happen while the application or API invoking the flow is waiting (blocking) for a response.

    Asynchronous events can happen after the fact and often in parallel – such as logging, audit, or writing to a data lake. In other words, applications that are fine with being “eventually consistent” can be dealt with asynchronously. The events float around, and microservices choreography determines how they get processed. Because of these key distinctions, you should keep the synchronous parts of the flow separate from the asynchronous parts.

    This modeling of event-driven processes can be done manually at first or with an event portal, where you can visualize and choreograph the microservices.


    Udders Ice Cream Example: Eventually Consistent

    One of the problems with a RESTful synchronous API-based system is that each step of the business flow – each microservice – is inline. When the Udders Ice Cream POS system submits an order, should it wait for all the order processing steps to be completed, or should only a few mandatory steps be inline?

    If you think about it, all the “insights” processes do not need to happen before the point of sale systems get an order confirmation – they are just going to slow down the overall response time. In reality, only the order ingestion and validation need to be inline – everything else can be done asynchronously.

    The event mesh provides guaranteed delivery, so the non-inline microservices can get the data a little after the fact, but in a throttled, guaranteed manner. The event stream flows into the systems in a parallel manner, thereby further improving performance and latency.

    Parasitic Listeners

    As events are flowing and are cataloged, they can also be consumed by analytics, audits, compliance, and data lakes. Authorized applications will receive a wildcard-based, filtered stream of events in a distributed manner – with no change to the existing microservice, and no ESB or ETL changes.
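
    In practice, adding such a listener is just one more wildcard subscription. The sketch below reuses the MQTT-style setup from the earlier broker example (again assuming a broker on localhost:1883 and the paho-mqtt 1.x API); nothing about the existing publishers changes:

        # A "parasitic" audit/data-lake listener: one wildcard subscription taps the
        # whole order stream with no change to the microservices already publishing.
        import paho.mqtt.client as mqtt

        def archive(client, userdata, msg):
            print("archiving", msg.topic, msg.payload.decode())   # e.g. write to the data lake

        audit = mqtt.Client()                     # paho-mqtt 1.x style constructor
        audit.on_message = archive
        audit.connect("localhost", 1883)
        audit.subscribe("order/#", qos=1)         # '#' is MQTT's equivalent of '>'
        audit.loop_forever()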

    Event Sources and Sinks – Dealing with Legacy Applications

    There are a few kinds of event sources and sinks you should be aware of. Some sources and sinks are already event-driven, and there’s little processing you’ll need to do to consume their events. Let’s call the others “API-ready.” API-ready means they have an API that can generate messages that can be ingested by your event broker, usually through another microservice or application. Although there’s some development effort required, you can get these sources to work with your event broker fairly easily. Then there are legacy applications that aren’t primed in any way to send or receive events, making it difficult to incorporate them into event-driven applications and processes.

    Here’s a breakdown of the three kinds of event sources and sinks:

    Event-Driven
    • API status: Natively event-driven
    • Push/pull abilities: Can push events, can consume events
    • Adapters for streaming: None needed

    API-Ready
    • API status: Have a REST/SOAP API, with schemas
    • Push/pull abilities: Request/reply, no notion of topics or push
    • Adapters for streaming: Need microservices or a streaming pipeline to transform the request/reply source into a streaming destination and publish an intelligent topic

    Legacy
    • API status: No standards-based API, though there might be various ways to connect
    • Push/pull abilities: Data needs to be pulled or polled, but may also be triggered
    • Adapters for streaming: Need an integration adapter (JDBC, MQ, JCA, ASAPIO, Striim, legacy ESBs) installed close to the source or destination of the event to transform and pub/sub events

    Event-Driven Applications

    Event-driven applications can publish and subscribe to events, and are topic and taxonomy aware. They are real-time, responsive, and leverage the event mesh for event routing, eventual consistency, and deferred execution.

    API-Ready Applications

    While API-ready applications may not be able to publish events to the relevant topics themselves, they expose data via an API from which events can be derived. In this case, an adapter microservice can be used to subscribe to the “raw” API (an event mesh can convert REST APIs to topics), inspect the payload, and publish the message to appropriate topics derived from the contents of the payload. This approach works better than content-based routing, as content-based routing requires governing payloads in a very strict manner, down to semantics, which is not always practical.
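
    Here is a hedged sketch of such an adapter microservice: it takes the raw payload obtained from the application’s API, derives a topic from the payload’s contents, and republishes it onto the event mesh. The field names and the publish callback are illustrative, not a product API:

        # Illustrative adapter for an "API-ready" application: derive the topic from
        # the payload's contents and republish onto the event mesh. publish() stands
        # in for whatever client the chosen broker provides.
        def derive_topic(payload: dict) -> str:
            return "/".join([
                "order",
                payload["status"],        # e.g. "init", "valid", "shipped"
                payload["version"],       # e.g. "1.1"
                payload["category"],      # e.g. "icecream"
                payload["brand"],         # e.g. "udders"
                payload["product"],       # e.g. "rumraisin"
                payload["country"],       # e.g. "sg"
                payload["channel"],       # e.g. "lazada"
            ])

        def adapt(payload: dict, publish) -> None:
            publish(derive_topic(payload), payload)

        adapt({"status": "init", "version": "1.1", "category": "icecream",
               "brand": "udders", "product": "rumraisin", "country": "sg",
               "channel": "lazada", "orderId": "1234"},
              publish=lambda topic, body: print("publishing to", topic))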

    Legacy Applications

    You’ll probably have quite a few legacy systems that are not even API-enabled. As most business processes consume or contribute data to these legacy systems, you’ll need to determine how to integrate them with your event mesh. Are you going to poll them? Invoke them via adapters when there is a request for data or updates? Or off-ramp events and cache them externally? In any case, you’ll need to figure out how to event-enable legacy systems that don’t even have an API to talk to.

    The key to accommodating legacy systems is identifying them early and getting to work on them as quickly as possible.

    For more details and examples, visit: bit.ly/sources-and-sinks.

    Go Cloud-Native Where You Can

    The accessibility and scalability of the cloud mirror those of an event mesh, which makes them perfect candidates to deploy together. That said, you can’t expect to have all of your events generated or processed in the cloud. To ignore on-premises systems would be a glaring oversight. With an event mesh implemented, it doesn’t matter where your data is, so you can more easily take a hybrid approach.

    For example, your data lake might be in Azure while your AI and ML capabilities might be in GCP. You might have manufacturing and logistics in China or Korea and a market in India or the USA. With 5G about to unleash the next wave of global connectivity, your event-driven backbone needs to work hand-in-hand with two patterns, 180 degrees apart: hybrid/multi-cloud and an intelligent edge.

    By using standardized protocols, taxonomy, and an event mesh, you are free to run business logic wherever you want.


    Udders Ice Cream Example: Order Management

    An order management process for Udders Ice Cream may see these microservices in an event flow following a point of sale:

    • Order validator
    • Credit check
    • Inventory check
    • Payment processor
    • Order processor

    An order for rum raisin ice cream is initiated when a point of sale system makes an API call to submit it:
    Order INIT Event
    The Order Validator microservice consumes the new order event and produces either a valid order or an invalid order event. The valid order event is then consumed by the Credit Check microservice, which similarly produces the next events.

    Subscribed Events: all order initiations, irrespective of version and product
    Published Events: order validation status with other metadata
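
    As an illustrative sketch (the validation rule and the publish callback are placeholders, not real product APIs), the Order Validator might look like this. Note how it derives the outgoing topic from the incoming one, so downstream subscriptions keep working:

        # Illustrative Order Validator: subscribes to order/init/> and republishes
        # each order as either valid or invalid, reusing the incoming topic levels.
        def validate(order: dict) -> bool:
            return order.get("quantity", 0) > 0 and "sku" in order   # stand-in rule

        def on_order_init(topic: str, order: dict, publish) -> None:
            status = "valid" if validate(order) else "invalid"
            # order/init/1.1/... becomes order/valid/1.1/... (or order/invalid/1.1/...)
            _, _, *rest = topic.split("/")
            publish("/".join(["order", status, *rest]), order)

        on_order_init("order/init/1.1/icecream/udders/rumraisin/sg/lazada",
                      {"sku": "RUM-RAISIN-473ML", "quantity": 2},
                      publish=lambda topic, body: print("publishing", topic))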


    Udders Ice Cream Example: Event Routing

    Issue: Order validation rules differ for the US versus Singapore, and the payload has changed from JSON to protobufs.
    Solution: A new microservice (reusing the existing validation logic) is created to serve the business logic of validating orders from the US with the new payload, and topic subscriptions are rewired. The new microservice starts listening on the relevant topic, consuming order initiation events from the US with version 1.2. Nothing else changes!

    Issue: Want to monitor all orders coming from the 711 chain.
    Solution: Implement a microservice with the relevant Insights business logic, look up the event in the event catalog, and start subscribing to events using wildcards!

    Step 6: Get a Quick Win

    Once the design is done and the first application gets delivered as an event-native application, the event catalog also starts to get populated.

    Getting a quick win with the event catalog as a main deliverable is just as important as the business logic itself, and it drives innovation and reuse.

    Hopefully, with the ability to do more things in real-time, you can demonstrate agility and responsiveness of applications, which in turn leads to a better customer experience. With stakeholder engagement, you can help transform the whole organization!

    Bonus Step: Rinse & Repeat

    Now that you have your template for becoming event-driven and hopefully your first quick win in place, you can rinse and repeat. Go ahead and choose your next event flow!

    From here, you can walk through the steps again for the next few projects. As you go, the event catalog in your event portal will be populated with more and more events. Existing events will start to get reused by new consumers and producers. A richer catalog will open up more opportunities for real-time processing and insights.

    Culture change is a constant, but it gets easier as you showcase success. The integration/middleware team might own the event catalog and event mesh, while the application and LOB teams use and contribute to them. LOB-specific event catalogs and localized event brokers are also desirable patterns, depending on how federated or centralized the organization’s technology teams and processes are.

    As you scale, more applications produce and consume events, often starting with reusing existing events. That is how the event-driven journey snowballs as it scales!

    Conclusion

    Constantly changing, real-time business needs demand one thing: digital transformation. The world is not slowing down, so your best bet is to identify ways you can – cost effectively and efficiently – upgrade your enterprise architecture to keep up with the times.

    But it’s not an easy task.

    Most people in IT were trained in school to think procedurally. Whether you started with Fortran or C or Java or Node, most IT training and experience has been around function calls, RPC calls, Web Services, APIs, or ESB orchestrating flows – all synchronous. Naturally, dealing with the world of asynchronous messaging requires a little culture change, a little thought process switch.

    Microservices don’t need to be calling each other, creating a distributed monolith. Events can float around on an event mesh to be consumed and acted upon by your microservices.

    Architects and developers need a platform and a set of tools to help them work together to achieve the real-time, event-driven goals for their organizations.

    With these six steps, you can make the leap with the correct strategy and tools in hand that will support your real-time, event-driven journey. Solace PubSub+ Platform helps enterprises design, deploy, and manage event-driven architectures and can be deployed in every cloud and platform as a service. Solace is the only stable and performant solution that fits the unique needs of architects looking to implement event-driven architecture within their organizations.

    Sumeet Puri
    Chief Technology Solutions Officer

    Sumeet Puri is Solace's chief technology solutions officer. His expertise includes architecting large-scale enterprise systems in various domains, including trading platforms, core banking, telecommunications, supply chain, machine-to-machine track and trace, and more recently in spaces related to big data, mobility, IoT, and analytics.

    Sumeet's experience encompasses evangelizing, strategizing, architecting, program managing, designing, and developing complex IT systems, from portals to middleware to real-time CEP, BPM, and analytics, with paradigms such as low latency, event-driven architecture, and SOA, using various technology stacks. The breadth and depth of Sumeet's experience makes him sought after as a speaker, and he authored the popular Architect's Guide to Event-Driven Architecture.

    Prior to his tenure with Solace, Sumeet held systems engineering roles at TIBCO and BEA, and he holds a bachelor's of engineering with honors in electronics and communications from Punjab Engineering College.