Setting the Stage
In previous posts I talked about the principles of event-driven integration, which exports the complexities of integration to the “edge” while keeping the integration core (an event mesh) simple and focused on data routing and distribution. The intent of this architectural model is to speed up data in motion by reducing or relocating the number of integration points in the data path, and to provide a reliable, resilient infrastructure that not only scales well but also keeps traffic flowing at a consistent level as things scale, ensuring seamless business operations.
The modern integration domain is dominated by the iPaaS. In this article I will focus on how an event mesh can enhance certain iPaaS capabilities. I won’t play favorites with any iPaaS vendor; rather, I will look at some of the general capabilities that define an iPaaS and explore how they can be improved by leveraging an event mesh core. So, what is a common set of capabilities that an iPaaS typically offers?
1. Connectivity to data sources
2. Connectivity to multiple protocols, and protocol bridging
3. Mapping and transformation
4. Destination-based routing
5. Content-based routing
6. Aggregating content from multiple sources
7. Dispatching/distributing content to multiple destinations
8. Lookups and content enrichment
9. Orchestration and choreography
This is by no means an exhaustive list of capabilities, and the degree to which these capabilities are implemented varies by iPaaS vendor, as some will focus more on certain capabilities than others.
An event mesh offers many capabilities that are in some cases similar to what an iPaaS offers. Given this, one can easily fall into the trap of thinking that an event mesh can replace the iPaaS in certain scenarios, but that would be an inaccurate assessment. Because an event mesh is focused on data and event distribution, it simplifies or enhances certain patterns of integration rather than replacing the iPaaS. And because an event mesh implements some of these capabilities natively, integration developers don’t have to spend time writing custom integration code for them. In many ways an event mesh is a developer productivity tool, allowing developers to focus their time on writing business logic rather than infrastructure logic.
The key value proposition in this context is determining which use cases benefit most from the combination of an iPaaS and an event mesh, i.e., what the iPaaS needs to do vs. what it can delegate to the event mesh.
In this article I will look at a few “traditional” patterns of integration that an event mesh can help with.
Integration Design Optimization
In the early days of telephony, making a long distance call required an operator to manually connect the various lines to establish a circuit between the two communicating parties. The operator “routed” the call from the caller to the callee. Fast forward a few decades, and telephony has become fully digitized, first via PBXs and ISDN (Integrated Services Digital Network), which were eventually superseded by IP-based telephony, where applications like WhatsApp and FaceTime are the preferred means of voice and video communication.
So, what does this have to do with integration? It illustrates how new technologies are not only able to make interactions much easier, but that they are also able to shift complexity away from the interaction itself, and into the infrastructure.
In the integration realm, developers build APIs that simplify access to data, or access to an event stream. For example, a developer would much rather start developing an application by consuming events from an already aggregated event stream than write the code to do the aggregation, consumption, and processing. And this is what this article is about: simplifying the integration design process and enabling faster delivery of integration projects by leveraging the innate capabilities of an event mesh.
In the introduction, I outlined a few integration capabilities that an iPaaS typically has. Of those capabilities, connectivity to multiple protocols (2), destination-based routing (4), and content aggregation and distribution (6, 7) can be handled by the event mesh. These capabilities, which I will explore in more detail in this article, are more “single purpose” in nature in that they fulfill a specific task, i.e., route events. There are also broader-scoped capabilities used to implement broader patterns; those will be covered in a follow-up article.
Event Driven Integration Patterns
A capability can be implemented by event mesh-native functions, by building a solution out of iPaaS components, or by a combination of the two, depending on how complex the pattern is. In the following sections I will look at how an event mesh can be used to deliver on these capabilities. Throughout the remainder of this article I will often use the more generic term “channel” as a stand-in for the more specific “topics” and “queues”.
Multiplexing/Demultiplexing
One of the most utilitarian aspects of data movement is data aggregation and dispatch/distribution, or, borrowing a term from telecommunications, multiplexing and de-multiplexing. In telecommunications, multiplexing aggregates multiple signals from different input channels into one composite signal, whereas de-multiplexing does the reverse: it decomposes a composite signal into its constituent signals and pushes them out over respective outbound channels. This was typically done to save on bandwidth costs by using a shared medium/channel, and to reduce the many point-to-point connectivity challenges.
In the event streaming realm, one may want to “mash up” (multiplex) streams into a single stream that can then be used for real-time analysis. For instance, if you aggregate all purchases from different locations into a single stream of payment events, you can look at patterns of payments in real time and obtain average payments per region, which location had the most sales for the day, what the diversity of purchases was, etc. So event aggregation is a very useful tool for gaining operational insight in real time.
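To make that payoff concrete, here is a toy Python consumer of such an aggregated payment stream that maintains running per-region averages as events arrive. The event shape and region names are assumptions for illustration only, not any particular client API.

```python
# Toy consumer of a multiplexed payment stream: maintain running
# per-region payment averages in real time. The event shape ("region",
# "amount") is assumed purely for illustration.
from collections import defaultdict

totals = defaultdict(float)
counts = defaultdict(int)

def on_payment(event: dict) -> None:
    region = event["region"]
    totals[region] += event["amount"]
    counts[region] += 1
    print(f"{region}: average payment {totals[region] / counts[region]:.2f}")

# Simulate a few events arriving on the aggregated stream.
for event in [{"region": "Midwest", "amount": 42.50},
              {"region": "Southeast", "amount": 15.00},
              {"region": "Midwest", "amount": 7.25}]:
    on_payment(event)
```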
In contrast, with event distribution (de-multiplexing) you could have a single event stream whose events need to be delivered to different destinations. For instance, following the purchase example above, you will likely want to distribute the purchase orders to the respective geographical warehouses for processing and fulfillment.
Points 6 and 7 in the iPaaS capabilities list presented in the introduction relate to these aspects. You can certainly implement multiplexing and demultiplexing with an iPaaS, but it may not be the best use of a developer’s time, and it may not result in the most optimal solution. This diagram illustrates how this might be done with an iPaaS:
In both the multiplexing and demultiplexing cases you would have to create integration projects that require a connector to each destination (both inbound and outbound). In addition, for the demultiplexing scenario, you will need to add some type of processing logic to determine where to route the events on the incoming stream. In the multiplexing case illustrated in the diagram you are essentially coupling three components, and in the demultiplexing case you are potentially coupling five components.
With an iPaaS based implementation you are essentially creating a higher degree of coupling which leads to:
- more difficulty in scaling
- challenges with modifications: if, for instance, you want to add source or distribution channels, you must also update the logic in the integration component.
- additional development time, increased maintenance costs, and more resources required for deployment.
Given the extra cost of implementing this solution, the question arises: is there a better, more cost-effective and efficient way of implementing multiplexing and demultiplexing? The answer is of course yes. An event mesh can in fact be used to implement these two paradigms “effortlessly”.
With an event mesh’s smart topics (using wildcards and variables in the hierarchical topic structure) and “destination bridging” capabilities (i.e. connecting topics to queues), implementing multiplexing and demultiplexing requires no extraneous work, as shown in this diagram:
In the multiplexing case, a queue can receive events from multiple topics. The queue thus becomes the core channel that can enable real-time analytics, for instance. In the demultiplexing case, variables can be used to greatly extend the “destination landscape”: leveraging the same basic topic naming structure, one can publish a wide array of events, and on the receiving end one can get very granular about which event set to consume. In the order processing example above, if you publish purchase orders to a topic structure akin to order/processed/{city}/{store} – where city and store are variables – then you can segregate the event stream and deliver the events to consumers that are interested in a particular city, or a particular store within a city. The event mesh automatically routes the events to the respective destinations.
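As a rough illustration of both directions, here is a minimal in-memory sketch (not a real mesh API; the queue and topic names are assumptions): a queue bound to several topics aggregates them into one stream (multiplexing), while the {city} level of the topic fans a single published stream out to per-city queues (demultiplexing).

```python
# In-memory model of topic-to-queue bridging on a mesh (illustrative
# only, not a real mesh API). Queue and topic names are assumptions.
from collections import defaultdict

queues = defaultdict(list)  # queue name -> delivered events

# Multiplexing: one aggregate queue is bound to several source topics.
AGGREGATE_BINDINGS = {
    "order/processed/Chicago/Store-ABC",
    "order/processed/Miami/Store-FLA",
}

def publish(topic: str, event: str) -> None:
    # Multiplex: every bound topic lands in the single aggregate queue.
    if topic in AGGREGATE_BINDINGS:
        queues["all-orders"].append(event)
    # Demultiplex: the {city} topic level selects the destination queue,
    # with no per-destination integration code.
    city = topic.split("/")[2]
    queues[f"{city.lower()}-orders"].append(event)

publish("order/processed/Chicago/Store-ABC", "order-11223344")
publish("order/processed/Miami/Store-FLA", "order-11998877")

print(queues["all-orders"])      # both events (multiplexed)
print(queues["chicago-orders"])  # only the Chicago event (demultiplexed)
```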
Moreover, one can easily combine the two patterns seamlessly to provide support for a wider array of scenarios, as illustrated here:
An event mesh can thus offer a solution for multiplexing and demultiplexing natively, without requiring any development effort or any additional resources for deployment and management. The publishers and subscribers can of course be any type of iPaaS process. This allows developers to focus on building quality business logic and delegate some of the infrastructure-related logic to the event mesh.
Data Path Clearing: Routing/Filtering
In the integration realm there are typically two types of routing:
- destination-based routing: an event comes in over one channel and needs to be routed to other channels; the mapping from one channel to another is typically hard-coded in the integration process.
- content/context-based routing: events arriving over an input channel are processed and, based on payload content, routed to a destination channel.
This relates to capabilities 4 and 5 on the iPaaS list in the introduction. Content/context-based routing is indeed a capability that needs to live in the iPaaS realm, as it may require more complex processing. Destination-based routing, however, is far simpler, and building it in an iPaaS consumes unnecessary resources. The reality is also that destination-based routing is far more frequent and pervasive than content/context-based routing, so reducing the need to custom build it can significantly reduce your integration logic requirements.
The same event mesh capabilities that were used in the multiplexer/demultiplexer case can be brought to bear in the destination routing case. Leveraging the wildcards, variables and the hierarchical structure of event mesh topics, events can be routed and filtered on an event mesh with minimal configuration.
By using an event mesh as the core router, you are essentially freeing up the integration component from needing to do routing logic itself, thus making data path processing a little cleaner. This also has the natural side effect of speeding up traffic.
The same principle applies to filtering. You can of course still apply content-based filtering if you need a specific behaviour; however, if you design your topic taxonomy correctly, you can delegate that capability down to the event mesh as well.
What makes this delegation of routing and filtering possible goes back to the wildcard capabilities described in the previous section. A subscriber can express interest in a set of events that travel over a specific topic structure. Essentially, you can filter at the topic hierarchy level, and by virtue of that “filtering” expression an event mesh will route to your subscriber only the events described by that topic expression.
For instance, say that your topic is defined as: /order/processed/{city}/{store}/{orderid}.
This type of configuration allows you to subscribe to events as granular as the specific order level, including any subset of that level.
Assume that events are published to the following topics by different publishing services:
/order/processed/Chicago/Store-ABC/11223344
/order/processed/Chicago/Store-ABC/11223399
/order/processed/Miami/Store-FLA/11998877
/order/processed/Chattanooga/Store-XYZ/55667788
/order/processed/New York/Store-EFG/33778899
Here are a few example subscriptions (out of an extremely large number of possible combinations):
- /order/processed/Ch*/> : subscribe to all orders from Chicago and Chattanooga
- /order/processed/*/*/11* : subscribe to all orders whose order ID begins with 11
- /order/processed/New York/> : subscribe to all orders issued in New York
There is a vast number of possible destination filtering/routing expressions, offering a very high degree of granularity.
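For readers who want to see these semantics in action, below is a minimal, self-contained Python sketch that mimics the hierarchical matching described above (a trailing * acting as a prefix match within a single level, > matching all remaining levels). It is an illustration of the semantics, not a real mesh API.

```python
# Minimal sketch of hierarchical topic matching: '*' matches one level
# (optionally with a prefix, e.g. 'Ch*'), '>' matches all remaining
# levels. This mimics the semantics described above, not a real API.

def matches(subscription: str, topic: str) -> bool:
    subs = subscription.strip("/").split("/")
    tops = topic.strip("/").split("/")
    for i, s in enumerate(subs):
        if s == ">":                     # '>' swallows the rest of the topic
            return i < len(tops)
        if i >= len(tops):
            return False
        if s.endswith("*"):              # prefix wildcard within one level
            if not tops[i].startswith(s[:-1]):
                return False
        elif s != tops[i]:
            return False
    return len(subs) == len(tops)

topics = [
    "/order/processed/Chicago/Store-ABC/11223344",
    "/order/processed/Chicago/Store-ABC/11223399",
    "/order/processed/Miami/Store-FLA/11998877",
    "/order/processed/Chattanooga/Store-XYZ/55667788",
    "/order/processed/New York/Store-EFG/33778899",
]

for sub in ("/order/processed/Ch*/>",
            "/order/processed/*/*/11*",
            "/order/processed/New York/>"):
    print(sub, "->", [t for t in topics if matches(sub, t)])
```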
In summary, by using an event mesh for destination-based routing and filtering, you get better overall performance and you simplify your integration logic by eliminating the need to embed routing logic in the integration process.
Protocol Bridging
The primary intent of integration is to connect different applications, systems and protocols. Many iPaaS platforms have protocol connectors (e.g. TCP, HTTP, FTPS, JMS, MQTT, AMQP etc.) that they use to marshal information across environments and applications. In doing so, the integration processes will often act as “protocol bridges”. This type of pattern is aligned with item 2 on the iPaaS capability list outlined in the introduction.
For instance, many enterprises have a mixed eventing landscape, i.e. IBM MQ for mainframe integration, JMS for enterprise integration, Kafka for stream processing, MQTT for IoT, etc. As enterprises expand to the cloud they inevitably encounter additional eventing platforms like Google PubSub, Amazon SQS, etc. In short, unlike the RESTful domain, where everything rides on HTTP and there is no need for “protocol integration”, in the event-full world the diverse protocol landscape complicates things, especially if you want events to traverse protocols. As an example, say you are receiving IoT events over MQTT, and you need to publish those events to Kafka for real-time analytics and to IBM MQ (JMS) to update metrics in the DB2 database on the mainframe. How would you accomplish that?
In the iPaaS world this can easily be accomplished by creating an integration project that ingests events via an MQTT connector and dispatches the same events to both Kafka and MQ via the respective connectors. This assumes no requirements for event transformation. Sounds easy enough, but there are some caveats (a sketch of one such hand-coded bridge follows the list):
- The more messaging brokers you have in the environment, the more pairwise coding you have to write, thereby increasing the potential for configuration errors
- Developers would have to know how all the protocols work, and how to configure them all (i.e. MQTT, JMS, Kafka, etc.), which makes life more difficult.
- Inconsistent monitoring as there is no “unifying” view of events crossing all three eventing platforms
- Scaling the integration components could be challenging, as it could result in event duplication
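For concreteness, here is what one leg of that pairwise bridging might look like if coded by hand in Python with the paho-mqtt and kafka-python client libraries. The broker addresses and topic names are placeholder assumptions, and every additional broker pair would need its own such bridge.

```python
# Hand-rolled MQTT -> Kafka bridge, one of the pairwise legs an
# iPaaS-style approach implies. Broker addresses and topic names are
# placeholder assumptions. Requires paho-mqtt (1.x-style callbacks)
# and kafka-python.
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

def on_message(client, userdata, msg):
    # Forward each IoT reading to Kafka for real-time analytics.
    producer.send("iot-telemetry", msg.payload)
    # A separately coded and configured leg would be needed here for
    # the JMS/IBM MQ mainframe feed, and another for every broker
    # pair added later.

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt-broker.example.com", 1883)
client.subscribe("sensors/#")  # note: MQTT wildcard syntax, not the mesh's
client.loop_forever()
```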
One of the great attributes of an event mesh is that it has built-in bridging capabilities that do not require developers to be “all-knowing”! Developers can just use the event mesh’s native interface to publish and subscribe to events, or, if they are comfortable with just one protocol, say JMS, they can use that to interact with the event mesh. The event mesh’s bridges connect the source and destination protocols at the infrastructure level, making the developer’s life a lot easier. The other benefit is that the event processing logic can remain the same while the mesh is reconfigured to interact with other protocols. Say you want to replace MQTT with Google PubSub: you would still interact with the same topic, but under the covers that topic now receives events from Google PubSub instead of the MQTT broker. The diagrams below illustrate this paradigm.
Essentially, leveraging the event mesh’s bridging capabilities eliminates the need to write bridging code in the integration layer, reduces configuration errors, and requires fewer resources. Additionally, because the event mesh is a connecting fabric for all the other protocols, it has complete visibility of the journey of events during their entire lifecycle, across all the eventing protocols in the environment, so observability and monitoring are essentially “unified”.
Scalability
The patterns described above are even more impactful at large scale. One of the great features of an event mesh is the ability of the nodes within the mesh to dynamically exchange subscriber lists with each other; this gives the mesh the knowledge of how to most optimally route events across broad geographical areas. For instance, orders published in North America can be instantly received in Europe or Asia without any coding or configuration required.
Protocol bridging becomes more powerful and impactful as the variety and distribution of eventing platforms increases. Being able to seamlessly transfer events across any cloud or on-prem environment, and across any combination of cloud eventing and enterprise eventing platforms, not only simplifies the developer’s life but also reduces the amount and cost of operational management.
In short, for global scale integration projects, an event mesh can bring about many optimizations and cost reductions.
Summary
In this article I explored how an event mesh can complement certain iPaaS functions. An event mesh is not meant to replace iPaaS capabilities, but to offer, in certain cases, an alternative solution to certain problems. The intent is to simplify certain processes and shift (where appropriate) complexity from the iPaaS to the event mesh, where that complexity is essentially neutralized by the mesh’s native capabilities.
Leveraging an event mesh where appropriate frees up developers’ time and allows them to focus on higher-value tasks. Reducing the need to write infrastructure-related code means fewer processes to deploy and manage (and thus greater resource efficiency) and lets you benefit more from the extensive scalability of an event mesh. This leads to a shorter time to market for projects, reduces errors and maintenance costs, and makes for a more stable overall architecture.