We often get asked how Solace PubSub+ compares with alternative event streaming platforms and other technologies that help enterprises enable the adoption of event-driven architecture. From open-source software like Apache Kafka, Apache ActiveMQ, and RabbitMQ, to commercial event brokers like Confluent Cloud, Amazon Simple Queue Service (SQS), and Red Hat AMQ, and even to integration technologies like MuleSoft’s Anypoint Platform, people considering Solace want to know where we fit, how we stack up and, more broadly, how to think about our platform and the value it can help them realize.
The points of reference for comparing event streaming platforms and technologies for event-driven architecture can vary widely, so I’m here to offer a clear and honest perspective on the options that are out there, the specific problems and opportunities that can be addressed with our platform, and why I think Solace is leading the pack.
Our most differentiating and most powerful value proposition is that we help enterprises adopt, manage, and leverage event-driven architecture (EDA). In my view we do it better than any company in the world, and in this post I’ll explain why I believe that.
It all stems from our perspective on the two major capabilities enterprises need to adopt and leverage event-driven architecture at scale: enterprise-grade event streaming and event management.
From this perspective, here is my view of the market:
Now let me explain my rationale for evaluating EDA-enabling technologies in this way, and why I have Solace PubSub+ leading on each axis.
In a nutshell, you need two things: (1) enterprise-grade event streaming and (2) event management capabilities.
Let’s start with the rationale for the x-axis and the y-axis.
We know event-driven architecture is a rising priority for modern enterprises that want to be more real-time, agile, and resilient in this era of unprecedented business disruption, technological advancement, and rising consumer expectations. But event-driven architecture can be difficult to implement. Especially when:
When I boil these challenges down and think about what enterprises are trying to achieve with event-driven architecture, I think they need two things: enterprise-grade event streaming and event management.
By enterprise-grade event streaming, I mean the capabilities that enable an enterprise to stream events quickly, reliably, and securely (and here’s the tricky part) across hybrid-cloud, multi-cloud, and IoT environments, and across other elements that may be part of your supply/value chain (such as factories, stores, warehouses, and distribution centers).
If you’re thinking multi-cloud and IoT are not relevant to you today, ask yourself if they might be soon. Or ever. Chances are they’ll be relevant sooner rather than later. So you need an event streaming layer that can cover those complex environments.
You will need that event streaming layer to be performant, reliable, and resilient. Maybe you don’t need to move events at the speed of light. Maybe you don’t need five-nines availability. But if you want to unlock the benefits of event-driven architecture, you need events to be transmitted (produced and consumed) more or less as they happen, and in the correct temporal order. And you can’t afford downtime, or to lose data, or to have the transmission of an event delayed due to geography.
For these reasons, features like low latency, high availability, disaster recovery and WAN optimization are important. In many cases they are critical.
You also need events to be streamed as efficiently as possible. You will want to be selective about what data you stream to and from the cloud to minimize egress costs, and you don’t want applications or people to have to sift through reams of data to get to what they want to consume. For these reasons, a rich topic hierarchy and fine-grained filtering are important.
Here is a great 5-minute video on the importance of topic hierarchy:
Event Topics: Kafka vs. Solace for Implementation
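To make this more concrete, here is a minimal, illustrative Python sketch of hierarchical topics and wildcard filtering. It is not tied to any particular broker API; the topic scheme and the single-level (*) and multi-level (>) wildcards are simplified stand-ins for the subscription wildcards most brokers offer (MQTT’s +/# and Solace’s */> being familiar examples), and exact matching semantics differ by broker.

```python
# Illustrative only: hierarchical topics and wildcard filtering in plain Python.
# The topic scheme (domain/region/store/department/event-type) is a hypothetical example.

def matches(subscription: str, topic: str) -> bool:
    """Return True if a topic matches a subscription.
    '*' matches exactly one level; '>' matches one or more remaining levels."""
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, sub in enumerate(sub_levels):
        if sub == ">":                        # multi-level wildcard: match the rest
            return i < len(top_levels)
        if i >= len(top_levels):
            return False
        if sub != "*" and sub != top_levels[i]:
            return False
    return len(sub_levels) == len(top_levels)

events = [
    "retail/emea/store042/pos/sale",
    "retail/emea/store042/inventory/stockout",
    "retail/apac/store117/pos/sale",
]

# A fraud-detection service might only care about point-of-sale events in EMEA:
subscription = "retail/emea/*/pos/>"
print([t for t in events if matches(subscription, t)])
# ['retail/emea/store042/pos/sale']
```

The point is that a well-designed topic hierarchy lets each consumer subscribe to exactly the slice of the stream it needs, so filtering happens at the broker and unnecessary data movement is kept to a minimum.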
Lastly, you will want your event streaming layer to support multiple open standard protocols and APIs, to connect with the variety of IT and OT you have or expect to have — including legacy applications, new microservices, serverless apps and third-party SaaS — all while avoiding technology and/or vendor lock-in.
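To illustrate what open standard protocol support buys you, here is a minimal sketch of publishing an event over MQTT using the open-source Eclipse Paho client. The broker hostname, port, and topic are placeholder assumptions; any broker that speaks MQTT could sit on the other end, with no proprietary SDK required.

```python
# A minimal sketch of publishing an event over the open MQTT protocol using the
# Eclipse Paho client (pip install paho-mqtt). The broker hostname and topic
# below are placeholders; any MQTT-capable broker could receive this event.
import json
import paho.mqtt.publish as publish

payload = json.dumps({"orderId": "A1234", "status": "CREATED"})

publish.single(
    topic="acme/orders/created/A1234",   # hypothetical hierarchical topic
    payload=payload,
    qos=1,                               # at-least-once delivery
    hostname="broker.example.com",       # assumed MQTT endpoint
    port=1883,
)
```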
Why have I not included capabilities for event stream processing or streaming analytics in my list of fundamental criteria for enterprise-grade event streaming? In short, that’s because I view those as use cases or things you can do with event-driven architecture rather than things that are fundamental to building and supporting event-driven architecture.
We believe event-driven architecture is a critical and foundational layer needed to support a wide variety of use cases, such as event-driven integration, microservices, IoT and streaming analytics, so what you have in place should enable and support those things, but they are not fundamental to it.
So that’s my rationale for the y-axis and the criteria that inform the relative positioning of the products and technologies along it (more on that below). Now for the x-axis.
By event management, I mean the tools that help developers, architects, and other stakeholders work efficiently and collaboratively in designing, deploying, and managing events and event-driven applications. Think of it as a toolset similar to those associated with API management, but for the real-time, event-driven world.
API Management, Meet Event Management: a breakdown of the value proposition for API management platforms and what API management means for events.

These capabilities become increasingly important as you scale your use of event-driven architecture. That’s when most organizations we work with start thinking about how they can expose and share event streams and event schemas so they can be used and reused across the enterprise. After that, they start sweating things like event governance and yearning for a way to visualize their topologies.
The truth is that most enterprises will make their way to some sort of event management toolset when architects and developers can’t answer basic questions like:
Enterprises can’t hope to adopt event-driven architecture at scale or maximize the business value of their events if stakeholders can’t answer those basic questions. And while Visio, Excel spreadsheets and Jira are great tools, they were not purpose-built to solve this problem.
By the way, analysts from Gartner, Forrester and other firms have noted the need for (and lack of) tooling in this area. Feel free to reach out and ask about it. As one analyst firm puts it:

“For event-based interfaces, apply the principles of API management, but recognize that the tooling in this market is still emerging.”
So that’s my rationale for the x-axis and the criteria that inform the relative positioning of the products and technologies along it.
Now let me explain the relative ranking/positioning of the technologies.
I have Solace PubSub+ Event Portal leading on the x-axis, so let me start with event management because I think that’s the more straightforward case to make.
Solace PubSub+ Event Portal is the first and only (as of publishing) product of its kind. It’s a purpose-built event management toolset that can be used to design, discover, catalog, visualize, secure and manage all the events and event-driven apps in an event-driven architecture ecosystem.
More precisely, developers and architects can use PubSub+ Event Portal to:
PubSub+ Event Portal covers many of the requirements enterprises will need to adopt, manage, and leverage event-driven architecture at scale. Our vision is for it to be a universal event management toolset you can use with any of the popular open-source and commercial event brokers. At the time of writing, you can use it to scan event streams from PubSub+ event brokers, Apache Kafka brokers, and Kafka distributions from Confluent and Amazon Managed Streaming for Apache Kafka (MSK).
Understanding the Concept of an Event Portal – An API Portal for Events: an event portal, like an API portal, provides a single place for an enterprise to design, create, discover, share, secure, manage, and visualize events.

There are alternatives for some of this functionality.
Confluent and Red Hat both have schema registries that provide a REST interface for storing and retrieving event schemas. But I believe both registries are meant more for enforcing schemas at runtime (to prevent runtime errors for events and APIs) and for data validation than for making events and schemas discoverable so they can be used, managed, and reused over time. My colleague and Solace’s Field CTO, Jonathan Schabowsky, wrote an interesting blog post on this topic.
Here is what Shawn McAllister, our CTO and CPO, said when I asked him about this:
“Schema registry is used by publisher apps to store the schema associated with an event stream, and it’s used by consumers to know how to decode events — so it’s used at runtime. Our Event Portal reads schemas from the schema registry as part of ‘discovery’ and then loads them into Event Portal so users can understand, manipulate and evolve them. So Event Portal is used at design time. Our Event Portal and these schema registries are truly different tools and completely complementary.”
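To make that runtime versus design-time distinction concrete, here is a minimal sketch of the runtime side: a consumer fetching the latest registered schema for a subject over a Confluent-style schema registry’s REST interface before decoding events. The registry URL and subject name are assumptions for illustration.

```python
# Illustrative runtime use of a Confluent-style schema registry: fetch the
# latest schema for a subject over its REST API so a consumer can decode
# incoming events. The URL and subject name are placeholder assumptions.
import json
import requests

REGISTRY_URL = "http://localhost:8081"          # assumed registry endpoint
SUBJECT = "orders.created-value"                # assumed subject name

resp = requests.get(f"{REGISTRY_URL}/subjects/{SUBJECT}/versions/latest")
resp.raise_for_status()

entry = resp.json()                             # contains id, version, schema
schema = json.loads(entry["schema"])            # the registered (Avro) schema
print(f"Using schema id={entry['id']}, version={entry['version']}")
print([f["name"] for f in schema.get("fields", [])])
```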
IBM’s Cloud Pak for Integration version 2021.1.1 gets a bit closer to our vision for an event portal, as it includes functionality like the ability to “socialize your Kafka event sources” and “create an AsyncAPI document to describe your application that produces events to Kafka topics, and publish it to a developer portal.” But it all seems to be focused on Kafka, and these features are confined to IBM’s own ecosystem and infrastructure.
Lenses.io is perhaps the closest to the value proposition of PubSub+ Event Portal in that it provides tooling to view data lineage flows and microservices so you can “drill down into applications or topics to view partitioning, replication health, or consumer lag.” It also includes a data catalog and a “self-serve administration and governance portal” where you can “view and evolve schemas.” But this toolset is focused on Kafka and Kafka distributions (such as MSK); it’s similar to Confluent Control Center but with more capabilities. It feels more like a data operations tool for accessing and leveraging data streams for things like event stream processing.
Given the purpose, capabilities and our vision for PubSub+ Event Portal, and in light of the alternatives currently available, I have PubSub+ Event Portal leading along the x-axis. I don’t have it on the far right of the x-axis because the full vision of the product has yet to be realized.
Now for the y-axis.
I have Solace PubSub+ Event Broker leading on the y-axis, so let me start by explaining why I have all the free and open-source technologies (yes, even Kafka) below their commercial alternatives on the enterprise-grade event streaming spectrum.
Mainly because few (if any) of the features I listed above as important for enterprise-grade event streaming come out-of-the-box (OOTB).
You have to build them in manually. In some cases, you’re aided by great documentation and a vibrant developer community. In other cases, not so much. And the vibrancy of that community and the quality of that documentation can change over time; the people driving and overseeing them certainly will.
All that manual DIY work is time-consuming and often brittle. And if the DIY team leaves, where does that leave you?
It has also been our experience that what starts out as a very efficient and manageable open-source initiative can quickly turn into the exact opposite as the project grows in scale and complexity, at which point you end up paying a commercial service provider to help. For enterprise-scale event-driven architecture, it’s often better to think ahead and buy a product that has the features you need now and may need in the future.
Another point here relates to the perception that open-source gives you more freedom, flexibility, and agility. In many ways, that’s true. But in one important way, it’s not. Some open-source technologies are based on their own unique protocols and APIs, so it can take a lot of work to get them to work well with other technologies. It also becomes increasingly difficult to switch out that open-source technology as your deployment grows and becomes more complex and more entrenched.
I appreciate and see the value in open-source technologies for many enterprise projects. With respect to specific event streaming use cases like streaming data analytics or event stream processing, if you have the development talent and resources, maybe it makes sense to go the open-source route. But for enterprises looking to adopt and leverage event-driven architecture at scale, or use event-driven architecture as the foundation for many modern use cases such as event-driven integration, microservices, IoT and analytics, I don’t think the DIY/open-source route makes as much sense.
So those are the reasons I have open-source below the commercial products for enterprise-grade event streaming.
I will now explain the relative positioning of the commercial event streaming platforms and technologies in the top left quadrant.
You will recall the capabilities that I’m associating with enterprise-grade event streaming — those that enable an enterprise to stream events across hybrid, multi-cloud, and IoT environments, and do so quickly, efficiently, reliably and securely.
So I did some thinking, spoke to our sales engineers, and came up with criteria to evaluate technologies along these lines. Then I reviewed our internal competitive intelligence and did some desk research (product and solution web content and documentation) to fill knowledge gaps. Then I spoke to our CTO team and three industry analysts from different firms. Then I circled back with specific technology leaders on our team to revise and hone our thinking.
This table is the result of that process:
Is it 100% correct? Probably not. Is it 80% correct? I’m confident it is in that ballpark.
The point is really to give you a sense of the relative capabilities of event streaming technologies that can support enterprise event-driven architecture. I’ve tried to do that as honestly and objectively as possible. That said, if you’re deep into evaluating these technologies, you’ll have questions.
You may even have a raised eyebrow if something felt off to you based on your own research or experience. I understand that the best way to evaluate products is based on the functional requirements of the use case(s) you intend to address. In this evaluation, I looked at a wide range of use cases that I think are or will be relevant to the modern enterprise.
All that said, I’ve done my best to anticipate your questions, and I’ve provided answers and my rationale below.
Still have questions? Consider contacting us directly. We would love to have a more in-depth conversation. I’d also encourage you to ask an analyst for their perspective.
To bring this home, we are seeing event-driven architecture become a mainstream architectural pattern. It is being leveraged by enterprises across all major industries to underpin modern technology patterns and use cases like event-driven integration, microservices, IoT, streaming analytics and more. It is being applied to improve a variety of business processes like:
And many more.
We believe the enterprises that will lead their markets in the years to come will have a strong digital backbone for enabling event-driven architecture across their distributed enterprise so they can harness the full power and potential of events to maximize operational efficiencies, increase agility and accelerate innovation.
We believe a strong digital backbone for event-driven architecture includes two core elements: an enterprise-grade event streaming layer and an event management toolset.
And we’re confident that Solace PubSub+ Event Broker and PubSub+ Event Portal represent best-in-class offerings for these two core capabilities. But here’s an important point that I’ve saved for the end.
Being able to leverage these core products and capabilities together, in one platform, can deliver additional (and incredibly significant) benefits. This was really emphasized by Jesse Menning, an architect in our CTO’s office, who, upon reviewing this post (prior to this concluding section), said:
“At this point, we need to emphasize that the Solace PubSub+ Platform is better than the sum of its (powerful) parts. To me, the full value of EDA isn’t realized with component-by-component architecture, even if you have great components. With Solace, discovering events on the event broker feeds crucial information into Event Portal, Event Portal pushes runtime configuration to the broker, while Insights gives you info across the enterprise. That cohesion of vision creates a virtuous cycle and sets Solace apart.”
Of course, I totally agree. There are so many synergies, so many time efficiencies, insights and quality-enhancing processes that exist at the intersection of these products. Especially when they are designed to work together. And that’s the case with PubSub+ Event Broker and PubSub+ Event Portal, the two core components of PubSub+ Platform.
These are all the reasons why I can truly and confidently say that Solace helps enterprises adopt, manage, and leverage event-driven architecture, and we do it better than any company in the world.