The world is more reliant on digital technology than ever, so businesses that want to survive and grow must implement a digital transformation strategy that enables real-time responsiveness and data management. Customers have become accustomed to getting value and instant solutions on demand; if forced to wait, they will take their business elsewhere.

The following scenarios and products are no longer considered revolutionary or disruptive; they are now the norm:

  • Ride share and delivery platforms with real-time location updates.
  • Real-time collaboration and multi-user editing platforms.
  • Fintech apps with instant fund transfer capabilities.
  • Smart home IoT devices and voice assistants ready to respond to commands/questions.
  • Smart elevators providing real-time contextual information to users and maintenance contractors.
  • Real-time airport ground operations optimization.
  • Online order management systems that are responsive, even under peak demand situations.
  • Recommendation systems that continuously become better by ingesting various streams of data.

Analysts believe that event-driven architecture (EDA) is a foundational element of digital business, predicting that by 2022 over 50% of companies would participate in an event-driven digital ecosystem. We are observing this trend firsthand as an increasing number of our customers adopt event-driven architecture as a digital transformation business strategy.

The Key Elements of a Successful EDA Business Strategy

Deploying event-driven architecture as part of your business's digital transformation strategy can be a complex undertaking, especially for large organizations. Sumeet Puri, Solace's chief technology solutions officer, has developed a step-by-step guide to implementing EDA that offers a proven way to event-enable existing systems, modernize your platform to support streaming, and get buy-in from key stakeholders.

There are several event-driven considerations an organization should keep in mind for a successful digital transformation strategy, three of which I’ll focus on in this blog post: application development agility, event mesh, and health monitoring.

1. Application Development Agility

Application development agility is important because IT departments need to be able to architect, deploy, and manage software applications quickly and consistently. Defining your event taxonomy and standardizing your schemas play a significant role in your developers’ experience and, therefore, time to market. These are the elements that require focus in this area:

  • Architecture consistency and clarity
  • Software development lifecycle management
  • Cross-domain communication
  • Event taxonomy
  • Data governance and standardization
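To make the taxonomy and governance points concrete, here is a minimal sketch in Python. The `<domain>/<object>/<event>/<version>` topic convention and the field names are illustrative assumptions, not a prescribed standard; the idea is simply that validating topics and payloads against agreed conventions is what makes development consistent and fast.

```python
import re

# Hypothetical topic taxonomy: <domain>/<object>/<event>/<version>,
# e.g. "payments/invoice/created/v1". The convention itself is an
# illustrative assumption for this sketch.
TOPIC_PATTERN = re.compile(r"^[a-z]+/[a-z]+/[a-z]+/v\d+$")

# A minimal registry of standardized payload schemas, keyed by topic.
SCHEMAS = {
    "payments/invoice/created/v1": {"invoice_id": str, "amount": float},
}

def validate_event(topic: str, payload: dict) -> bool:
    """Check that an event's topic follows the taxonomy and that its
    payload matches the registered schema for that topic."""
    if not TOPIC_PATTERN.match(topic):
        return False
    schema = SCHEMAS.get(topic)
    if schema is None:
        return False
    return all(
        field in payload and isinstance(payload[field], ftype)
        for field, ftype in schema.items()
    )

print(validate_event("payments/invoice/created/v1",
                     {"invoice_id": "INV-42", "amount": 99.5}))  # True
print(validate_event("Payments/Invoice", {}))                    # False
```

In practice this role is played by schema registries and governance tooling rather than hand-rolled checks, but the principle is the same: standardized events are checkable events.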

2. Event Mesh

Event mesh is an architectural layer that liberates data trapped in legacy applications and streams information between applications across locations, lines of business, and operating environments including clouds and on-premises datacenters.

Such a transformational layer should not lock you into a vendor, technology, or protocol; instead, it should enable your teams to innovate by using best-of-breed technology. An event mesh is:

  • Portable
  • Flexible
  • Intelligent
  • Dynamic
  • Manageable
  • Scalable
  • Secure
  • Reliable
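To illustrate the dynamic-routing idea behind an event mesh, here is a toy sketch: two linked brokers where an event published on one node reaches subscribers on the other. The broker names and topics are made up for the example, and a real mesh handles routing tables, loops, and delivery guarantees far more robustly than this.

```python
from collections import defaultdict

class Broker:
    """Toy mesh node: subscribers register topics locally; linked
    brokers forward events so a publish anywhere reaches matching
    subscribers everywhere."""
    def __init__(self, name):
        self.name = name
        self.subscribers = defaultdict(list)  # topic -> callbacks
        self.peers = []

    def link(self, other):
        # Symmetric link, modeling two nodes of an event mesh.
        self.peers.append(other)
        other.peers.append(self)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event, _seen=None):
        seen = _seen if _seen is not None else set()
        if self.name in seen:          # avoid routing loops
            return
        seen.add(self.name)
        for cb in self.subscribers[topic]:
            cb(event)
        for peer in self.peers:        # forward across the mesh
            peer.publish(topic, event, seen)

# Usage: an app on the cloud broker receives an on-prem event.
on_prem, cloud = Broker("on-prem"), Broker("cloud")
on_prem.link(cloud)
received = []
cloud.subscribe("orders/created", received.append)
on_prem.publish("orders/created", {"order_id": 1})
print(received)  # [{'order_id': 1}]
```

The point of the sketch is the topology: applications publish and subscribe locally, and the mesh handles getting events between environments.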

3. Health Monitoring

Observing and monitoring the health of your event-driven infrastructure and applications is crucial. Once event-driven architecture is implemented, events become the lifeblood of your business. Consequently, you need to have a strategy for keeping your finger on the pulse in the form of:

  • Metrics monitoring
  • Capacity planning
  • Traceability and lineage
  • Application and business health metrics
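As a small illustration of the metrics-monitoring and capacity-planning items above, here is a sketch of a queue-depth monitor. The class and threshold are hypothetical; real deployments would feed these numbers into a monitoring stack rather than compute them inline.

```python
class QueueMonitor:
    """Minimal health metrics for one event queue: depth (messages
    enqueued but not yet consumed) and a depth-based alert."""
    def __init__(self, max_depth=1000):
        self.enqueued = 0
        self.consumed = 0
        self.max_depth = max_depth  # illustrative alert threshold

    def on_enqueue(self):
        self.enqueued += 1

    def on_consume(self):
        self.consumed += 1

    @property
    def depth(self):
        # Growing depth means consumers are falling behind producers --
        # a capacity-planning signal as well as a health alert.
        return self.enqueued - self.consumed

    def needs_alert(self):
        return self.depth > self.max_depth

mon = QueueMonitor(max_depth=2)
for _ in range(5):
    mon.on_enqueue()
mon.on_consume()
print(mon.depth, mon.needs_alert())  # 4 True
```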

A Technological Roadmap for EDA Success

The three strategic aspects of your event-driven digital transformation discussed in the previous section will only get you so far; you need the technology and the tools to implement and act on this strategy. Addressing all of the requirements outlined above can be a massive undertaking, especially if you’re planning on making this a DIY project.

If you were to build a platform incorporating the key elements of application development agility, event mesh, and health monitoring using open-source or commercially-available components, you would need to:

  1. Develop the subject matter expertise
  2. Design the platform
  3. Write a lot of code
  4. Monitor health and keep the platform operational
  5. Keep up with the maintenance and the application team’s requirements

Realistically, that means focusing a development and DevOps team on building and maintaining the platform perpetually. Chances are this would not be your business’s core competency and not where you would want your developers concentrating their efforts.

What about AWS, Azure, and GCP?

Of course, you have the option of using a cloud provider’s managed services, such as AWS SQS/SNS, Azure Event Hubs, or GCP’s Pub/Sub, but while these managed services can address basic messaging requirements to some extent, they are not comprehensive event streaming platforms. They do not offer a hybrid event mesh architectural layer that would allow you to connect your on-premises apps to the cloud and use the best services (for you) in each cloud.

Naturally, it is in the cloud providers’ best interest to lock you into their offerings and keep your data and applications exclusively on their infrastructure. A comprehensive digital transformation strategy for EDA should be portable, however, meaning it should be able to work across your datacenters, on-premises cloud (e.g., Kubernetes), or any public cloud provider to give you flexibility, leverage, and a competitive edge.

The Technological Requirements for Successful EDA Transformation

Like most things in IT, there is no one-size-fits-all approach. But there are several common components you’ll need to make the most of your digital transformation and make working with EDA easier. These include:

  • An event broker
  • A management component
  • A distributed monitoring component
  • An event portal

Diving into each a little deeper…

An event broker should be:

  • High performance with high throughput
  • Flexible enough to address various architectural patterns, including queuing and publish/subscribe
  • Highly reliable and guarantee event delivery
  • Highly secure, independent of where it’s deployed

An event broker should support:

  • Dynamic and intelligent event routing across a network of connected brokers (an event mesh)
  • Various open protocols and APIs
  • REST-based interfaces
  • Connectivity to various cloud services, data stores, and legacy apps
  • Delivery of events in the expected order
  • High availability and disaster recovery
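Two of the patterns listed above, publish/subscribe and queuing, can be sketched in a few lines. This toy broker is purely illustrative (a production broker adds persistence, acknowledgements, HA, and security), but it shows the behavioral difference: fan-out to every subscriber versus FIFO delivery of each message to one consumer.

```python
from collections import deque

class MiniBroker:
    """Toy broker illustrating two architectural patterns:
    publish/subscribe (fan-out to every subscriber) and queuing
    (each message delivered once, in order, from a FIFO queue)."""
    def __init__(self):
        self.topic_subs = {}   # topic -> list of callbacks (fan-out)
        self.queues = {}       # queue name -> deque of messages

    def subscribe(self, topic, cb):
        self.topic_subs.setdefault(topic, []).append(cb)

    def publish(self, topic, msg):
        # Pub/sub: every subscriber to the topic gets a copy.
        for cb in self.topic_subs.get(topic, []):
            cb(msg)

    def send(self, queue, msg):
        self.queues.setdefault(queue, deque()).append(msg)

    def receive(self, queue):
        # Queuing: FIFO pop preserves the order events were produced in.
        q = self.queues.get(queue)
        return q.popleft() if q else None

broker = MiniBroker()
seen = []
broker.subscribe("orders/created", seen.append)
broker.subscribe("orders/created", seen.append)
broker.publish("orders/created", "o-1")
print(seen)                   # ['o-1', 'o-1'] -- fan-out to both
broker.send("work", "job-1")
broker.send("work", "job-2")
print(broker.receive("work"))  # job-1 -- FIFO order preserved
```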

The management component of your platform should allow you to deploy the event brokers anywhere (any cloud provider or on-premises) and perform lifecycle management (upgrades, patching, scaling up and out) on your system. Ideally, it would be an easy-to-use, self-serve GUI with programmable APIs for integration with various SDLC and operational tools.

A distributed monitoring component would include capabilities such as:

  • The ability to observe the critical metrics you need for:
    • Infrastructure capacity planning & uptime
    • Application and business health
  • Log retention and rotation
  • Delivery of alerts and notifications, as well as integration with operational on-call tools
  • Integration with your standard corporate monitoring system

Since the above components only address your infrastructure and observability elements, you now need to consider your application and data architecture, governance, and lifecycle concerns. Simply put, you need a tool that plays the same role that API portals play for the REST-based API ecosystem. An event portal is a tool for architects and developers that includes:

  • A visual designer that illustrates a comprehensive view of the event-driven architecture, reflecting the flow of events within and between domains
  • A searchable catalog of standard events and payload schemas
  • A clear view of the taxonomy or hierarchy of events within various domains
  • A periodic comparison of the runtime environment and the designed architecture (i.e., audit)
  • Version control and access control on the objects
  • Code generation and integration with everyday tools via APIs
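The catalog item above can be pictured with a small sketch. The class, event names, and fields here are hypothetical; a real event portal backs its catalog with governed schemas, versioning, and access control rather than an in-memory dictionary.

```python
class EventCatalog:
    """Toy searchable catalog of events and payload schemas, in the
    spirit of an event portal's catalog (all names and fields below
    are illustrative assumptions)."""
    def __init__(self):
        self._events = {}

    def register(self, name, schema, domain):
        self._events[name] = {"schema": schema, "domain": domain}

    def search(self, term):
        # Match against event names and their owning domains.
        term = term.lower()
        return [name for name, meta in self._events.items()
                if term in name.lower() or term in meta["domain"].lower()]

    def schema(self, name):
        return self._events[name]["schema"]

catalog = EventCatalog()
catalog.register("InvoiceCreated", {"invoice_id": "string"},
                 domain="payments")
catalog.register("ShipmentDispatched", {"order_id": "string"},
                 domain="logistics")
print(catalog.search("payments"))  # ['InvoiceCreated']
```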

PubSub+ Platform: An All-in-One Approach to Your Digital Transformation Strategy with EDA

To recap, digital transformation – focused on the ability to respond to changing business conditions and customer demand – is necessary for maintaining growth and profitability. Event-driven architecture is a key driver of digital transformation, but assembling the processes and tools for a successful EDA implementation requires domain expertise and a comprehensive toolset.

To address this pressing market need, Solace has developed PubSub+ Platform, a complete event streaming and management platform for the real-time enterprise. It helps enterprises design, deploy, manage and monitor event-driven architectures across hybrid cloud, multi-cloud and IoT environments, so they can be more integrated and event-driven.

So if you want to accelerate EDA adoption within your organization to better drive digital transformation and ensure long-term success for your business, I encourage you to get in touch with us today to learn more about the PubSub+ Platform or request a demo.

Ali Pourshahid
Chief Engineering Officer

Ali Pourshahid is Solace's Chief Engineering Officer, leading the engineering teams at Solace. Ali is responsible for the delivery and operation of Software and Cloud services at Solace. He leads a team of incredibly talented engineers, architects, and User Experience designers in this endeavor. Since joining, he's been a significant force behind the PS+ Cloud Platform, Event Portal, and Insights products. He also played an essential role in evolving Solace's engineering methods, processes, and technology direction.
Before Solace, Ali worked at IBM and Klipfolio, building engineering teams and bringing several enterprise and Cloud-native SaaS products to the market. He enjoys system design, building teams, refining processes, and focusing on great developer and product experiences. He has extensive experience in building agile product-led teams.
Ali earned his Ph.D. in Computer Science from the University of Ottawa, where he researched and developed ways to improve processes automatically. He has several cited publications and patents and was recognized as a Master Inventor at IBM.