    Event-driven architecture (EDA) is a software design pattern in which decoupled applications can asynchronously publish and subscribe to events via an event broker, establishing a real-time flow of information between applications, microservices, and devices across environments, geographies, and organizations.

If you’re new to EDA, you can learn more about the fundamental concepts and terms here, but for the sake of this article you really just need to understand these three concepts:

    • Events can be:
      1. Changes of state…really anything that can be noticed and recorded by an application or device.
      2. Discrete signals that indicate that change of state to other components of a system.
    • Payloads are the structured data that capture information required to process or act upon the event.
    • Schemas ensure consistency and reliability in processing these payloads by defining the payload’s structure, format, and data types – a blueprint that governs how events are communicated and understood across different components.

    Together, events, payloads, and schemas form the backbone of how components in an event-driven system interact and share information, enabling seamless integration and reducing potential errors in data interpretation.
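
To make these three concepts concrete, here is a minimal sketch (shown in Python, with the payload schema expressed as JSON Schema); the event name and every field are invented for illustration rather than taken from any particular system:

```python
# The event: a discrete signal that a state change occurred.
event = {
    "type": "OrderShipped",               # what happened
    "id": "evt-1001",                     # unique event identifier
    "timestamp": "2024-05-01T12:00:00Z",  # when it happened
    # The payload: structured data consumers need to act on the event.
    "payload": {"orderId": "ord-42", "carrier": "DHL"},
}

# The schema: a blueprint defining the payload's structure, format,
# and data types, expressed here as JSON Schema.
payload_schema = {
    "type": "object",
    "properties": {
        "orderId": {"type": "string"},
        "carrier": {"type": "string"},
    },
    "required": ["orderId"],
}
```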

    Importance of Events, Payloads and Schemas

In EDA, everything revolves around events, their associated payloads, and the underlying schemas that define them.

    As such, understanding events, payloads and schemas is critical for:

    • Building Scalable Systems: Properly structured events and schemas ensure efficient communication in systems that handle high volumes of data and users.
• Achieving Decoupling: Events allow systems to work independently by decoupling producers and consumers, reducing interdependencies. Schemas facilitate this independence by providing a contract that both parties can rely on.
    • Enabling Real-Time Processing: Payloads provide the necessary context for consumers to respond to events instantly, whether for notifications, analytics, or further workflows. Schemas guarantee the consistency and reliability of this context, even in real-time scenarios.
    • Architecting for the Future: Designing events, payloads and schemas carefully ensures that the system remains modular, extensible, and maintainable as it scales. Mismanaged events or schemas can lead to tight coupling, inefficiency, and technical debt.
    • Enabling Effective Governance: Consistent standards for event naming, payload structure, and schema versioning enable better collaboration among teams, reduce errors, and improve system observability. Without governance, teams may struggle with issues like incompatible payloads, redundant events, or inefficient data flows.

    What This Article Covers

    With this post I want to help you understand the fundamental concepts of events, payloads and schemas so you can:

    • Differentiate between the types of events and when to use each.
    • Design payloads and schemas that are concise, efficient, and maintainable.
    • Address challenges like schema evolution and payload optimization in event-driven systems.
    • Establish architectural and governance best practices to ensure sustainable system growth.

    Types of Events and When to Use Them

An event, as mentioned before, is a record of a state change or an occurrence that is important to a system or its components. Events are produced by event publishers (e.g., an application, service, or external system) and consumed by event subscribers or processors.

Each event normally carries some data, called the payload, which consumers may need in order to process the event. Architects and specialists have described a wide range of event types from their own points of view; while most of these overlap, there are also niche types or subtypes that can be derived from the main ones.

    Selecting the appropriate event type is critical for the effectiveness and scalability of an event-driven system.
The decision largely depends on several factors:

    • The system’s purpose
    • The required level of data decoupling between systems
    • The data requirements of the consumers
    • The frequency and volume of the events
    • The capacity of data sources to serve data

Let’s start by looking at the main types of events: notification, state transfer, delta, and domain/integration events.

    Notification Events

Notification events are often referred to as thin events. They contain minimal payload information, such as the event type, an identifier, and a timestamp: typically just enough to inform consumers that something happened.
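
As an illustration, a hypothetical thin event announcing a document update might look like the sketch below; the field names and the fetch endpoint are assumptions, not part of any specific product:

```python
# A thin notification event: just enough to say "something happened".
notification_event = {
    "type": "DocumentUpdated",
    "documentId": "doc-123",
    "timestamp": "2024-05-01T12:00:00Z",
    # No document content is carried: consumers that need it fetch it
    # themselves, e.g. via a (hypothetical) GET /documents/doc-123 API.
}
```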

    Key Characteristics

• Light payloads keep the producer-consumer payload contract minimal
• The thin payload reduces compatibility concerns when schemas evolve
• While lightweight, they shift the burden of fetching context onto consumers, increasing complexity on their side

    When to Use Notification Events

    Notification events are lightweight signals that inform subscribers about an event occurrence without providing the full state or additional details. They are simple to implement but can lead to inefficiencies if consumers frequently retrieve redundant or unnecessary data. This pattern is best suited for systems with predictable data access patterns.

• System’s Purpose: Notification events are used when systems only need to be informed about an occurrence, such as triggering workflows, notifying users of status changes, or invalidating caches. For example, a notification event could signal that “a document was updated,” leaving consumers to fetch the full document if needed.
    • Required Level of Decoupling: Notification events provide high decoupling. Since the notification does not include data, there are no dependencies on specific data formats or schemas. The producer emits the event, and the consumer determines the appropriate action independently.
    • Consumers’ Data Needs: Data needs are minimal. Consumers fetch additional information as required, typically through APIs or querying databases. This approach is effective for systems with reliable, self-sufficient data retrieval mechanisms.
    • Frequency and Volume: Notification events work well in high-frequency scenarios, such as real-time alerts or updates. Their lightweight nature minimizes the burden on event streams. However, a high volume of notifications may overwhelm downstream consumers, especially if many queries are generated for additional data.
    • Capacity of Sources to Serve Data: While producers offload heavy data generation, data sources must handle the increased load from consumer queries. Systems with optimized APIs and caching mechanisms are better suited to supporting this pattern at scale.
    • Trade-Offs and Limitations: Notification events may lead to inefficiencies if consumers repeatedly fetch redundant data or rely too heavily on the producer for updates. They are not ideal for use cases where consumers need detailed contextual data within the event itself.

    State Transfer Events

These events embed the full state or relevant context of the triggering entity in the event payload, providing all necessary information to the consumer. This type is typically associated with the Event-Carried State Transfer (ECST) pattern.
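
For example, a hypothetical ECST-style “UserUpdated” event might carry the complete profile so consumers need no follow-up queries; all field names here are illustrative assumptions:

```python
# A state transfer event: the full current state travels with the event.
state_transfer_event = {
    "type": "UserUpdated",
    "id": "evt-2001",
    "timestamp": "2024-05-01T12:00:00Z",
    "payload": {  # the complete entity state, not just a reference
        "userId": "user-7",
        "name": "Ada Lovelace",
        "email": "ada@example.com",
        "tier": "gold",
        "addresses": [{"city": "London", "postcode": "EC1A 1BB"}],
    },
}
```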

    Key Characteristics

• Allow decoupled consumers to act independently, avoiding additional queries.
• Tighten the producer-consumer schema contract, as consumers depend on the exact structure of the event payload.
• With larger payloads and high event frequency, they can lead to increased network traffic.

    When to Use State Transfer Events

State transfer events include the entire state (or a substantial portion of it) in the event payload, making the event self-sufficient. This approach promotes consumer independence but can increase payload sizes and storage costs.

    • System’s Purpose: This type is ideal when consumers require complete, self-contained data in each event, such as in data pipelines, ETL systems, or data replication processes. For example, a “UserUpdated” event could carry the entire user profile to allow consumers to process it independently.
    • Required Level of Decoupling: State transfer events provide strong decoupling between producers and consumers. Consumers can process the event without relying on external queries or API calls, which reduces external dependencies and increases system reliability.
    • Consumers’ Data Needs: Data needs are high. Since the full state is included, consumers have all the necessary information in the event. This makes it suitable for batch processing systems or scenarios where events are stored for long-term analysis.
    • Frequency and Volume: This type is suited for low-to-moderate frequency use cases due to its heavier payload sizes. Large, infrequent updates, such as a daily export of product inventory, are better suited for this pattern.
    • Capacity of Sources to Serve Data: Producing complete state snapshots can be resource-intensive, requiring efficient data export or caching mechanisms. Systems generating frequent state transfers must ensure scalability and avoid bottlenecks.
    • Trade-Offs and Limitations: While it minimizes consumer dependencies, state transfer events can lead to data duplication across the system, increasing storage and bandwidth usage. Proper data governance and version control are critical for maintaining system scalability.

    Delta Events

Delta (i.e., difference) events contain only the attributes that have changed since the last state update, rather than the entity’s entire representation.
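
For contrast with the state transfer sketch above, a hypothetical delta version of the same “UserUpdated” event carries only the changed attribute, plus a sequence number so consumers can apply changes in order; the fields are illustrative assumptions:

```python
# A delta event: only the diff since the last known state.
delta_event = {
    "type": "UserUpdated",
    "userId": "user-7",
    "sequence": 42,                   # lets consumers order deltas and detect gaps
    "changes": {"tier": "platinum"},  # the changed attributes, nothing else
}
```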

    Key Characteristics

• Efficient for communicating incremental updates.
• Reduce the complexity for consumers of working out what has changed, since the change is stated explicitly.

    When to Use Delta Events

    Delta events capture and propagate incremental changes, providing only the minimal information about what has changed. While they optimize bandwidth usage, they require consumers to manage state reconstruction and ordering complexities.

    • System’s Purpose: Delta events are used for systems requiring high precision in propagating incremental changes, such as real-time analytics or financial systems. For instance, they are suitable for updating stock prices, where only the change (e.g., +$0.50) is transmitted.
    • Required Level of Decoupling: Decoupling is moderate, as consumers must understand the sequence of delta events to reconstruct the full state. This requires tight coordination between producers and consumers to ensure consistency.
    • Consumers’ Data Needs: Consumers need prior context to interpret delta events. This approach is unsuitable if consumers frequently require full data snapshots, as it relies on the ability to incrementally build state.
    • Frequency and Volume: Delta events are well-suited for high-frequency scenarios, such as streaming sensor data or collaborative editing platforms. Their lightweight nature reduces network load.
    • Capacity of Sources to Serve Data: Efficient delta generation requires systems capable of calculating changes accurately, often leveraging diff algorithms or real-time tracking mechanisms.
    • Trade-Offs and Limitations: Delta events introduce complexities such as handling event ordering, dealing with version mismatches, and recovering from missed or corrupted events. They are less resilient in scenarios where guaranteed delivery or state accuracy is critical.

    Domain or Integration Events

Domain events and integration events represent significant changes or actions, either within a specific domain or across multiple integrated systems. They facilitate communication and coordination across multiple application domains or lines of business.
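
As a sketch, a hypothetical “OrderPlaced” domain event carries business-level semantics and summary context rather than a full data dump; the field names are assumptions for illustration:

```python
# A domain event: business-level meaning plus summary context.
domain_event = {
    "type": "OrderPlaced",
    "orderId": "ord-42",
    "customerId": "cust-9",
    "totalAmount": {"value": 99.95, "currency": "EUR"},
    "timestamp": "2024-05-01T12:00:00Z",
    # Line items, shipping details, etc. are left for consumers to
    # fetch from the owning service if they actually need them.
}
```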

    Key Characteristics

    • Often tied to business logic or workflows.
    • Can span multiple microservices or domains, acting as a bridge between them.
    • They can increase cross-domain dependencies which may require robust event governance.

    When to Use Domain/Integration Events

    Domain or integration events represent meaningful occurrences within a business domain, modeled around domain-driven design (DDD) principles. They encapsulate high-level semantic information about what happened.

    • System’s Purpose: These events are best for capturing business-level activities, such as “OrderPlaced” or “PaymentProcessed.” They trigger workflows, enable inter-service communication, and align well with domain-driven design practices.
    • Required Level of Decoupling: Decoupling is moderate, as producers and consumers share a common understanding of the domain model. Consumers do not rely on producer-specific implementations, fostering clear communication and clean contracts between systems.
    • Consumers’ Data Needs: Data needs are moderate to high. Domain events often include relevant metadata and context, but consumers may need to enrich the data by querying other services. For example, an “OrderPlaced” event might include the order ID and customer information but leave additional details to be fetched.
    • Frequency and Volume: Domain events typically operate in low-frequency scenarios compared to delta or notification events. However, they are vital for orchestrating reliable workflows in systems like e-commerce or banking.
    • Capacity of Sources to Serve Data: Domain events are lightweight on data sources since they carry summary-level information. Producers must ensure events contain enough context to enable consumer actions without excessive queries.
    • Trade-Offs and Limitations: Domain events require robust schema management to maintain compatibility as systems evolve. Implementing event versioning and backward compatibility practices is essential to ensure long-term stability.

    Other Types of Events

    Beyond the main categories, certain niche or subtypes of events emerge in specialized use cases:

• Summary events aggregate multiple events into a summarized payload, which is useful for batch processing or analytics workflows. They are not real-time; they serve periodic or scheduled processing.
• Tombstone events are a mechanism in Apache Kafka for marking records for deletion in log-compacted topics. They act as a deletion signal to downstream consumers.
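
As a minimal sketch, publishing a tombstone with the confluent-kafka Python client just means producing a record whose value is null for the key to be deleted; the broker address and topic name below are assumptions for this example.

```python
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

# On a log-compacted topic, a record with a null value marks its key
# for deletion once compaction runs; downstream consumers treat it as
# a deletion signal.
producer.produce("user-profiles", key="user-7", value=None)
producer.flush()
```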

    When is an event not an event?

Event design in EDA can be quite nuanced. Just because something has been designated an “event” does not mean it will help achieve loose coupling and the other associated benefits of EDA.

    A ‘command’ is a 1-to-1 communication between a sender and a specific receiver, intended to invoke a process or action in another application (e.g., publishing an event). In contrast, an event represents something that has already occurred and can be consumed by multiple subscribers.

    Why is it important to not mislabel commands as events?

    • Loss of Intent Clarity: Commands have imperative intent, while events convey descriptive context. Mislabeling blurs this boundary and complicates system understanding.
• Tight Coupling: Commands introduce dependencies, as they demand a specific action from the consumer. Events, on the other hand, allow consumers to react independently based on their individual business requirements.
• Scalability Impacts: When commands are mislabeled as events, the event stream can become cluttered with action-specific messages. This reduces its reusability and undermines scalability by increasing message processing overhead.

    How to avoid this mistake

    • Define Clear Naming Conventions: Events should describe past-tense occurrences (e.g., ‘OrderShipped’), while commands should use imperative language to request actions (e.g., ‘ShipOrder’).
    • Audit Payloads for Intent: Examine event payloads to ensure they are purely descriptive and do not impose behavior on consumers. Prescriptive actions should be delegated to separate command messages to maintain decoupling.
    • Implement Governance: Implement governance processes, such as architectural reviews, to ensure that messages adhere to their intended design principles and roles within the system.
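
To make the distinction concrete, here is a minimal sketch contrasting the two message shapes; the names and fields are illustrative assumptions:

```python
# Command: imperative, addressed to one specific receiver, demands an action.
ship_order_command = {
    "type": "ShipOrder",     # imperative: "do this"
    "orderId": "ord-42",
    "carrier": "DHL",
}

# Event: descriptive, past tense, broadcast for any subscriber to react to.
order_shipped_event = {
    "type": "OrderShipped",  # descriptive: "this happened"
    "orderId": "ord-42",
    "shippedAt": "2024-05-01T12:00:00Z",
}
```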

    Designing Effective Payloads

    Designing event payload schemas that are concise, efficient, and maintainable is a crucial aspect of event-driven systems. A well-structured schema ensures seamless communication between components while minimizing complexity and resource consumption.

    In this section I will describe some principles and practices that will help you create effective payload schemas.

    Key Principles of Payload Design

    Efficient payload schema design balances minimalism with functionality. It ensures that payloads carry only the necessary data while being flexible enough to accommodate future needs.

    Conciseness

    • Include only the data required for the consumer to process the event.
    • Avoid sending redundant or irrelevant information.

    Efficiency

    • Use lightweight formats such as JSON, Avro, or Protobuf to reduce payload size and transmission costs.
    • Optimize payloads for serialization and deserialization performance.

    Maintainability

    • Use consistent naming conventions and data structures across all payload schemas.
    • Ensure the schema is self-explanatory and well-documented.

    Payload Validation and Error Handling

    Ensuring the integrity and validity of the payload is essential for maintaining system stability.

    Schema Validation

    • Use schema definition languages like JSON Schema, Avro, or Protobuf to define and validate payload structures.
    • Enforce rules for mandatory fields, data types, and value ranges.

    Handling Malformed Payloads

    • Detection: Implement validation checks at the producer and consumer ends.
    • Recovery: Use retry mechanisms or send error notifications to producers.
    • Logging: Log invalid payloads for debugging and analysis.
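
Putting detection, recovery, and logging together, here is a minimal consumer-side sketch using the jsonschema Python library; the schema, payloads, and the dead-letter step are assumptions for illustration:

```python
import logging
from jsonschema import ValidationError, validate

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order-consumer")

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "orderId": {"type": "string"},
        "carrier": {"type": "string"},
    },
    "required": ["orderId"],
}

def handle(payload: dict) -> None:
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)  # detection
    except ValidationError as err:
        log.error("Rejected invalid payload: %s", err.message)  # logging
        # Recovery would go here, e.g. routing the message to a
        # dead-letter queue or notifying the producer (hypothetical).
        return
    log.info("Processing order %s", payload["orderId"])

handle({"carrier": "DHL"})     # fails validation: orderId is missing
handle({"orderId": "ord-42"})  # passes validation and is processed
```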

    Designing for Schema Evolution

    Changes in payload schemas are inevitable as systems evolve, but poorly managed schema evolution can break compatibility between producers and consumers.

    Versioning

    • Include a version field in every payload to allow backward and forward compatibility.
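
As a minimal sketch, version-aware consumption can look like the following; the placement of the version field and both payload shapes are assumptions for illustration:

```python
def display_name(event: dict) -> str:
    version = event.get("version", 1)  # default covers legacy payloads
    if version == 1:
        return event["name"]           # v1 carried a single "name" field
    if version == 2:
        # v2 split the field; the breaking change is absorbed here
        return f'{event["firstName"]} {event["lastName"]}'
    raise ValueError(f"Unsupported payload version: {version}")

print(display_name({"name": "Ada Lovelace"}))  # v1, no version field
print(display_name({"version": 2, "firstName": "Ada", "lastName": "Lovelace"}))
```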

    Deprecation Strategies

    • Avoid breaking changes by supporting older schemas for a transition period.
    • Communicate schema updates clearly to all stakeholders.

    Best Practices

    • Use additive changes (e.g., adding new optional fields) rather than breaking changes (e.g., renaming fields).
    • Test changes in staging environments before deployment.
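
For instance, an additive change can introduce an optional field that tolerant consumers read with a default, so payloads written before the change keep working; the field names below are assumptions:

```python
old_payload = {"orderId": "ord-41"}                              # pre-change
new_payload = {"orderId": "ord-42", "discountCode": "SPRING10"}  # post-change

def discount_code(payload: dict) -> str | None:
    # Tolerant reader: an absent optional field simply means "no discount".
    return payload.get("discountCode")

assert discount_code(old_payload) is None
assert discount_code(new_payload) == "SPRING10"
```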

    Establishing Best Practices for Architecture and Governance

Adopting EDA is a strategy, not a one-time project. Enterprises with advanced EDA maturity report that EDA meets or exceeds expectations, so it’s important to stay focused on the journey. That journey requires adhering to best practices in design, management, and governance, which ensure scalability, maintainability, and long-term system health.

    Event Naming and Categorization

    Consistency in naming and organizing events reduces confusion and simplifies integration.

    Standard Naming Conventions

    • Use a consistent format, such as <Entity><Action> (e.g., OrderCreated, UserRegistered).
    • Avoid overly generic names like Event1.

    Categorization

    • Group related events logically for clarity and easier maintenance.

    Governance for System Growth

As systems evolve, governance ensures that changes do not disrupt existing functionality.

    Version Control for Events and Schemas

    • Maintain a central catalog for event schemas, including version history.
    • Automate schema validation and enforcement during CI/CD pipelines.

    Ownership and Responsibility

    • Assign clear ownership for each event, including responsibility for its schema and lifecycle.
• Example: The order service team owns all Order-related events and ensures compatibility.

    Monitoring and Observability

    • Use tools like Prometheus, Grafana, or OpenTelemetry to monitor event flows and system health.
    • Track metrics like event throughput, latency, and failure rates.

    Event Lifecycle Management

Managing events throughout their lifecycle prevents unnecessary clutter and ensures system efficiency.

    Deprecation and Archiving

    • Retire obsolete events gracefully by notifying consumers and providing transition timelines.
    • Archive events no longer needed for active processing.

    Summary

    This blog explores foundational concepts, event types, and best practices for designing efficient payload schemas and implementing governance strategies in Event-Driven Architecture (EDA).

    Choosing the right event type is essential and depends on factors such as the system’s purpose, the desired level of decoupling, consumer data needs, event frequency and volume, and the capacity of data sources to handle requests. Developers and architects must also clearly distinguish between commands (imperative actions) and events (descriptive occurrences) to avoid design pitfalls that can lead to tight coupling and reduced scalability.

    Equally important is designing well-structured, efficient payload schemas and establishing robust governance principles. These include implementing version control, lifecycle management, and schema evolution strategies to maintain system flexibility and long-term reliability.

    As EDA continues to evolve, tools like Solace PubSub+ Event Portal offer comprehensive support for designing, cataloging, governing, and managing EDA artifacts. For example, enterprises can use such platforms to ensure consistency in event schemas, streamline cross-team collaboration, and accelerate their EDA adoption journey. By leveraging these tools, organizations can effectively scale their systems while maintaining architectural integrity.

    Hari Rangarajan
    Developer Advocate, Office of the CTO

Hari is a dynamic, creative, and innovative professional who is always looking for a challenge and understands different software development processes from both technical and business perspectives.
His main background is Software Engineering in Java, Microservices, and EDA. DevOps and Agile are more about who he is, how he does things, and how he cooperates with people.
Hari’s current focus is on evaluating emerging technologies, practices, and frameworks and identifying how they can best serve the long-term interests and ambitions of his clients.
    He is passionate about how programming can be a key participant in sustainability discussions, identifying points for consideration while making technology choices from a green perspective.