For years, the shopping and buying experience has been accelerating thanks to the advent of online and mobile shopping. Consumers expect to be able to research products at home and on the go; compare features, prices, and ways they can buy; and order the widget they want the moment they decide to buy. For many, the instant gratification of buying in store and walking out with the product is now satisfied by a “buy now” click on their computer or phone, and by increasingly popular omnichannel experiences like BOPIS and BORIS (fun terms for buying online and either picking up or returning in store).
Since anybody anywhere can buy a given product, demand can surge so fast that retailers struggle to keep up. One challenge they face in such times is maintaining an accurate and up-to-the-moment picture of distributed inventory that’s being depleted by in-store and online purchases. If they can’t, they might sell product they don’t have, or – maybe worse? – not sell product they actually do have! My colleague Sandra Thomson talked about this in her blog post Retail in a Rush: Improving the Customer Experience.
“How many times have you heard an in-store employee say, ‘Our online inventory doesn’t always match our in-store inventory, sorry!’ To take advantage of the consumer spending momentum close to major holidays, retailers need to take a phygital approach and work in advance to make sure: 1) their physical and digital operations are integrated; 2) real-time capabilities are prioritized; and 3) systems are enabled to communicate seamlessly and instantly, no matter the system’s location (in the cloud, on-premises, etc.).”
The Unacceptable Risk of Losing Orders
I want to focus on one source of these challenges: many retailers have adapted to the breakneck pace of modern retail by layering innovative new applications and cloud services on top of decades-old legacy applications and systems of record running on servers and mainframes. They’ve then tied it all together using integration and middleware tools like message-oriented middleware (MoM), extract, transform and load (ETL), enterprise service buses (ESBs), and RESTful APIs that rely on batch data exchange, point-to-point integrations, periodic polling, request/reply interactions, and synchronous communications.
Every legacy application and integration tool that wasn’t built to run in real time introduces the risk that a given piece of information – say a customer order, a stockout alert, or a new sale price – doesn’t get where it needs to be, or arrives at a system that can’t handle it and rejects the message. This risk is ever-present, but it’s exacerbated when sudden bursts of orders hit, because legacy systems that can’t process orders as quickly as they come in (we call them “slow consumers”) either reject messages or crash, creating a feedback loop that overwhelms the integration platform. The sketch below illustrates the problem.
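To make the failure mode concrete, here is a minimal sketch in plain Python (not based on any particular retailer’s stack; the class and order names are invented for illustration) of a point-to-point integration where the receiving system has a small, non-durable inbox. Once that inbox is full there is nowhere to park an incoming order, so it’s rejected and lost unless every caller builds its own retry and persistence logic.

```python
from collections import deque

# Hypothetical "slow consumer": a legacy fulfillment system with a small,
# non-durable inbox that can only process one order per processing cycle.
class SlowConsumer:
    def __init__(self, inbox_size=3):
        self.inbox = deque()
        self.inbox_size = inbox_size

    def receive(self, order):
        if len(self.inbox) >= self.inbox_size:
            return False               # inbox full: the message is rejected
        self.inbox.append(order)
        return True

    def process_one(self):
        if self.inbox:
            print("fulfilled", self.inbox.popleft())

fulfillment = SlowConsumer()
lost = []
burst = [f"order-{n}" for n in range(10)]

# Orders arrive three at a time, but the system only drains one per cycle.
for i in range(0, len(burst), 3):
    for order in burst[i:i + 3]:
        if not fulfillment.receive(order):
            lost.append(order)         # nowhere to buffer it: the order is gone
    fulfillment.process_one()

print("lost orders:", lost)
```

Running this drops four of the ten orders, and the only ways a point-to-point caller can avoid that are to retry (adding even more load on an already struggling system) or to build its own durable buffer for every integration.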
The unacceptability of lost orders doesn’t just apply to retailers, but to manufacturers and wholesale distributors as well. As one of the world’s largest brewers, Heineken needs to manage large orders from resellers and retailers around the world.
“Consider this chain of events: A large supermarket chain in New Zealand places orders through our B2B portal, the order is processed in the back-end with the payment information, the beer is delivered, the packing lists in the brewery have to roll out of the label printer, etc. Each of these activities is supported by multiple integrations between front- and back-end. My team makes sure the whole end-to-end process works right. Integration is the glue for all this — it’s what allows all of the processes and applications in the value chain to communicate.”
Guus Groeneweg, Product Owner for Digital Integration, Heineken
Eliminate the Risk of Lost Orders with an Event Mesh
The best way to ensure that messages are never lost is with event-driven architecture (EDA), specifically by routing information in an event-driven manner using an event mesh.
“An event mesh is an architecture layer that allows events from one application to be dynamically routed and received by any other application, no matter where these applications are deployed (no cloud, private cloud, public cloud). You create an event mesh by deploying and connecting event brokers across environments.” Shawn McAllister, CTO, Solace
To set the stage, one of the biggest advantages of EDA is the decoupling of applications. Instead of creating and maintaining direct connections between applications, you connect them all to event brokers that manage the distribution of information from where it’s produced to everywhere it needs to be. The applications, devices or users generating the information don’t need to know where it’s going; they just “publish” the data using topics that indicate what it’s about, and systems that have “subscribed” to that kind of information receive it.
This is called publish/subscribe messaging, also known as “pub/sub,” and it’s the term from which PubSub+ Platform takes its name. The plus means lots of things: our platform supports other message exchange patterns like data streaming and request/reply; all of your favorite APIs and protocols; and multiple qualities of service, like direct messaging and guaranteed messaging (also known as persistent messaging).
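To illustrate just the pattern – this is plain Python for illustration, not the PubSub+ APIs, and the topic scheme and service names are invented for the example – here’s a tiny in-memory broker: publishers tag each event with a topic, subscribers register topic filters, and neither side needs to know the other exists.

```python
from collections import defaultdict
from fnmatch import fnmatch

class Broker:
    def __init__(self):
        self.subscriptions = defaultdict(list)   # topic filter -> callbacks

    def subscribe(self, topic_filter, callback):
        self.subscriptions[topic_filter].append(callback)

    def publish(self, topic, payload):
        # The publisher doesn't know (or care) who receives the event.
        for topic_filter, callbacks in self.subscriptions.items():
            if fnmatch(topic, topic_filter):
                for callback in callbacks:
                    callback(topic, payload)

broker = Broker()

# Downstream systems subscribe to the kinds of events they care about.
broker.subscribe("retail/order/created/*", lambda t, p: print("inventory service saw", t, p))
broker.subscribe("retail/order/*/*",       lambda t, p: print("analytics saw", t, p))

# The storefront just publishes; it has no direct connection to either consumer.
broker.publish("retail/order/created/store42", {"orderId": "A-101", "sku": "WIDGET-9"})
```

In a real event mesh the brokers are shared infrastructure and topic matching is richer (hierarchical topics, wildcards, access control), but the decoupling shown here is the same idea: adding a new consumer is just a new subscription, with no change to the publisher.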
Guaranteed Messaging is the Shock Absorber You Need
The basic idea with guaranteed messaging is that applications can publish information to the event mesh and know it will get where it needs to be, whether the journey takes 20 microseconds, 20 seconds, or 20 hours.
What kind of applications are we talking about? It’s easier to describe what applications aren’t best suited for guaranteed messaging: market data, some real-time gaming, and some sensor networks. Pretty much everything else is best done with guaranteed messaging.
When a customer places an order, for example – that’s an important event that requires you to process their payment, decrement inventory, and either let the customer walk out of the store or initiate the shipping process. You simply can’t lose orders!
So what happens when you experience a rush of orders, or sustained growth that outpaces your ability to scale your systems, driving a flow of orders to applications with different levels of performance? All these events must be delivered to the target application, without loss and at a rate each can deal with.

With guaranteed messaging, when a system publishes information to an event broker, the broker acknowledges receipt and becomes responsible for ensuring delivery. To do so, it stores a copy of the message in what’s called a “message spool” before it attempts delivery. This diagram illustrates the basic flow, and you can learn more about how guaranteed messaging works here.
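As a rough sketch of that store-then-deliver behavior – plain Python, not the actual broker implementation; a real message spool is persisted to disk and replicated rather than held in a Python queue – the broker acknowledges each publish as soon as the message is safely spooled, then delivers to the consumer at whatever rate it can handle:

```python
import queue
import threading
import time

spool = queue.Queue()           # stands in for the broker's message spool

def publish(order):
    spool.put(order)            # store first...
    return "ACK"                # ...then acknowledge, so the publisher can move on

def slow_consumer():
    while True:
        order = spool.get()     # delivered at the consumer's own pace
        time.sleep(0.5)         # pretend each order takes a while to process
        print("fulfilled", order)
        spool.task_done()

threading.Thread(target=slow_consumer, daemon=True).start()

# A burst of orders arrives far faster than the consumer can process them.
for n in range(10):
    print(publish(f"order-{n}"), f"order-{n}")   # every publish is acknowledged instantly

spool.join()                    # nothing is lost; the consumer just catches up later
```

Even though the publisher fires off ten orders almost instantly, the consumer works through them at its own pace and nothing is dropped, because the spool sits between the two and absorbs the burst.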
Conclusion
Thanks to guaranteed messaging, an event mesh acts as a shock absorber: it accepts any number of orders, temporarily stores them so you never lose one, and delivers them to downstream applications – even slow consumers – at a rate they can manage, without putting back-pressure on your fast front-office applications.