One of the real pleasures of my job is working with customers to understand their use cases, design decisions and architecture. Recently I’ve been having a lot of conversations about how to maximize microservices by improving the way they communicate with each other – it’s a fascinating process that usually starts with REST and goes down some interesting roads when real-world requirements and constraints come into play.
For example, one of my customers was leveraging a microservices architecture to implement persistent request-response style interactions from their customer-facing user interface (UI) into their CRM/ERP system and database layer. It sounds simple (a typical enterprise integration pattern for inter-process communication) but, of course, these things never are.
In this case their UI used REST, the CRM/ERP system relied on a PHP module to interface with the outside world using a Webhook-style integration, and between them sat a validation and enrichment module that had been written in Node.js.
Mixing REST, messaging and webhooks to maximize microservices can be a little tricky, but in this case they used Solace’s REST capabilities to simplify the integration: the UI sent a REST request to the validation and enrichment service, which passed the request on to the CRM/ERP system.
REST vs Messaging
The obvious question in such a simple use case is “Why use messaging instead of point-to-point REST?”
My colleague Jonathan Schabowsky did a great job explaining why, but I’ll summarize in my own words:
- Your system may be simple now, but it will get more complicated. Guaranteed! You’ll need to add new features, new services and new reporting and auditing capabilities;
- Messaging gives you run-time decoupling, which means if the receiving service ever fails, the messaging system can hold on to those requests and deliver them to the receiving service once it recovers;
- Messaging gives you service location decoupling, so applications don’t need to know where the other systems they’re communicating with reside. The message router is responsible for understanding where the services are, and routing messages between them;
- Messaging gives you load balancing and fault tolerance without the need for separate load balancers, HA proxies, and other bits of architectural chaff;
- Message queuing allows you to break synchronous, serial paths into asynchronous, parallel paths, reducing fragility and increasing responsiveness;
All of this makes it much easier to add a new service that listens to existing data streams – you simply attach a subscriber! If you need persistence, add a queue and subscribe the queue to the topic. The existing stream doesn’t even need to know the new service is using the data: true application decoupling.
Keeping Your Microservices Architecture Simple
If you were to use a typical messaging system, perhaps using AMQP, you might end up with an architecture like this:
This gives you the messaging advantages we discussed and uses open APIs, which is good, but at the cost of complexity. In addition, we have a serial path from the UI through the Node.js enrichment layer to the broker.
Serial paths are bad because they add fragility and risk – if just one microservice breaks or slows down, the whole path, and with it the customer-facing process, is affected.
RESTing on your inputs
With a message router that accepts REST directly, you can make the incoming path event-driven, enabling greater decoupling and flexibility.
Since Solace does exactly that, all you have to do is define the REST endpoint at a messaging level:
message-vpn <vpn> service rest incoming listen-port <port number>
Then point the UI to this endpoint:
http://<broker IP>:<listen port>/a/topic/hierarchy
This instructs Solace to translate the REST request into a message on the topic a/topic/hierarchy, which by default implies the direct class of service. To request guaranteed messaging instead, simply add the HTTP header:
Solace-Delivery-Mode: Persistent
To tell the message router that we are expecting a response, add one of the following headers:
Solace-Reply-Wait-Time-In-ms: <timeout in ms>
for a synchronous request/response using the default reply-to topic, or:
Solace-Reply-To-Destination: <reply-to topic>
for an asynchronous request/response using a reply-to topic that you specify.
Full details can be found in our REST documentation.
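Putting those pieces together, here’s a rough sketch of what the UI’s call might look like using the standard fetch API. The host, port, topic and 30-second timeout below are illustrative placeholders, not values from the customer’s system:

// Illustrative sketch: POST the payload as a persistent (guaranteed) message
// and wait synchronously for the reply, which comes back as the HTTP response body.
// The host, port, topic and timeout are placeholders.
async function submitRequest(payload: object): Promise<string> {
  const response = await fetch("http://broker.example.com:9000/a/topic/hierarchy", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Solace-Delivery-Mode": "Persistent",      // request guaranteed messaging
      "Solace-Reply-Wait-Time-In-ms": "30000",   // synchronous request/response
    },
    body: JSON.stringify(payload),
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.text(); // the reply published by the consuming service
}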
Now we have a much more microservice-like, decoupled input path that tolerates the failure of an individual service. If more validation or enrichment needs to be added later, it is much easier to do with this approach than with the earlier serial one.
The real clincher for this customer was that this approach makes it easy to further decompose the validation/enrichment service if they ever need to, for example to accommodate higher volumes or boost performance.
Another benefit was that since Solace’s JavaScript and Node.js APIs support guaranteed messaging and the broker translates between protocols, they were able to mix and match protocols: REST in, JavaScript out to the Node.js application.
Webhooks on the Output Path
Next they needed to tackle the output path: reading requests from the queue and converting each one into a REST request. That requires something to read from the queue, hold the message, issue the REST request, wait for an OK, and acknowledge this back to the message router. The customer had a .NET application to do this, backed by Redis in case the application failed.
But that’s a service that doesn’t add any value, plus another serial path and another layer of persistence. This webhook service started out as about 100 lines of code, but had ballooned to a few thousand lines, and they already had more functionality they were looking to add.
Fortunately, there is a much better approach. In addition to incoming REST messages, Solace also supports outgoing REST messages via POST in the traditional Webhook style. Ordering is preserved since messages are being read from a queue, and the existence of multiple REST endpoints bound to the queue provides built-in load balancing. Once I’d explained this to the customer there was a genuine sense of relief in the room: a whole component could be eliminated, and the architecture simplified.
REST Delivery Point
To deliver messages via REST, Solace encapsulates the necessary mechanisms into a broker-administered object called a REST Delivery Point, or RDP. The RDP is essentially a messaging webhook. It holds two types of objects:
- A queue binding. This reads messages from a queue and associates them with REST-specific attributes such as the target endpoint, e.g. /endpoint/some/verb;
- A REST consumer. This creates HTTP connections to your REST endpoint, so it’s here that you fill in details such as the remote address, the number of simultaneous connections to be made, etc.
Multiple REST consumers can connect to multiple queue bindings and vice versa, giving you the flexibility to create whatever load balancing or fan-out to multiple targets you need.
To create an RDP:
message-vpn <vpn> rest create rest-delivery-point <RDP name>
Add a queue binding:
create queue-binding <queue> post-request-target <endpoint target, e.g. /target/endpoint>
And set up the REST consumer:
create rest-consumer <name> remote host <address> remote port <port number>
Here’s what this looks like:
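On the other side of the RDP, the REST consumer just needs an HTTP endpoint to POST to. Here’s a minimal sketch of such an endpoint in Node.js; the path and port are placeholders and would need to match the post-request-target and the REST consumer’s remote host and port configured above:

// Minimal sketch of a downstream endpoint for the RDP to POST messages to.
// The path and port are placeholders; match them to the queue binding's
// post-request-target and the REST consumer's remote host/port.
import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/target/endpoint") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Hand the message off to the downstream system here.
      console.log("Message delivered by RDP:", body);
      // A 200 response signals successful delivery, allowing the broker to
      // acknowledge the message and remove it from the queue.
      res.writeHead(200);
      res.end("OK");
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(8080);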
Introducing a New Requirement
Remember the first point I made about the benefit of using messaging instead of point-to-point REST? I said your system would get more complicated. I don’t know if you believed me or not, but in this case the customer needed to add a new service to provide audit capabilities for the transactions that have been processed. I hate to say I told ya so, but I will.
They also knew they’d eventually need to capture every single event and persist them all to a database. Those who’ve struggled with MiFID II will find this familiar. With a REST point-to-point architecture, they would have had to create the audit service and modify all their senders to create two copies of every event, sending one to the original destination and one to the audit service.
This is another place where the power of messaging comes into play. When using messaging you don’t need to make any changes to the publisher, and existing receiving services don’t need to know anything about the change, either. All they need to do is create a new queue for the audit application to read from.
And how do they make sure the audit queue gets all the messages it needs? Simple: map that queue to the relevant topics:
We have, however, missed something. We know the audit queue needs to hold a copy of all events, and we’ve added listeners for the events we know about. But what happens when a new service is created? We’d have to remember to add a subscription to the audit queue.
With messaging, you can avoid this problem by using subscription wildcards: in Solace, a subscription of > matches every topic. Since our audit application must listen to everything, we simply subscribe to everything:
message-spool message-vpn audit queue audit_queue subscription topic >
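For completeness, here’s a rough sketch of what the audit application’s queue consumer might look like with the solclientjs API. The connection details are placeholders, and the method and property names follow Solace’s published samples from memory, so treat it as a starting point rather than a definitive implementation:

// Rough sketch of the audit consumer, modelled on the solclientjs samples.
// Connection details are placeholders; check the API reference for your version.
const solace = require("solclientjs");

const factoryProps = new solace.SolclientFactoryProperties();
factoryProps.profile = solace.SolclientFactoryProfiles.version10;
solace.SolclientFactory.init(factoryProps);

const session = solace.SolclientFactory.createSession({
  url: "ws://broker.example.com:8008", // placeholder broker address
  vpnName: "audit",
  userName: "audit-app",               // placeholder credentials
  password: "secret",
});

session.on(solace.SessionEventCode.UP_NOTICE, () => {
  const consumer = session.createMessageConsumer({
    queueDescriptor: { name: "audit_queue", type: solace.QueueType.QUEUE },
    acknowledgeMode: solace.MessageConsumerAcknowledgeMode.CLIENT,
  });
  consumer.on(solace.MessageConsumerEventName.MESSAGE, (message: any) => {
    // Persist the event to the audit store here...
    console.log("Audit event:", message.getBinaryAttachment());
    message.acknowledge(); // remove it from audit_queue once safely stored
  });
  consumer.connect();
});

session.connect();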
This generated another sigh of relief in the room, as the new audit requirement didn’t seem so onerous anymore – write the application, add the required queue and subscription, call it a day. No matter how the message flows change in the future, the audit application will continue to work without any code or configuration changes.
REST easy, REST assured!
By understanding the power of messaging this customer was able to dramatically simplify their architecture while increasing the reliability, robustness, and responsiveness of their platform. And they’ve made their system much more flexible, as it’s now trivial to add new services…from a communications point of view, anyway.
Solace’s REST Delivery Point (RDP) provides a messaging webhook, allowing easy integration with legacy and third-party applications, and it let us remove the following components and requirements from their system:
- HA Proxy for load balancing REST services
- Redis for persistence in the webhook service
- Webhook service
- Service discovery mechanism
That’s quite some saving for an architecture that started with just 8 components!
As a last point: whenever you find an integration or data movement problem, it’s worth picking up the phone and talking to us. You never know how much time and effort you might save!