In this edition of Solace Says, I interviewed Jonathan Schabowsky of Solace’s office of the CTO, who recently wrote a blog post about event-driven microservices that’s turned out to be quite popular. In that post, and this video, Jonathan describes the characteristics most microservices share, what they’re commonly used for, and how they share information with REST and/or messaging technology.
Jonathan: Yeah, so it’s actually kind of interesting. Microservices, in and of itself, doesn’t really have an industry-accepted definition. So in my opinion, based on a lot of research and my own experience, there are four key central tenets of what an actual microservice is. First of all, they generally need to be small in size and specifically single in purpose. Second, they need to communicate using technology-agnostic protocols. Thirdly, be independently deployable, and fourth, sort of tied into that, released via automated processes.
Dave: Obviously, and to your second point, these microservices all need to communicate together. In the blog you mention a quote from Martin Fowler about smart endpoints and dumb pipes. Could you expand a little bit on what that actually means?
Jonathan: Yeah, so it’s really a central tenet of microservices that you shouldn’t have unneeded logic occurring in the communications tier of your applications. What we’re really specifically talking about there is that traditionally, in SOA systems, enterprise service buses really kind of ruled the day. A lot of logic went into those enterprise service buses as it relates to protocol transformation and message transformation, and it was really hard to troubleshoot those kinds of environments when things went wrong. So by having a simpler communication methodology, it really allows these systems to not only still communicate effectively but be much more simple and much more robust because of it.
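The “smart endpoints, dumb pipes” idea can be sketched in a few lines. This is a minimal, illustrative toy (not the Solace API, and the `DumbPipe` and `order_endpoint` names are made up for this example): the pipe only forwards raw messages, while all parsing and business logic lives in the endpoint.

```python
import json

# Illustrative sketch of "smart endpoints, dumb pipes": the pipe only
# delivers messages; all transformation logic lives in the endpoints.
class DumbPipe:
    """Knows nothing about the payload -- it just moves messages along."""
    def __init__(self):
        self._handlers = []

    def connect(self, handler):
        self._handlers.append(handler)

    def send(self, raw_message):
        for handler in self._handlers:
            handler(raw_message)

# The *endpoint* owns message transformation (here, JSON decoding),
# instead of burying that logic in an ESB-style communications tier.
received = []
def order_endpoint(raw_message):
    order = json.loads(raw_message)       # smart endpoint does the parsing
    received.append(order["id"].upper())  # ...and its own business logic

pipe = DumbPipe()
pipe.connect(order_endpoint)
pipe.send(json.dumps({"id": "a1"}))
print(received)  # → ['A1']
```

Contrast this with an ESB, where the pipe itself would try to transform and route the message, which is exactly the hard-to-troubleshoot logic Jonathan describes.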
Dave: Another point that you brought up in your first section was around agnostic messaging and transfer protocols. And you spent a lot of time talking about REST. Can you delve a little bit into where REST fits best into this world of microservices?
Jonathan: Obviously REST is becoming incredibly popular, and really for good reason. It’s a very natural way to think, and it utilizes the best of what HTTP does well. Because of that, a lot of people have really flocked to using REST as that technology-agnostic protocol. But one of the big points that blog was trying to make is that while REST is a really great capability, there are other tools in the toolbox that you can use in order to build the best microservices architecture possible.
Dave: That sort of leads to the next step that I read about in this blog: event-driven microservices. I’ve also seen it called event-oriented microservices, and it seems to be very much its own subcategory. Can you explain a little bit about what event-driven microservices are really about?
Jonathan: Event-driven microservices really leverage the concepts of microservices, again, having small, independent units of code that are deployed independently, but that ultimately communicate in an event-driven way. So instead of having everything be request/reply, where if you have a service that needs to call another service you end up with a cascading invocation chain that’s all blocking, it allows applications to communicate asynchronously as events come in. That allows the end-user experience, which is what’s critical in a lot of applications, to be as responsive and as performant as possible.
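The contrast between a blocking request/reply chain and event-driven communication can be sketched in code. This is a hedged, in-process toy (not the Solace API; the service names `order`, `billing`, `shipping` are illustrative, and a real broker would deliver events asynchronously over the network rather than calling handlers inline):

```python
# Toy illustration of cascading request/reply vs. event-driven pub/sub.
class EventBus:
    """Minimal in-process bus; a real broker delivers over the network."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        # Fire-and-forget from the publisher's point of view.
        for handler in self._subscribers.get(topic, []):
            handler(event)

# --- Request/reply style: each service blocks on the next one ---
def shipping_service_sync(order):
    return f"shipped {order}"

def billing_service_sync(order):
    # billing must wait for shipping before it can return
    return f"billed {order}; " + shipping_service_sync(order)

def order_service_sync(order):
    # the caller blocks on the whole cascading invocation chain
    return f"ordered {order}; " + billing_service_sync(order)

# --- Event-driven style: services react independently to one event ---
bus = EventBus()
log = []
bus.subscribe("order.created", lambda e: log.append(f"billing saw {e}"))
bus.subscribe("order.created", lambda e: log.append(f"shipping saw {e}"))

print(order_service_sync("A1"))     # one blocking call chain
bus.publish("order.created", "A1")  # publisher doesn't wait on consumers
print(log)
```

In the request/reply version, the order service cannot return until billing and shipping both finish; in the event-driven version, the publisher hands off the event and each subscriber reacts on its own.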
Dave: So event driven microservices are definitely a different category, but do they have different requirements for the way they message?
Jonathan: In the best-case scenario, what you really want your microservices to do is be very loosely coupled. Be non-blocking for performance reasons. And, again, be single purpose, so that you’re not adding a whole bunch of complex failure-scenario handling into your code. They should really just do the job that they’re supposed to do and ultimately hand off to another service if necessary to perform additional functionality. And one of the challenges becomes that when the work is decomposed across different teams, and one team’s writing one microservice and another team’s writing another, that’s really an effective way to distribute the work and get work done. But where the challenge comes in is when it’s REST in between: the group that’s ultimately calling that other service has to know a lot about how that service is going to handle different scenarios. How long should I wait for a response to come back? What should I do when an error occurs? Should I just reinvoke it? How long should I wait before I reinvoke it? There’s a lot that you have to consider in order to ensure that you’re building a robust architecture.
Dave: This feels like it really fits into a category near and dear to Solace. How would things like a message bus or message router environment aid microservices?
Jonathan: Again, you know, we’re not here to bash REST. REST has its place, certainly with synchronous interactions, and absolutely on externally facing APIs. But when it comes down to linking up your different microservices within your architecture, changing the pattern that you’re thinking in, thinking more about asynchronous interactions and how you could make use of events, is really what’s key. And messaging really empowers those asynchronous interactions; it’s just a perfect setup for writing really robust microservices.
Dave: You know, sort of crossing the streams a little bit here, microservices have also come up with the rise of cloud. How do we see cloud and microservices working together?
Jonathan: Because microservices are independently deployable, and they’re small, the cloud is really a good neighbor, if you will, to microservices, because you can scale your microservices up and down just like you can scale your cloud usage up and down dynamically. And that’s really key for microservices. But if you’ve improperly architected your microservices tier, where you’re making a lot of use of REST internally and you’ve got a lot of blocking, you sort of have to scale all of your microservices up at the same time in order to compensate for one slow microservice. By using a more event-driven architecture, you can actually just scale up the single microservice that’s taking more time, in order to provide it more memory, more CPU, or what have you, more resources at the cloud level. Get that workload done, survive whatever that event is, you know, maybe it’s a new product offering on your website, get through that, and then scale down in order to save on the total cost of ownership for your system.
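Scaling just the one slow service works because event-driven consumers pull work from a queue. A minimal, Solace-agnostic sketch (the worker and queue names are illustrative): adding instances of a single slow consumer drains its backlog without touching any other service.

```python
import queue
import threading

# Illustrative sketch: event-driven services pull from a queue, so you can
# scale just the slow consumer by adding workers, instead of scaling every
# service in a blocking call chain at once.
work = queue.Queue()
results = []
results_lock = threading.Lock()

def slow_consumer(worker_id):
    # Each worker instance drains events until the backlog is empty.
    while True:
        try:
            event = work.get_nowait()
        except queue.Empty:
            return
        with results_lock:
            results.append((worker_id, event))
        work.task_done()

# A backlog builds up, e.g. during a new product launch on your website.
for event in range(6):
    work.put(event)

# Scale up: three instances of the one slow microservice share the backlog.
workers = [threading.Thread(target=slow_consumer, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(e for _, e in results))  # every event handled exactly once
```

When the burst passes, the extra workers simply find the queue empty and exit, which mirrors scaling back down to save on total cost of ownership.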
Dave: Jonathan, thanks for your time today. This has been really fascinating. Microservices are clearly the way the future is going, and we hope that our audience and readers both take the chance to learn more about them. Enjoy the rest of your time there in New York; we appreciate you being with us today. Remember, you can find Jonathan’s blog at solace.com/blog, and our technology is always available at dev.solace.com. Thanks again, and for those who are watching, we look forward to seeing you on the next Solace Says.