Solace lets you establish a “data river” that continuously flows into your data lake while simultaneously delivering it to applications, analytics engines and users that need it in real-time.
Using tools like Flume, Spark and Storm to ingest data into file datastores like Hadoop HDFS, or NoSQL datastores like HBase, Cassandra or MongoDB, organizations can capture, store and analyze all kinds of potentially valuable information.
Their business is powered by applications written in many languages, linked with a variety of ESBs, API gateways and web servers, running in diverse cloud, legacy and hybrid environments. Many of the interactions between these applications, and the logs they generate, are events that should be captured and analyzed, and that would be of interest to other applications in real-time.
Architect your system horizontally in the cloud or with high-capacity appliances; either way, Solace routes millions of messages a second between hundreds of thousands of publishers and consumers.
High-speed fanout and sophisticated shock absorption let you stream data to applications and analytics engines in real-time while also delivering it to non-real-time systems for batch analytics and queries.
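To make the fanout-plus-shock-absorption idea concrete, here is a minimal sketch in plain Python. It is a toy illustration of the pattern, not Solace's actual API: each published event is delivered immediately to real-time subscribers via callbacks, while a buffered queue (the hypothetical `hdfs-ingest` queue below) absorbs the same stream so a slower batch consumer can drain it at its own pace.

```python
from collections import deque

class Broker:
    """Toy broker illustrating fanout with a buffering 'shock absorber'.
    Not Solace's API -- just the pattern described in the text."""

    def __init__(self):
        self.callbacks = []   # real-time subscribers (immediate delivery)
        self.queues = {}      # buffered queues for slower consumers

    def subscribe(self, callback):
        self.callbacks.append(callback)

    def create_queue(self, name):
        self.queues[name] = deque()
        return self.queues[name]

    def publish(self, event):
        for cb in self.callbacks:        # real-time path: deliver now
            cb(event)
        for q in self.queues.values():   # batch path: buffer for later
            q.append(event)

broker = Broker()
seen_live = []
broker.subscribe(seen_live.append)                 # real-time consumer
batch_q = broker.create_queue("hdfs-ingest")       # hypothetical name

for i in range(5):
    broker.publish({"id": i})

# The batch side drains whenever it is ready, in bulk:
batch = [batch_q.popleft() for _ in range(len(batch_q))]
```

The key point the sketch captures is that neither consumer slows the other down: the real-time callback sees every event immediately, while the queue decouples the batch system's pace from the publisher's.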
Out-of-the-box integration with popular big data products and open source projects makes it easy to incorporate big data into your data movement infrastructure.
Use your favorite open APIs and protocols to aggregate big data from, and distribute it to, all of your applications, IoT devices and users.
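Distribution like this is typically driven by hierarchical topics with wildcard subscriptions, so one published event can reach many interested consumers. The matcher below is a simplified sketch of that convention, not Solace's full topic syntax: it assumes `/`-separated levels, `*` matching exactly one level, and a trailing `>` matching one or more remaining levels.

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Simplified topic-wildcard matcher (illustrative only):
    levels are '/'-separated, '*' matches exactly one level,
    and a trailing '>' matches one or more remaining levels."""
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == ">" and i == len(sub_levels) - 1:
            return len(top_levels) > i   # '>' needs at least one more level
        if i >= len(top_levels):
            return False                 # topic ran out of levels
        if s != "*" and s != top_levels[i]:
            return False                 # literal level mismatch
    return len(sub_levels) == len(top_levels)
```

For example, a subscription like `iot/*/temperature` would match `iot/device42/temperature` from any device, while `analytics/>` would sweep up everything under the `analytics` hierarchy for a data-lake feed (both topic names are hypothetical).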
Big data lakes and streaming analytics engines are important consumers, and Solace treats them just like every other consumer of your critical data.
The Solace approach eliminates the platform proliferation problem associated with using ingestion-only solutions, keeping your architecture simple for easy manageability, maximum robustness and elastic scalability.