Using tools like Flume, Spark and Storm to ingest data into file-based datastores like Hadoop HDFS, or NoSQL datastores like HBase, Cassandra and MongoDB, they can capture, store and analyze all kinds of potentially valuable information.
Their business is powered by applications written in many languages, linked with a variety of ESBs, API gateways and web servers, running in diverse cloud, legacy and hybrid environments. Many of the interactions between these applications, and the logs they generate, represent events that should be captured and analyzed, and that would be of interest to other applications in real time.
Whether you architect your system horizontally in the cloud or vertically with high-capacity appliances, Solace routes millions of messages a second among hundreds of thousands of publishers and consumers.
High-speed fanout and sophisticated shock absorption let you stream data to applications and analytics engines in real time while also delivering it to non-real-time systems for batch analytics and queries.
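The fanout-plus-buffering pattern described above can be sketched in miniature. This is a toy illustration, not the Solace API: names like `FanoutBroker` are hypothetical, and a real broker adds persistence, flow control and wire protocols. The key idea is that each subscriber gets its own buffer, so a slow (batch) consumer's backlog never blocks the publisher or the real-time consumers.

```python
from collections import deque, defaultdict

class FanoutBroker:
    """Toy topic-based fanout with per-subscriber buffering ("shock absorption").

    Hypothetical sketch for illustration only -- not the Solace API.
    """

    def __init__(self, max_buffer=1000):
        self.max_buffer = max_buffer
        self.subscribers = defaultdict(list)  # topic -> list of subscriber queues

    def subscribe(self, topic):
        """Register a subscriber on a topic; returns its private buffer."""
        queue = deque(maxlen=self.max_buffer)
        self.subscribers[topic].append(queue)
        return queue

    def publish(self, topic, message):
        """Fan a message out to every subscriber's private buffer.

        A burst of publishes accumulates in each consumer's own queue,
        so slow consumers absorb the shock without slowing anyone else.
        """
        for queue in self.subscribers[topic]:
            queue.append(message)

broker = FanoutBroker()
realtime = broker.subscribe("orders")   # drained continuously
batch = broker.subscribe("orders")      # drained on a batch schedule

for i in range(5):
    broker.publish("orders", f"order-{i}")

print(realtime.popleft())  # real-time consumer reads immediately: order-0
print(len(batch))          # batch consumer's buffer absorbed all 5
```

In a real deployment the buffering, replay and delivery guarantees live inside the broker rather than in application code, which is what lets one published stream feed both the real-time and batch sides without coupling them.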
Solace takes a holistic approach to big data aggregation by creating a “data river” that carries application interactions into your data lake while simultaneously delivering that data to applications, analytics engines and users.
To Solace, big data lakes and streaming analytics platforms are simply consumers, treated like every other consumer of your critical data.
The Solace approach eliminates the platform proliferation problem associated with using ingestion-only solutions, keeping your architecture simple for easy manageability, maximum robustness and elastic scalability.