New Integration Guide for Apache Flink Now Available

I recently created a new integration guide for an Apache project called Flink. In the words of the Flink site itself, “Flink is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications.” You can get a slightly deeper introduction to Flink here, but in summary Flink Streaming supports high-throughput, fault-tolerant stream processing of live data streams for continuous processing of unbounded datasets.

The Flink Streaming generic SourceFunction is a simple interface that allows third-party applications to push data into Flink efficiently. This integration guide includes a simple example JMS consumer library with basic SourceFunction&lt;OUT&gt; instances for JMS queues and topics. It’s called flink-jms-connector, and it’s totally spec-compliant, so you can use it with other JMS providers, unlike many of the oddly vendor-specific open source “JMS” modules you’ll find. The library lets you plug in your own function that translates inbound JMS Message objects to the target type consumed by Flink, and that translation function is all the code you’ll need to write to use this library.
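To make the translation idea concrete, here’s a minimal sketch of such a function. The interface and method names below (`JMSTranslator`, `translate`) are assumptions for illustration, not necessarily the connector’s actual API, and a tiny stand-in interface replaces `javax.jms.TextMessage` so the snippet is self-contained; with the real library you would implement its translator type against actual JMS `Message` objects.

```java
public class TranslatorSketch {

    // Minimal stand-in for a JMS text message, so this sketch runs on its own.
    // Real code would receive a javax.jms.TextMessage from the connector.
    interface SimpleTextMessage {
        String getText();
    }

    // The pattern the connector expects: a function mapping an inbound
    // JMS message to the target type Flink will consume downstream.
    // (Hypothetical name, for illustration only.)
    interface JMSTranslator<OUT> {
        OUT translate(SimpleTextMessage msg);
    }

    // Example target type: a small POJO for downstream Flink operators.
    static final class Quote {
        final String symbol;
        final double price;
        Quote(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }
    }

    // Example translation: parse a comma-separated "symbol,price" payload.
    static final JMSTranslator<Quote> QUOTE_TRANSLATOR = msg -> {
        String[] parts = msg.getText().split(",");
        return new Quote(parts[0], Double.parseDouble(parts[1]));
    };

    public static void main(String[] args) {
        SimpleTextMessage msg = () -> "SOLACE,42.5";
        Quote q = QUOTE_TRANSLATOR.translate(msg);
        System.out.println(q.symbol + " @ " + q.price);  // prints: SOLACE @ 42.5
    }
}
```

Once a translator like this exists, the rest of the wiring (connection details, queue or topic names) is configuration rather than code.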

I wrote this guide for developers who are already familiar with the basics of both Flink and JMS. From there it walks you through a fairly technical but proven, best-practices-based approach to setting up Flink Streaming source functions to consume JMS messages. Toward that end I cover the basics of integrating Flink with our JMS API, explain some of the finer points of achieving high availability through fault-tolerant pairing of message routers, and offer a few tips that I think will help you debug any problems that arise. I hope you find it helpful!