Snowflake
docker pull solace/solace-pubsub-connector-snowflake
Asset Type:
Micro-Integrations
Provider:
Solace
Category:
Analytics & Stream Processing
Endpoint:
Target
Support:
Solace – Support Available
Platform:
Cloud-Managed, Self-Managed

The Solace Micro-Integration for Snowflake streams source events from a Solace event broker to a Snowflake database via the Snowflake Snowpipe Streaming API (the default) or the Snowpipe API using stages. The micro-integration provides high-performance ingestion of event mesh data into Snowflake. It does not provide Snowflake -> Solace data flows.

This integration is configured via “workflows” in your micro-integration instance. A workflow is a source-to-processing-to-target data pipeline configured within the micro-integration runtime. Each workflow (you can have up to 20 defined per micro-integration instance) defines a source (e.g., a Solace queue), any header processing necessary for the target, and a target destination. When active, events stream across these workflows.
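As a rough illustration of the source-to-processing-to-target shape described above, a workflow definition might look like the following properties sketch. All property names here are placeholders invented for illustration, not the micro-integration's actual configuration schema; consult the product documentation for the real keys.

```
# Hypothetical sketch only -- key names are illustrative, not documented.
# Workflow 0: consume from a Solace queue and stream into a Snowflake table.
workflows[0].source.queue=orders.ingest.queue      # Solace queue to bind (example name)
workflows[0].target.database=SALES_DB              # Snowflake database (example)
workflows[0].target.schema=PUBLIC
workflows[0].target.table=ORDERS_RAW
workflows[0].ingest.mode=snowpipe-streaming        # the default mode per the text above
# Up to 20 such workflows may be defined per instance (workflows[1], ...).
```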

The micro-integration is available as:

  • A runnable package based on a Java JAR file including a start script
  • A container image suitable for running in a container runtime such as Docker or Podman
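For the container option, a minimal run might look like the sketch below. The configuration mount point (`/config`) and management port (`8090`) are assumptions for illustration; the actual paths and ports are documented with the product.

```
# Pull the image (as shown at the top of this page).
docker pull solace/solace-pubsub-connector-snowflake

# Run it with a mounted configuration directory and an exposed management port.
# The /config mount point and port 8090 are illustrative assumptions,
# not documented defaults.
docker run -d --name snowflake-mi \
  -v "$(pwd)/config:/config" \
  -p 8090:8090 \
  solace/solace-pubsub-connector-snowflake
```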

This is a self-contained micro-integration. All self-contained micro-integrations share a common architecture and provide a number of enterprise services such as:

  • A local management server accessible over HTTP(s) and JMX exposing endpoints for:
    • Health check
    • Metrics monitoring
    • Log file access
    • Workflow administration (start & stop workflows)
  • A common set of configuration options for:
    • Logging – log levels, log file size, archive and rollover rules, appenders to export to other log services
    • Security setup for management endpoints – authentication and authorization to the endpoints, TLS for HTTPS endpoints
  • Various runtime deployment options:
    • Standalone
    • Active_Standby – for redundancy (you can have more than one standby instance)
    • Active_Active – for horizontal scaling (where the source of data will support multiple active consumers such as a non-exclusive queue)
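A running instance's management server can then be polled by external tooling. The sketch below uses curl with a placeholder port and placeholder paths (`/health`, `/metrics`); these are assumptions for illustration, not documented routes.

```
# Placeholder port and paths -- check the micro-integration's management
# documentation for the actual routes.
curl -s http://localhost:8090/health     # health check endpoint
curl -s http://localhost:8090/metrics    # metrics monitoring endpoint
```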