Connect to Kafka Data in HULFT Integrate



Connect to Kafka as a JDBC data source in HULFT Integrate

HULFT Integrate is a modern data integration platform that provides a drag-and-drop user interface for building cooperation flows, converting data, and processing it, making complex data connections easier than ever to execute. When paired with the CData JDBC Driver for Apache Kafka, HULFT Integrate can work with live Kafka data. This article walks through connecting to Kafka and moving the data into a CSV file.

With built-in optimized data processing, the CData JDBC Driver offers unmatched performance for interacting with live Kafka data. When you issue complex SQL queries to Kafka, the driver pushes supported SQL operations, like filters and aggregations, directly to Kafka and utilizes the embedded SQL engine to process unsupported operations client-side (often SQL functions and JOIN operations). Its built-in dynamic metadata querying allows you to work with and analyze Kafka data using native data types.
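As a quick illustration of that metadata layer, any Java application can treat the driver as a standard JDBC data source. The minimal sketch below (using the placeholder connection values from the steps later in this article; replace them with your own) lists the tables the driver derives from your Kafka topics. The driver JAR, cdata.jdbc.apachekafka.jar, must be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class KafkaMetadataCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder URL from this article; adjust the values for your environment.
            String url = "jdbc:apachekafka:User=admin;Password=pass;"
                    + "BootstrapServers=https://localhost:9091;Topic=MyTopic;";
            try (Connection conn = DriverManager.getConnection(url);
                 // Dynamic metadata: Kafka data surfaces as relational tables.
                 ResultSet tables = conn.getMetaData().getTables(null, null, "%", null)) {
                while (tables.next()) {
                    System.out.println(tables.getString("TABLE_NAME"));
                }
            }
        }
    }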

Enable Access to Kafka

To enable access to Kafka data from HULFT Integrate projects:

  1. Copy the CData JDBC Driver JAR file, cdata.jdbc.apachekafka.jar (and the license file, cdata.jdbc.apachekafka.lic, if it exists), to the jdbc_adapter subfolder for the Integrate Server
  2. Restart the HULFT Integrate Server and launch HULFT Integrate Studio

Build a Project with Access to Kafka Data

Once you copy the driver files, you can create a project with access to Kafka data. Start by opening Integrate Studio and creating a new project.

  1. Name the project
  2. Ensure the "Create script" checkbox is checked
  3. Click Next
  4. Name the script (e.g., ApacheKafkaToCSV)

Once you create the project, add components to the script to copy Kafka data to a CSV file.

Configure an Execute Select SQL Component

Drag an "Execute Select SQL" component from the Tool Palette (Database -> JDBC) into the Script workspace.

  1. In the "Required settings" tab for the Destination, click "Add" to create a new connection for Kafka. Set the following properties:
    • Name: Kafka Connection Settings
    • Driver class name: cdata.jdbc.apachekafka.ApacheKafkaDriver
    • URL: jdbc:apachekafka:User=admin;Password=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;

      Built-in Connection String Designer

      For assistance constructing the JDBC URL, use the connection string designer built into the Kafka JDBC Driver. Either double-click the JAR file or execute it from the command line.

      java -jar cdata.jdbc.apachekafka.jar

      Fill in the connection properties and copy the connection string to the clipboard.

      Set the BootstrapServers and Topic properties to specify the address of your Apache Kafka server and the topic you would like to interact with.

      Authorization Mechanisms

      • SASL Plain: Set AuthScheme to 'Plain' and specify the User and Password properties.
      • SASL SSL: Set AuthScheme to 'Scram', set UseSSL to true, and specify the User and Password properties.
      • SSL: Set UseSSL to true and specify the SSLCert and SSLCertPassword properties.
      • Kerberos: Set AuthScheme to 'Kerberos' and specify the User and Password properties.

      You may be required to trust the server certificate; in such cases, specify the TrustStorePath and TrustStorePassword properties.
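      For illustration, example URLs for each scheme might look like the following (placeholder server, topic, and credential values):

      jdbc:apachekafka:AuthScheme=Plain;User=admin;Password=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;
      jdbc:apachekafka:AuthScheme=Scram;UseSSL=true;User=admin;Password=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;
      jdbc:apachekafka:UseSSL=true;SSLCert=mycert.pem;SSLCertPassword=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;
      jdbc:apachekafka:AuthScheme=Kerberos;User=admin;Password=pass;BootstrapServers=https://localhost:9091;Topic=MyTopic;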

  2. Write your SQL statement. For example:
    SELECT Id, Column1 FROM SampleTable_1
  3. Click "Extraction test" to ensure the connection and query are configured properly
  4. Click "Execute SQL statement and set output schema"
  5. Click "Finish"

Configure a Write CSV File Component

Drag a "Write CSV File" component from the Tool Palette (File -> CSV) onto the workspace.

  1. Set a file to write the query results to (e.g. SampleTable_1.csv)
  2. Set "Input data" to the "Select SQL" component
  3. Add columns for each field selected in the SQL query
  4. In the "Write settings" tab, check the checkbox to "Insert column names into first row"
  5. Click "Finish"

Map Kafka Fields to the CSV Columns

Map each column from the "Select" component to the corresponding column for the "CSV" component.

Finish the Script

Drag the "Start" component onto the "Select" component and the "CSV" component onto the "End" component. Build the script and run the script to move Kafka data into a CSV file.

Download a free, 30-day trial of the CData JDBC Driver for Apache Kafka and start working with your live Kafka data in HULFT Integrate. Reach out to our Support Team if you have any questions.