Automated Continuous HDFS Replication to Apache Kafka

Use CData Sync for automated, continuous, customizable HDFS replication to Apache Kafka.

Always-on applications rely on automatic failover capabilities and real-time data access. CData Sync integrates live HDFS data into your Apache Kafka instance, allowing you to consolidate all of your data into a single location for archiving, reporting, analytics, machine learning, artificial intelligence, and more.

Configure Apache Kafka as a Replication Destination

Using CData Sync, you can replicate HDFS data to Kafka. To add a replication destination, navigate to the Connections tab.

  1. Click Add Connection.
  2. Select Apache Kafka as a destination.
  3. Enter the necessary connection properties:

    • Bootstrap Servers - Enter the address of the Apache Kafka Bootstrap servers to which you want to connect.
    • Auth Scheme - Select the authentication scheme. Plain is the default. For the Plain scheme, specify your login credentials in the next two properties:
    • User - Enter the username that you use to authenticate to Apache Kafka.
    • Password - Enter the password that you use to authenticate to Apache Kafka.
    • Type Detection Scheme - Specify the detection-scheme type (None, RowScan, SchemaRegistry, or MessageOnly) that you want to use. The default type is None.
    • Use SSL - Specify whether you want to use the Secure Sockets Layer (SSL) protocol. The default value is False.
  4. Click Test Connection to ensure that the connection is configured properly. (For an independent check outside of Sync, see the client sketch after these steps.)
  5. Click Save Changes.
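
The properties above map onto standard Kafka client settings, so you can also verify them independently of CData Sync with any Kafka client. Below is a minimal sketch using the Python confluent-kafka library; the broker address and credentials are placeholders, and the configuration assumes the Plain auth scheme with Use SSL set to False.

    # Standalone connectivity check for the Kafka destination (sketch).
    # Assumes the Plain auth scheme with Use SSL = False; all values are placeholders.
    from confluent_kafka.admin import AdminClient

    conf = {
        "bootstrap.servers": "kafka-broker:9092",  # value of Bootstrap Servers
        "security.protocol": "SASL_PLAINTEXT",     # use SASL_SSL when Use SSL is True
        "sasl.mechanism": "PLAIN",                 # matches the Plain auth scheme
        "sasl.username": "kafka_user",             # value of User
        "sasl.password": "kafka_password",         # value of Password
    }

    admin = AdminClient(conf)
    # list_topics() raises an exception if the brokers are unreachable
    # or the credentials are rejected.
    metadata = admin.list_topics(timeout=10)
    print("Connected. Topics visible to this user:", sorted(metadata.topics))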

Configure the HDFS Connection

You can configure a connection to HDFS from the Connections tab. To add a connection to your HDFS instance, follow the steps below.

  1. Click Add Connection.
  2. Select a source (HDFS).
  3. Configure the connection properties.

    To connect and authenticate, set the following connection properties:

    • Host: Set this value to the host of your HDFS installation.
    • Port: Set this value to the port of your HDFS installation. Default port: 50070
  4. Click Connect to ensure that the connection is configured properly. (For an independent check of the endpoint, see the WebHDFS sketch after these steps.)
  5. Click Save Changes.
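
The default port, 50070, is the NameNode's HTTP port for the WebHDFS REST interface on Hadoop 2.x clusters (Hadoop 3.x moved it to 9870). As an independent check that the endpoint is reachable, you can issue a WebHDFS call directly. The sketch below uses Python's requests library; the host name is a placeholder.

    # Standalone reachability check against the WebHDFS REST endpoint (sketch).
    # Host and port are placeholders; replace them with the values configured above.
    import requests

    host = "namenode.example.com"   # value of Host
    port = 50070                    # value of Port (typically 9870 on Hadoop 3.x)

    # LISTSTATUS on the root directory returns JSON when WebHDFS is reachable.
    url = f"http://{host}:{port}/webhdfs/v1/?op=LISTSTATUS"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()

    for entry in resp.json()["FileStatuses"]["FileStatus"]:
        print(entry["type"], entry["pathSuffix"])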

Configure Replication Queries

CData Sync enables you to control replication with a point-and-click interface and with SQL queries. For each replication you wish to configure, navigate to the Jobs tab and click Add Job. Select the Source and Destination for your replication.

Replicate Entire Tables

To replicate an entire table, click Add Tables in the Tables section, choose the table(s) you wish to replicate, and click Add Selected Tables.

Customize Your Replication

You can use the Columns and Query tabs of a task to customize your replication. The Columns tab allows you to specify which columns to replicate, rename the columns at the destination, and even perform operations on the source data before replicating. The Query tab allows you to add filters, grouping, and sorting to the replication.

Schedule Your Replication

In the Schedule section, you can schedule a job to run automatically at specified intervals, ranging from once every 10 minutes to once every month.

Once you have configured the replication job, click Save Changes. You can configure any number of jobs to manage the replication of your HDFS data to Kafka.
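
After the first run completes, you can confirm that rows are landing in Kafka by consuming a few messages from the destination topic with any Kafka client. A minimal sketch with the Python confluent-kafka library follows; the broker address, credentials, and topic name are placeholder assumptions, so substitute the topic that your replication job actually writes to.

    # Read a handful of messages from the destination topic to verify replication (sketch).
    # Broker, credentials, and topic name are placeholders.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "kafka-broker:9092",
        "group.id": "hdfs-sync-verification",
        "auto.offset.reset": "earliest",
        "security.protocol": "SASL_PLAINTEXT",
        "sasl.mechanism": "PLAIN",
        "sasl.username": "kafka_user",
        "sasl.password": "kafka_password",
    })
    consumer.subscribe(["Files"])   # placeholder: the topic your job writes to

    try:
        for _ in range(10):
            msg = consumer.poll(timeout=5.0)
            if msg is None or msg.error():
                continue
            print(msg.value().decode("utf-8", errors="replace"))
    finally:
        consumer.close()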

Ready to get started?

Learn more or sign up for a free trial:

CData Sync