
Accelerate GenAI - Stream Data from MySQL to Kafka

· 3 min read
John Li

Overview

In the age of AI, Apache Kafka is becoming a pivotal force thanks to its high performance in real-time data streaming and processing. Many organizations are seeking to integrate data into Kafka for enhanced efficiency and business agility. In this case, a powerful data movement tool is of great importance, and BladePipe is one of the excellent choices.

This tutorial describes how to move data from MySQL to Kafka with BladePipe, using the CloudCanal Json Format by default. The key features of the pipeline include:

  • Support for multiple message formats.
  • Support for DDL synchronization. You can configure the topic to which DDL operations are written.
  • Support for automatic topic creation.
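BladePipe documents the exact CloudCanal Json Format schema; as a rough illustration only, here is how a consumer might parse a change-event envelope in Python. The field names `action`, `db`, `table`, and `data` are assumptions for this sketch, not the authoritative schema:

```python
import json

# Hypothetical CloudCanal-style change event. The real field names are
# defined by BladePipe's CloudCanal Json Format, not by this sketch.
raw = json.dumps({
    "action": "INSERT",
    "db": "shop",
    "table": "orders",
    "data": [{"id": 1, "amount": "9.99"}],
})

# A downstream consumer would receive this payload from the Kafka topic
# and route it by operation type and table.
event = json.loads(raw)
table_key = event["db"] + "." + event["table"]
```

In practice the payload arrives as the value of a Kafka message; the parsing step is the same regardless of which client library delivers it.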

Highlights

Automatic Topic Creation

Topics can be automatically created in the target Kafka during DataJob creation. In addition, you can configure the number of partitions based on your needs.

Batch Writing of Data

In BladePipe, operations of the same type on the same table are merged into a single message, enabling batch writing and reducing bandwidth usage. Thus, data processing efficiency is significantly increased.
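The merging described above can be sketched in plain Python. This is not BladePipe's implementation, just a toy illustration of how consecutive same-type operations on the same table collapse into one message:

```python
from itertools import groupby

# Illustrative change stream: (table, operation, row) tuples.
events = [
    ("orders", "INSERT", {"id": 1}),
    ("orders", "INSERT", {"id": 2}),
    ("users",  "UPDATE", {"id": 7}),
    ("orders", "INSERT", {"id": 3}),
]

def merge_batches(events):
    """Merge consecutive same-operation events on the same table into one message."""
    batches = []
    for (table, op), group in groupby(events, key=lambda e: (e[0], e[1])):
        batches.append({"table": table, "op": op, "rows": [e[2] for e in group]})
    return batches

batches = merge_batches(events)
# The two leading INSERTs on "orders" are merged, so four events
# become three messages.
```

Fewer, larger messages mean fewer produce requests to Kafka, which is where the bandwidth saving comes from.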


Resumable DataJob

Resumability is essential for the synchronization of large tables with billions of records.

By regularly recording the offsets, BladePipe allows resuming Full Data and Incremental DataTasks from the last offset after they are restarted, thus minimizing the impact of unexpected pauses on progress.
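The idea of offset-based resumability can be illustrated with a minimal Python sketch. The checkpoint file name and layout here are made up for the example; BladePipe records its offsets internally:

```python
import json
import os
import tempfile

# Hypothetical checkpoint location for this sketch only.
CHECKPOINT = os.path.join(tempfile.gettempdir(), "pipeline_offset.json")

def save_offset(offset):
    """Persist the last processed position."""
    with open(CHECKPOINT, "w") as f:
        json.dump({"offset": offset}, f)

def load_offset():
    """Return the last recorded position, or 0 if none exists."""
    try:
        with open(CHECKPOINT) as f:
            return json.load(f)["offset"]
    except FileNotFoundError:
        return 0

rows = list(range(100))
save_offset(0)
for i, row in enumerate(rows):
    # ... process row ...
    if (i + 1) % 25 == 0:   # checkpoint periodically, not on every row
        save_offset(i + 1)

# After a restart, work resumes from the last recorded offset
# instead of the beginning.
resume_at = load_offset()
```

Checkpointing periodically rather than per row keeps the overhead low while bounding how much work is redone after a restart.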

Procedure

Step 1: Install BladePipe

Follow the instructions in Install Worker (Docker) or Install Worker (Binary) to download and install a BladePipe Worker.

Step 2: Add DataSources

  1. Log in to the BladePipe Cloud.
  2. Click DataSource > Add DataSource.
  3. Select the source and target DataSource types, and fill out the setup form.

Step 3: Create a DataJob

  1. Click DataJob > Create DataJob.

  2. Select the source and target DataSources, and click Test Connection to ensure that the connections to the source and target DataSources are both successful.
    In the Advanced configuration of the target DataSource, choose CloudCanal Json Format for Message Format.


  3. Select Incremental for DataJob Type, together with the Full Data option.


  4. Select the tables and columns to be replicated. When selecting the columns, you can configure the number of partitions in the target topics.


  5. Confirm DataJob creation.

    Info

    The DataJob creation process involves several steps. Click Sync Settings > ConsoleJob, find the DataJob creation record, and click Details to view it.

    The DataJob creation with a source MySQL instance includes the following steps:

    • Schema Migration
    • Allocation of DataJobs to BladePipe Workers
    • Creation of DataJob FSM (Finite State Machine)
    • Completion of DataJob Creation
  6. Now the DataJob is created and started. BladePipe will automatically run the following DataTasks:

    • Schema Migration: The schemas of the source tables will be migrated to the target database.
    • Full Data Migration: All existing data from the source tables will be fully migrated to the target database.
    • Incremental Data Synchronization: Ongoing data changes will be continuously synchronized to the target instance.
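To illustrate what incremental synchronization amounts to, here is a toy Python replay of change events against a target table state. The event shape is an assumption for this sketch, not BladePipe's actual message format:

```python
# Target table state, keyed by primary key.
target = {}

# Illustrative change events, in commit order.
events = [
    {"op": "INSERT", "row": {"id": 1, "name": "alice"}},
    {"op": "UPDATE", "row": {"id": 1, "name": "alicia"}},
    {"op": "INSERT", "row": {"id": 2, "name": "bob"}},
    {"op": "DELETE", "row": {"id": 2}},
]

for ev in events:
    pk = ev["row"]["id"]
    if ev["op"] == "DELETE":
        target.pop(pk, None)
    else:
        # INSERT and UPDATE both upsert by primary key.
        target[pk] = ev["row"]

# Replaying the events in order makes the target converge to the
# source's latest state.
```

The key property is ordering: as long as changes for a given key are applied in commit order, the target converges to the source regardless of when the replay runs.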

FAQ

What other source DataSources does BladePipe support?

Currently, you can create a connection from MySQL, Oracle, SQL Server, PostgreSQL and MongoDB to Kafka. If you have any other requests, please give us feedback in the community.
