Tags: apache-kafka, druid, apache-kafka-mirrormaker

Migrate Kafka Topic to new Cluster (and impact on Druid)


I am ingesting data into Druid from a Kafka topic. Now I want to migrate that topic to a new Kafka cluster. What are the possible ways to do this without duplicating data and without downtime?
I have considered the possible migration approaches below.

  1. Manual Migration:
    • Create a topic with the same configuration in the new Kafka cluster.
    • Stop pushing data to the old Kafka cluster.
    • Start pushing data to the new cluster.
    • Stop consuming from the old cluster.
    • Start consuming from the new cluster.
  2. Produce data to both Kafka clusters:
    • Create a topic with the same configuration in the new Kafka cluster.
    • Start producing messages to both Kafka clusters.
    • Change the Kafka topic configuration in Druid.
    • Reset the Kafka topic offsets in Druid.
    • Start consuming from the new cluster.
    • After a successful migration, stop producing to the old Kafka cluster.
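
The dual-produce step of option 2 can be sketched as a thin wrapper that mirrors each record to both clusters. This is a minimal sketch, not a Kafka API: `old_send`/`new_send` are hypothetical stand-ins for real producer clients pointed at the two bootstrap servers. The offset-reset step corresponds to the Overlord's `POST /druid/indexer/v1/supervisor/{id}/reset` endpoint.

```python
# Sketch of the dual-produce window, assuming two already-configured
# producer callables; in a real setup these would wrap Kafka producer
# clients for the old and new clusters.

def dual_produce(record, old_send, new_send):
    """Mirror one record to both clusters during the migration window."""
    old_send(record)  # Druid still consumes from the old cluster
    new_send(record)  # the new cluster accumulates identical data

# Toy usage: lists stand in for the two clusters.
old_cluster, new_cluster = [], []
dual_produce({"ts": 1, "metric": 42}, old_cluster.append, new_cluster.append)
```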
  3. Use MirrorMaker 2 (MM2):
    • MM2 creates the Kafka topic in the new cluster.
    • Start replicating data from the old cluster to the new one.
    • Move producers and consumers to the new Kafka cluster.
    • The problems with this approach:
      1. Druid manages the Kafka topic's offsets in its metadata.
      2. By default, MM2 creates the replicated topic under a different name (prefixed with the source cluster alias) in the new cluster.
      3. Does Druid support topic names with a regex?
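
The topic-prefix problem can be avoided on newer Kafka releases. A rough MM2 config sketch (cluster aliases, broker addresses, and the topic name are placeholders) using the identity replication policy, available from Kafka 3.0 onward, so replicated topics keep their original names:

```properties
# Hypothetical connect-mirror-maker.properties sketch
clusters = old, new
old.bootstrap.servers = old-broker:9092
new.bootstrap.servers = new-broker:9092

old->new.enabled = true
old->new.topics = my-druid-topic

# The default policy names the replicated topic "old.my-druid-topic";
# the identity policy (Kafka 3.0+) keeps the original topic name.
replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```

With the default replication policy, the replicated topic would appear as `old.my-druid-topic` in the new cluster, which is what point 2 above refers to.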

Note: Druid manages Kafka topic offsets in its metadata.
Druid Version: 0.22.1
Old Kafka Cluster Version: 2.0


Solution

  • You can follow these steps:

    1- On your new cluster, create your new topic (the same name or a new name; it doesn't matter)

    2- Change your application config to send messages to the new Kafka cluster

    3- Wait until Druid has consumed all remaining messages from the old Kafka cluster; you can verify this by checking the supervisor's lag and offset info

    4- Suspend the supervisor, and wait for its tasks to publish their segments and exit successfully

    5- Edit the Druid datasource's supervisor spec: make sure useEarliestOffset is set to true, and change the connection info to consume from the new Kafka cluster (and the new topic name, if it changed)

    6- Save the spec and resume the supervisor. When Druid checks its stored offsets, it won't find them in the new Kafka cluster, and because useEarliestOffset is true it will start consuming from the beginning
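
The spec change in steps 5-6 amounts to editing the supervisor's `ioConfig`. Below is a minimal sketch assuming a standard Kafka supervisor spec; the host names and topic are placeholders, and the commented endpoints are the Overlord's supervisor API used to suspend, resubmit, and resume:

```python
import json

# Sketch of the ioConfig changes from step 5; only the fields relevant
# to the migration are shown.
spec = {
    "type": "kafka",
    "spec": {
        "ioConfig": {
            "topic": "my-druid-topic",
            "useEarliestOffset": True,  # step 5: read the new cluster from the start
            "consumerProperties": {
                "bootstrap.servers": "new-broker:9092"  # was old-broker:9092
            },
        }
    },
}

# Suspend first:  POST /druid/indexer/v1/supervisor/{id}/suspend
# Resubmit spec:  POST /druid/indexer/v1/supervisor   (JSON body below)
# Resume:         POST /druid/indexer/v1/supervisor/{id}/resume
body = json.dumps(spec)
```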