Note: this question is not about MirrorMaker 2.
I'd like to replicate a topic from a source cluster to a target Kafka cluster. Say the topic I'm replicating has 32 partitions.
Deployment: topic with 32 partitions => 32 consumers
Later, the partition count of the source topic is increased to 48.
In that case, will Kafka MirrorMaker 1 detect the 16 newly added partitions automatically, without redeploying the MirrorMaker 1 application?
Yes, it will detect partition changes, as any consumer does, and start consuming from the new partitions after its next metadata refresh.
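How quickly the new partitions are picked up is governed by the standard consumer setting `metadata.max.age.ms` (default 300000 ms, i.e. 5 minutes), which you can lower in the consumer properties file passed to MirrorMaker 1 via `--consumer.config`. A minimal sketch, with placeholder broker addresses and group id:

```properties
# consumer.properties passed to kafka-mirror-maker.sh --consumer.config
bootstrap.servers=source-broker:9092   # placeholder address
group.id=mirror-maker-group            # placeholder group id
# Refresh cluster metadata more often so newly added source
# partitions are detected sooner (default is 300000 = 5 min).
metadata.max.age.ms=60000
```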
However, it will not replicate that partition change by increasing the destination topic's partition count; it will (re)partition the data into however many partitions already exist in the destination topic.
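If you want the destination topic to match, you have to grow it yourself. A sketch using the stock `kafka-topics.sh` tool, with a placeholder topic name and broker address (note that partitions can only ever be increased, never decreased):

```shell
# Manually raise the destination topic's partition count to match
# the source (48). MirrorMaker 1 will not do this for you.
kafka-topics.sh --alter \
  --bootstrap-server target-broker:9092 \
  --topic my-replicated-topic \
  --partitions 48
```

Keep in mind that because MirrorMaker 1 re-partitions by key on the producer side, messages may land in different partition numbers than they occupied on the source cluster, even when the counts match.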