apache-kafka, spring-kafka, kafka-producer-api, kafka-transactions-api

Zombie fencing of a single producer with Spring Kafka


I have a Spring Kafka application with a single producer that is NOT part of a pure Kafka consume->process->produce chain, i.e. the producer is not triggered by a Kafka consumer.

According to my current understanding of Kafka's zombie fencing feature, I would like this producer to always use the same transactional.id, so that if a new instance of this app starts up, all old instances are fenced off.

Now the Spring Kafka docs say that the DefaultKafkaProducerFactory maintains a cache of producers whose transactional.ids differ by an integer suffix. It is not clear to me under which circumstances I will end up with a producer that has a different transactional.id, and whether reliable fencing of old app instances is achievable with Spring Kafka.
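For context, the prefix the docs refer to is the one configured on the producer factory (with Spring Boot it can be set as a property; the value below is just an illustrative example):

```properties
# Illustrative prefix; the factory appends an integer suffix per cached producer,
# so the actual transactional.ids become my-app.tx.0, my-app.tx.1, ...
spring.kafka.producer.transaction-id-prefix=my-app.tx.
```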

Any clarifications are greatly appreciated.


Solution

  • The cache is needed to support multi-threading (you cannot use the same transactional producer concurrently on different threads); if you only use one thread (or ensure that the sends are under the control of a lock or synchronization), there will only be one transactional producer, and therefore one stable transactional.id, which gives you the fencing you want.
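    To make the cache behavior concrete, here is a rough stdlib-only model of how such a cache assigns suffixed ids (all class and method names here are made up for illustration; the real implementation lives in spring-kafka's DefaultKafkaProducerFactory). A new producer, with the next integer suffix, is only created when the cache is empty, i.e. when another thread currently holds a producer:

    ```java
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative model: each checked-out "producer" is represented just by
    // its transactional.id (prefix + integer suffix).
    class TxProducerCache {
        private final String prefix;
        private final AtomicInteger suffix = new AtomicInteger();
        private final Queue<String> cache = new ConcurrentLinkedQueue<>();

        TxProducerCache(String prefix) { this.prefix = prefix; }

        // Borrow a producer; create one only if the cache is empty.
        String checkOut() {
            String id = cache.poll();
            return id != null ? id : prefix + suffix.getAndIncrement();
        }

        // Return it to the cache for reuse.
        void checkIn(String id) { cache.add(id); }
    }

    public class CacheDemo {
        public static void main(String[] args) {
            TxProducerCache f = new TxProducerCache("myApp.tx.");

            // Single-threaded use: check out, send, check in -> the same
            // producer (same transactional.id) is reused every time.
            String a = f.checkOut(); f.checkIn(a);
            String b = f.checkOut(); f.checkIn(b);
            System.out.println(a.equals(b));    // true: "myApp.tx.0" both times

            // Concurrent use: two producers held at once -> a second
            // transactional.id with the next suffix is created.
            String c = f.checkOut();
            String d = f.checkOut();            // cache empty -> new id
            System.out.println(c + " " + d);    // myApp.tx.0 myApp.tx.1
        }
    }
    ```

    So as long as no two sends ever overlap, the suffix counter never advances past 0 and only one transactional.id is ever registered with the broker.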

    If you want different behavior, you can create your own implementation of ProducerFactory (but you will still face the same multi-threading issues).
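    The lock-based approach from the first point can be sketched as follows (names are illustrative; in a real app the body of doInTransaction would wrap beginTransaction()/send()/commitTransaction() on the one Kafka producer, e.g. via KafkaTemplate):

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReentrantLock;

    // Guard the single producer with a lock so only one thread can run a
    // transaction at a time; every caller then shares one transactional.id.
    class SingleProducerGate {
        private final ReentrantLock lock = new ReentrantLock();
        private final String transactionalId;   // fixed for the app's lifetime

        SingleProducerGate(String transactionalId) {
            this.transactionalId = transactionalId;
        }

        // Runs work under the lock and reports which id was used.
        String doInTransaction(Runnable work) {
            lock.lock();
            try {
                work.run();                     // the actual send would happen here
                return transactionalId;         // same id no matter which thread calls
            } finally {
                lock.unlock();
            }
        }
    }

    public class GateDemo {
        public static void main(String[] args) throws InterruptedException {
            SingleProducerGate gate = new SingleProducerGate("my-app.tx");
            List<String> ids = java.util.Collections.synchronizedList(new ArrayList<>());

            // Several threads, but every transaction reports the same id.
            List<Thread> threads = new ArrayList<>();
            for (int i = 0; i < 4; i++) {
                Thread t = new Thread(() -> ids.add(gate.doInTransaction(() -> {})));
                t.start();
                threads.add(t);
            }
            for (Thread t : threads) t.join();

            System.out.println(ids.size() + " sends, distinct ids: "
                    + ids.stream().distinct().count());   // 4 sends, distinct ids: 1
        }
    }
    ```

    The trade-off is throughput: serializing all sends through one lock removes any parallelism the producer cache would otherwise give you, which is the price of a single, stable transactional.id.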