apache-kafka, kafka-consumer-api, spring-cloud-stream

Kafka error handling and dead-letter queues


Can someone help me understand why we need the dead-letter queue mechanism when we have Kafka consumer offsets to our rescue? If I receive a message in my Kafka consumer, I can always choose when to commit my offset. So if some failure happens, for example the consumer shuts down while processing a message, the offset commit (which happens in code) is not performed, and when the consumer comes back up it reads again from the last committed position. Isn't this enough to keep things simple? Why do we need to enable a DLQ and route failed messages to it? Is there an added advantage, or am I missing something important? By having a DLQ I have to write code to send messages from the DLQ back to the main topic, which complicates things.
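
For context, this is the kind of commit-from-code loop being described: a minimal sketch using the plain Kafka consumer API with auto-commit disabled (not the spring-cloud-stream binding from the tags). The broker address, group id, and the orders topic are placeholder assumptions. If the process dies mid-batch, nothing is committed and the batch is re-read on restart.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        props.put("group.id", "orders-processor");         // placeholder group id
        props.put("enable.auto.commit", "false");          // offsets are committed only from code
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // if this throws or the app crashes, nothing below runs
                }
                // Committed only after the whole batch succeeded; on restart the consumer
                // re-reads everything after the last committed offset.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("processing offset %d: %s%n", record.offset(), record.value());
    }
}
```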


Solution

  • When you commit offset N, the next time you consume from that partition you will start fetching at N+1. An offset is a single position in the partition, not a per-message acknowledgement, so you don't have the granularity to commit each message individually.

    In other words, if you've got 10 messages in your partition and message 5 fails to be processed, you cannot commit any offset past 5 without giving up the ability to consume message 5 again in the future.

    So the problem arises when only a few messages fail to be processed: those are the ones you route to the DLQ, so the consumer can commit past them and keep making progress, as in the sketch below.
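
Here is a hedged sketch of that idea, using the plain Kafka consumer and producer API rather than spring-cloud-stream's binder. The broker address, group id, and the orders / orders.DLQ topic names are placeholder assumptions. A record that fails processing is parked on the DLQ topic, so the offset can still be committed and the consumer keeps moving past the poison message.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DlqRoutingConsumer {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        consumerProps.put("group.id", "orders-processor");        // placeholder group id
        consumerProps.put("enable.auto.commit", "false");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> dlqProducer = new KafkaProducer<>(producerProps)) {

            consumer.subscribe(Collections.singletonList("orders")); // placeholder main topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    try {
                        process(record);
                    } catch (Exception e) {
                        // Park only the failed record on the DLQ topic; the rest of the
                        // batch is unaffected and the offset can still move forward.
                        dlqProducer.send(new ProducerRecord<>("orders.DLQ", record.key(), record.value()));
                    }
                }
                consumer.commitSync(); // safe to advance: failures were captured on the DLQ
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        if (record.value() == null || record.value().isEmpty()) {
            throw new IllegalArgumentException("empty payload"); // stand-in for a poison message
        }
        System.out.printf("processed offset %d%n", record.offset());
    }
}
```

If you are on spring-cloud-stream (per the tags), you typically wouldn't hand-roll this loop: the Kafka binder has a consumer-side enableDlq option that does this routing for you after retries are exhausted, and you only need custom code if you later want to replay DLQ messages back to the main topic.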