We are currently switching from an older version of the Eclipse Paho MQTT client to version 1.2 of the HiveMQ MQTT Client: https://github.com/hivemq/hivemq-mqtt-client
We are currently playing around with the async version of the client, which takes a Consumer function as a callback.
One of our MQTT client applications has to process/consume a lot of messages on many different topics, and the processing of one message should not have to wait for the previous one to finish. We are not sure what the best way is to achieve parallel processing of messages with only one client instance.
In the documentation linked above, there is an optional executor that can be defined:
client.subscribeWith()
.topicFilter("test/topic")
.qos(MqttQos.EXACTLY_ONCE)
.callback(System.out::println)
.executor(executor) // optional
.send();
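The executor is not defined in the documentation example; we assume it is just a plain java.util.concurrent.Executor, e.g. a fixed thread pool:

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

Executor executor = Executors.newFixedThreadPool(4); // arbitrary pool size, just for illustration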
How does the AsyncClient behave when no executor is defined? Is everything then processed serially, in a blocking way? That somehow seems to defeat the purpose of an async client with a callback.
In our old implementation we were using shared subscriptions (a non-standard feature in HiveMQ 3 that is now a standard feature of MQTT 5) with multiple client instances constantly waiting on the same topics and processing messages alternately.
However, given the HiveMQ Client API (which unfortunately lacks more detailed explanations and examples), we hope to find a more elegant and simple way to achieve parallel processing with a thread pool or something similar.
Any help appreciated!
Usually shared subscriptions are only needed when scaling an application to multiple machines. If your processing of the messages can be parallelized, there should be no reason to use a shared subscription on a single machine. If the message load increases in the future, you can still use shared subscriptions to scale out to multiple machines later.
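For completeness, a shared subscription in MQTT 5 is just a regular subscription whose topic filter is prefixed with $share/<group>/; a minimal sketch (the group name "workers" is made up):

client.subscribeWith()
        .topicFilter("$share/workers/test/topic") // all clients in group "workers" share the message load
        .qos(MqttQos.EXACTLY_ONCE)
        .callback(System.out::println)
        .send();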
As MQTT provides ordering guarantees, the HiveMQ MQTT Client calls the same callback serially. Multiple callbacks for different subscriptions are executed in parallel. For a single callback, only your application can choose to break up the ordering: to do this, you can simply hand the messages from the callback over to parallel workers.
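A minimal sketch of that hand-off, assuming a single async MQTT 5 client; the pool size, broker host, and process method are assumptions for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.hivemq.client.mqtt.MqttClient;
import com.hivemq.client.mqtt.datatypes.MqttQos;
import com.hivemq.client.mqtt.mqtt5.Mqtt5AsyncClient;
import com.hivemq.client.mqtt.mqtt5.message.publish.Mqtt5Publish;

public class ParallelConsumer {

    // assumed pool size; tune it to your workload
    private static final ExecutorService WORKERS = Executors.newFixedThreadPool(8);

    public static void main(String[] args) {
        Mqtt5AsyncClient client = MqttClient.builder()
                .useMqttVersion5()
                .serverHost("localhost") // assumed broker address
                .buildAsync();

        client.connect().thenCompose(connAck ->
                client.subscribeWith()
                        .topicFilter("test/topic")
                        .qos(MqttQos.EXACTLY_ONCE)
                        // the callback only schedules the real work, so messages on this
                        // subscription are processed in parallel (ordering is given up)
                        .callback(publish -> WORKERS.submit(() -> process(publish)))
                        .send());
    }

    // hypothetical method doing the actual processing
    private static void process(Mqtt5Publish publish) {
        System.out.println(publish);
    }
}

The callback itself stays fast because it only submits a task; how much ordering you give up is then entirely a property of the worker pool you choose.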