activemq-artemis

Heap memory usage exceeds the specified global-max-size


I'm testing the memory allocation of ActiveMQ Artemis (2.37.0 and 2.41.0) in different scenarios. One of the scenarios is how the global-max-size parameter can limit memory consumption while the broker accepts many incoming messages. My scenario: each of 100 AMQP producers (Qpid JMS 1.13.0) sends 200 messages of about 70 KB concurrently (24 threads), i.e. roughly 1.4 GB in total.
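
For reference, each producer is roughly a loop like the sketch below (the broker URL amqp://localhost:5672, the destination name test.load, and the BytesMessage payload are illustrative assumptions; in the real test 100 of these run concurrently on 24 threads):

    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.qpid.jms.JmsConnectionFactory;

    public class ProducerSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical broker URL and destination name.
            JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
            byte[] payload = new byte[70 * 1024]; // ~70 KB per message

            try (Connection connection = factory.createConnection()) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // Sending to a queue that doesn't exist yet relies on
                // auto-created addresses/queues, as in the test.
                Queue queue = session.createQueue("test.load");
                MessageProducer producer = session.createProducer(queue);
                for (int i = 0; i < 200; i++) {
                    BytesMessage message = session.createBytesMessage();
                    message.writeBytes(payload);
                    producer.send(message);
                }
            }
        }
    }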

In logs I see that paging is started:

2025-06-19 12:23:13,409 INFO  [org.apache.activemq.artemis.core.server] AMQ222038: Starting paging on address '71dfb642-47eb-45de-a6a5-21ccc2c56d38'; size=8686941 bytes (123 messages); maxSize=10485760 bytes (-1 messages); globalSize=211382291 bytes (2993 messages); globalMaxSize=209715200 bytes (-1 messages);

But the profiler shows about 1.4 GB of heap in use, nearly the same as the total message size.

The behaviour is the same with both the NIO and the JDBC journal.

Could someone explain how global-max-size works and how to configure the broker so that it stops loading all messages into memory?
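
For reference, the paging-related parts of broker.xml look roughly like the sketch below; the numeric limits match the globalMaxSize and maxSize values reported in the log above, while the match pattern is an assumption:

    <core xmlns="urn:activemq:core">
      <!-- Total memory the broker may use for messages across all addresses
           before paging; 209715200 bytes matches globalMaxSize in the log. -->
      <global-max-size>209715200</global-max-size>

      <address-settings>
        <!-- match="#" (every address) is an assumption. -->
        <address-setting match="#">
          <!-- Per-address memory limit; 10485760 bytes matches maxSize in the log. -->
          <max-size-bytes>10485760</max-size-bytes>
          <!-- Start paging messages to disk when the limit is reached. -->
          <address-full-policy>PAGE</address-full-policy>
        </address-setting>
      </address-settings>
    </core>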

The heap contains many QueueImpl objects. I'm using auto-created addresses and queues; could this be the issue? If I also start consumers on the queues, the heap usage drops significantly (by a factor of about 3-4).


I've also observed that setting page-size-bytes=1M and max-read-page-bytes=2M roughly halves the heap usage (to about 700 MB). That's quite unexpected to me, because nothing external is reading from the queues.
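
Those two settings sit in the same address-setting block, e.g. (match pattern again assumed):

    <address-settings>
      <address-setting match="#">
        <!-- Size of each page file written to disk. -->
        <page-size-bytes>1M</page-size-bytes>
        <!-- Limit on how many bytes of paged messages are read back into
             memory per queue for delivery. -->
        <max-read-page-bytes>2M</max-read-page-bytes>
      </address-setting>
    </address-settings>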


Solution

  • The global-max-size only applies to message data. Keep in mind that the JVM's heap contains much more than just messages, so setting a global-max-size of 200 MB doesn't mean that the heap will only be around 200 MB. The broker also has to keep track of all the addresses, queues, metrics, connections, sessions, consumers, etc. According to the heap-dump data you provided, it looks like you might have a large number of queues, and every individual queue has a memory overhead regardless of how many messages it contains.

    Generally speaking, you can expect memory usage to drop as you consume messages - especially as the queues empty out completely.

    Also, it's worth noting that changing the journal type won't meaningfully change how much memory the broker uses with regard to paging since the paging subsystem is abstracted away from the journal implementation.