Tags: docker, hornetq

Connection refused when HornetQ runs in Docker


I'm testing a very simple scenario: the test located under examples/jms/queue runs successfully against a standalone server on my local machine. Running the same test against a dockerized HornetQ 2.4.0 gives me the error:

Connection refused: connect

I made sure to publish port 1099, and I can see the port mapping:

0.0.0.0:1099->1099/tcp

Telnet-ing to localhost 1099 returns gibberish, which means something is listening there, but running the test against jnp://localhost:1099 fails as described.

The configuration of hornetq-beans.xml:

<bean name="StandaloneServer" class="org.hornetq.jms.server.impl.StandaloneNamingServer">
   <constructor>
      <parameter>
         <inject bean="HornetQServer"/>
      </parameter>
   </constructor>
   <property name="port">1099</property>
   <property name="bindAddress">0.0.0.0</property>
   <property name="rmiPort">1098</property>
   <property name="rmiBindAddress">0.0.0.0</property>
</bean>

Result of netstat -plunt:

# netstat -plunt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5445            0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:1098            0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:1099            0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:39437           0.0.0.0:*               LISTEN      10/java
tcp        0      0 0.0.0.0:5455            0.0.0.0:*               LISTEN      10/java

My Dockerfile:

FROM openjdk:8

WORKDIR /app

COPY ./hornetq-2.4.0.Final .

EXPOSE 1099 1098 5445 5455

ENTRYPOINT [ "/bin/bash", "-c", "cd bin/; ./run.sh" ]

The updated part of hornetq-configuration.xml:

<connectors>
   <connector name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5445"/>
   </connector>
   
   <connector name="netty-throughput">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5455"/>
      <param key="batch-delay" value="50"/>
   </connector>
</connectors>

<acceptors>
   <acceptor name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5445"/>
   </acceptor>
   
   <acceptor name="netty-throughput">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
      <param key="host"  value="0.0.0.0"/>
      <param key="port"  value="5455"/>
      <param key="batch-delay" value="50"/>
      <param key="direct-deliver" value="false"/>
   </acceptor>
</acceptors>


The command I'm using to run the image is:

docker run -d -p 1098:1098 -p 1099:1099 -p 5445:5445 -p 5455:5455 hornetq

Solution

  • The host value of 0.0.0.0 for your connector configurations in hornetq-configuration.xml is invalid: 0.0.0.0 is a bind address, which makes sense for an acceptor but not for a connector. This is why the broker logs:

    Invalid "host" value "0.0.0.0" detected for "netty" connector. Switching to "8ba14b02658a". If this new address is incorrect please manually configure the connector to use the proper one.
    

    I assume 8ba14b02658a is not the proper host value, which is why the connection continues to fail. As the log indicates, you need to configure the connector with a value that is valid for your environment: a hostname or IP address that the client on your host can use to reach the broker running in Docker. The connector is simply a configuration holder (sometimes called a "stub") that is passed back to the remote client when it performs the JNDI lookup; the remote client then uses this stub to make the actual JMS connection to the broker. Therefore, whatever host and port are configured on the connector are what the client will use.
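    As a sketch, assuming the Docker host is reachable from the client at 192.168.1.10 (a placeholder; substitute whatever address is actually reachable in your environment), the connector in hornetq-configuration.xml could look like:

    ```xml
    <connectors>
       <!-- host must be an address the *client* can reach, not a bind address -->
       <connector name="netty">
          <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
          <param key="host"  value="192.168.1.10"/>
          <param key="port"  value="5445"/>
       </connector>
    </connectors>
    ```

    The acceptors, on the other hand, can keep host 0.0.0.0, since for an acceptor the value only controls which interface the broker binds to.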

    A simpler option would be to use --network host when you run the Docker container, e.g.:

    docker run -d --network host hornetq
    

    This will make the container use the host's network stack. Once you set the host values for your connector configurations in hornetq-configuration.xml back to localhost, everything should work. You can read more about this option in the Docker documentation.
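    With host networking, a minimal sketch of the reverted connector (the acceptor's 0.0.0.0 can stay, as it is a valid bind address):

    ```xml
    <connector name="netty">
       <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
       <!-- localhost works here because the container shares the host's network stack -->
       <param key="host"  value="localhost"/>
       <param key="port"  value="5445"/>
    </connector>
    ```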

    It's worth noting that there hasn't been a release of HornetQ in almost 5 years now. The HornetQ code-base was donated to the Apache ActiveMQ community in June of 2015 and is now known as ActiveMQ Artemis - the next-generation broker from ActiveMQ. I would strongly recommend migrating to ActiveMQ Artemis and discontinuing use of HornetQ.

    Furthermore, if you migrated to ActiveMQ Artemis you wouldn't experience this particular problem, as the JNDI implementation has changed completely. There is no longer an actual JNDI server; the JNDI implementation is 100% client-side, so you'd just need to configure the broker URL in the JNDI properties.
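    For illustration, a client-side jndi.properties for ActiveMQ Artemis might look like the sketch below (assuming the broker listens on the default port 61616 and a queue named exampleQueue exists; adjust both to your setup):

    ```properties
    # The InitialContext is built entirely on the client; no JNDI server is contacted
    java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
    connectionFactory.ConnectionFactory=tcp://localhost:61616
    queue.queue/exampleQueue=exampleQueue
    ```

    The client resolves these entries locally, so there is no stub handed back from the broker and nothing equivalent to the 1099/1098 naming ports to expose.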