docker, openshift, activemq-artemis

"Unable to announce backup" warning when an ActiveMQ Artemis master/slave pair is deployed in Openshift


I'm trying to deploy an ActiveMQ Artemis cluster in master/slave mode in OpenShift, but I continually get this WARN:

2019-01-09 07:50:40,192 WARN  [org.apache.activemq.artemis.core.server] AMQ222137: Unable to announce backup, retrying: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ119012: Timed out waiting to receive initial broadcast from cluster]
   at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:759) [artemis-core-client-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:635) [artemis-core-client-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:617) [artemis-core-client-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.core.server.cluster.BackupManager$BackupConnector$1.run(BackupManager.java:246) [artemis-server-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.6.3.jar:2.6.3]
   at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) [artemis-commons-2.6.3.jar:2.6.3]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_181]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_181]
   at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.6.3.jar:2.6.3]

When I use this configuration on the same machine it works without problems, but when I dockerize it and deploy one broker per container it fails.

Here is the configuration for broker 1:

<?xml version="1.0"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
    <name>0.0.0.0</name>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>ASYNCIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-file-size>10M</journal-file-size>
    <journal-buffer-timeout>36000</journal-buffer-timeout>
    <journal-max-io>4096</journal-max-io>
    <disk-scan-period>5000</disk-scan-period>
    <max-disk-usage>90</max-disk-usage>
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <ha-policy>
     <shared-store>
      <master>
       <failover-on-shutdown>true</failover-on-shutdown>
      </master>
     </shared-store>
    </ha-policy>

    <connectors>
         <connector name="netty-connector">tcp://0.0.0.0:61616</connector>
    </connectors>

    <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
    </acceptors>

    <broadcast-groups>
       <broadcast-group name="bg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <broadcast-period>1000</broadcast-period>
          <connector-ref>netty-connector</connector-ref>
       </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
       <discovery-group name="dg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <refresh-timeout>60000</refresh-timeout>
       </discovery-group>
    </discovery-groups>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <discovery-group-ref discovery-group-name="dg-group1"/>
       </cluster-connection>
    </cluster-connections>

    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!--default for catch all-->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
  </core>
</configuration>

Here is the configuration for broker 2:

<?xml version="1.0"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
    <name>0.0.0.0</name>
    <persistence-enabled>true</persistence-enabled>
    <journal-type>ASYNCIO</journal-type>
    <paging-directory>data/paging</paging-directory>
    <bindings-directory>data/bindings</bindings-directory>
    <journal-directory>data/journal</journal-directory>
    <large-messages-directory>data/large-messages</large-messages-directory>
    <journal-datasync>true</journal-datasync>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-file-size>10M</journal-file-size>
    <journal-max-io>4096</journal-max-io>
    <disk-scan-period>5000</disk-scan-period>
    <max-disk-usage>90</max-disk-usage>
    <!-- should the broker detect dead locks and other issues -->
    <critical-analyzer>true</critical-analyzer>
    <critical-analyzer-timeout>120000</critical-analyzer-timeout>
    <critical-analyzer-check-period>60000</critical-analyzer-check-period>
    <critical-analyzer-policy>HALT</critical-analyzer-policy>
    <ha-policy>
     <shared-store>
      <slave>
       <failover-on-shutdown>true</failover-on-shutdown>
      </slave>
     </shared-store>
    </ha-policy>

    <connectors>
         <connector name="netty-connector">tcp://0.0.0.0:61617</connector>
    </connectors>

    <acceptors>
         <acceptor name="netty-acceptor">tcp://0.0.0.0:61617</acceptor>
    </acceptors>

    <broadcast-groups>
       <broadcast-group name="bg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <broadcast-period>1000</broadcast-period>
          <connector-ref>netty-connector</connector-ref>
       </broadcast-group>
    </broadcast-groups>

    <discovery-groups>
       <discovery-group name="dg-group1">
          <group-address>${udp-address:231.7.7.7}</group-address>
          <group-port>9876</group-port>
          <refresh-timeout>60000</refresh-timeout>
       </discovery-group>
    </discovery-groups>

    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <discovery-group-ref discovery-group-name="dg-group1"/>
       </cluster-connection>
    </cluster-connections>

    <security-settings>
      <security-setting match="#">
        <permission type="createNonDurableQueue" roles="amq"/>
        <permission type="deleteNonDurableQueue" roles="amq"/>
        <permission type="createDurableQueue" roles="amq"/>
        <permission type="deleteDurableQueue" roles="amq"/>
        <permission type="createAddress" roles="amq"/>
        <permission type="deleteAddress" roles="amq"/>
        <permission type="consume" roles="amq"/>
        <permission type="browse" roles="amq"/>
        <permission type="send" roles="amq"/>
        <!-- we need this otherwise ./artemis data imp wouldn't work -->
        <permission type="manage" roles="amq"/>
      </security-setting>
    </security-settings>
    <address-settings>
      <!-- if you define auto-create on certain queues, management has to be auto-create -->
      <address-setting match="activemq.management#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
      <!--default for catch all-->
      <address-setting match="#">
        <dead-letter-address>DLQ</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <redelivery-delay>0</redelivery-delay>
        <!-- with -1 only the global-max-size is in use for limiting -->
        <max-size-bytes>-1</max-size-bytes>
        <message-counter-history-day-limit>10</message-counter-history-day-limit>
        <address-full-policy>PAGE</address-full-policy>
        <auto-create-queues>true</auto-create-queues>
        <auto-create-addresses>true</auto-create-addresses>
        <auto-create-jms-queues>true</auto-create-jms-queues>
        <auto-create-jms-topics>true</auto-create-jms-topics>
      </address-setting>
    </address-settings>
    <addresses>
      <address name="DLQ">
        <anycast>
          <queue name="DLQ"/>
        </anycast>
      </address>
      <address name="ExpiryQueue">
        <anycast>
          <queue name="ExpiryQueue"/>
        </anycast>
      </address>
    </addresses>
  </core>
</configuration>

After enabling DEBUG logging this shows up:

2019-01-10 15:40:37,753 DEBUG [org.apache.activemq.artemis.core.server.cluster.BackupManager] DiscoveryBackupConnector [group=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=60000, discoveryInitialWaitTimeout=10000}]:: announcing TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=10-130-0-119 to ServerLocatorImpl (identity=backupLocatorFor='ActiveMQServerImpl::serverUUID=0cef6ba4-14ee-11e9-83b3-0a580a820077') [initialConnectors=[], discoveryGroupConfiguration=DiscoveryGroupConfiguration{name='dg-group1', refreshTimeout=60000, discoveryInitialWaitTimeout=10000}]

Solution

  • There are a couple of problems here...

    First, your connector configurations are incorrect. On broker 1 you're using this:

    <connector name="netty-connector">tcp://0.0.0.0:61616</connector>
    

    And on broker 2 you're using this:

    <connector name="netty-connector">tcp://0.0.0.0:61617</connector>
    

    Each cluster member sends its connector information to the other members to tell them how to connect back to it. In your case, broker 1 is telling broker 2 that it can connect back to broker 1 using tcp://0.0.0.0:61616. This is, of course, not true, since the meta-address 0.0.0.0 doesn't actually point to broker 1. When broker 2 tries to use that URL it fails, which is exactly what you're seeing.

    This works when both brokers run on the same host because 0.0.0.0 resolves the same as localhost there.

    You need to use a valid IP address or hostname in your connector configurations so that the cluster can form properly.
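
    For example, here is a minimal sketch for broker 1, assuming the containers can resolve each other under the hostnames broker1 and broker2 (hypothetical names; in OpenShift these would typically be Service names). Only the connector has to advertise a reachable address; the acceptor can keep binding to all interfaces:

    <!-- advertise an address other cluster members can actually reach -->
    <connector name="netty-connector">tcp://broker1:61616</connector>

    <!-- binding the acceptor to all interfaces is still fine -->
    <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>

    Broker 2 would advertise tcp://broker2:61617 in the same way.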

    Second, the DEBUG logging you provided indicates that multicast traffic isn't working between the two Docker instances: the backup's locator shows initialConnectors=[], meaning it never received a broadcast from the master. I recommend you use static clustering to take multicast out of the picture. On broker 1 you can use something like:

    <connectors>
         <connector name="netty-connector">tcp://broker1:61616</connector>
         <connector name="broker2-connector">tcp://broker2:61617</connector>
    </connectors>
    
    ...
    
    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <static-connectors>
             <connector-ref>broker2-connector</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>
    

    On broker 2 you can use something like:

    <connectors>
         <connector name="netty-connector">tcp://broker2:61617</connector>
         <connector name="broker1-connector">tcp://broker1:61616</connector>
    </connectors>
    
    ...
    
    <cluster-connections>
       <cluster-connection name="my-cluster">
          <connector-ref>netty-connector</connector-ref>
          <static-connectors>
             <connector-ref>broker1-connector</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>
    

    Of course, you'll need to use the actual IP addresses or hostnames of your nodes in these connectors.
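
    Also, since the actual hostnames will likely differ between environments (e.g. plain Docker vs. OpenShift), one option is to externalize them using the same property-substitution syntax your configuration already uses for udp-address. Here is a sketch for broker 1, where broker1-host and broker2-host are hypothetical property names (the value after the colon is the default):

    <connectors>
         <connector name="netty-connector">tcp://${broker1-host:broker1}:61616</connector>
         <connector name="broker2-connector">tcp://${broker2-host:broker2}:61617</connector>
    </connectors>

    You could then set these per environment, e.g. by passing -Dbroker1-host=... via JAVA_ARGS in etc/artemis.profile.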