I'm extending some tests for a system that uses Infinispan with JGroups for clustering. During test execution the application is mostly booted up, so JGroups forms a cluster. When Jenkins runs builds in parallel, those builds cluster with each other, which produces side effects that make the tests fail randomly.
Sadly I cannot change much of the configuration, but I can provide a new infinispan.xml and a new jgroups.xml file for test execution. I fiddled a lot with these config files, trying to deliberately break clustering by binding to random ports, disabling loopback, etc., but JGroups is just too robust to fail.
I'm searching for a config that guarantees the lookup of new nodes fails, even though multiple parallel processes use the same cluster name.
My configs look like this (I can change pretty much everything except the transport tag):
infinispan.xml:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:11.0 http://www.infinispan.org/schemas/infinispan-config-11.0.xsd"
            xmlns="urn:infinispan:config:11.0">
    <jgroups>
        <stack-file name="jgroups" path="jgroups-test.xml"/>
    </jgroups>
    <cache-container statistics="true" default-cache="local-cache">
        <jmx enabled="true"/>
        <!-- Sadly I cannot remove transport altogether since the application code always requires a cluster -->
        <transport cluster="mycluster"/>
        <local-cache-configuration name="__vertx.distributed.cache.configuration"/>
        <!-- a lot of local caches here which are distributed in the main infinispan.xml... -->
    </cache-container>
</infinispan>
jgroups-test.xml:
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd">
    <UDP bind_addr="match-address"
         bind_port="0"
         receive_on_all_interfaces="false"
         receive_interfaces="255.255.255.255"
         disable_loopback="true"
         enable_diagnostics="false"/>
    <TCP bind_addr="match-address"
         bind_port="0"
         enable_diagnostics="false"
         client_bind_port="0"
         receive_on_all_interfaces="false"/>
</config>
Any help on this curious matter is highly appreciated :)
The problem is that you're missing the stack attribute on your transport element, so you're still using the udp stack bundled with Infinispan.
<transport cluster="mycluster" stack="jgroups"/>
However, you have by now removed enough protocols from the stack that I don't think it's usable as is: a stack has exactly one transport at the bottom (you can't list both UDP and TCP), and it needs at least a discovery protocol, reliable delivery, and group membership on top of it.
The best option to avoid interfering with existing clusters is to use only TCP or TCP_NIO2 as the transport (no UDP) and LOCAL_PING as the discovery protocol (no PING or MPING).
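A minimal stack along those lines might look like the sketch below. It is an illustration, not the exact file from the test suite: the loopback bind address, the ephemeral port, and the choice of protocols above the transport are my assumptions. The key property is that LOCAL_PING only discovers members inside the same JVM, so parallel Jenkins processes can never find each other even with an identical cluster name.
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.2.xsd">
    <!-- Single transport: TCP bound to loopback on an ephemeral port,
         so nothing is reachable from other machines -->
    <TCP bind_addr="127.0.0.1"
         bind_port="0"
         enable_diagnostics="false"/>
    <!-- Discovery restricted to the current JVM:
         parallel build processes never see each other -->
    <LOCAL_PING/>
    <!-- Minimal protocols for a functional stack:
         reliable multicast/unicast delivery, message stability,
         group membership, and fragmentation -->
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <FRAG3/>
</config>
Remember to reference this stack from infinispan.xml via stack="jgroups" as shown above, otherwise the bundled udp stack is still used.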
For reference, this is what the Infinispan test suite uses:
https://github.com/infinispan/infinispan/blob/main/core/src/test/resources/stacks/tcp.xml
You can make UDP work as well if you set a different mcast_addr for each cluster and/or you disable ip_mcast, but it's much easier to go with TCP and LOCAL_PING.
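If you do have to stay on UDP, one sketch is to give each Jenkins executor its own multicast address via JGroups' ${property:default} substitution. The property names test.mcast_addr and test.mcast_port and their defaults below are made up for illustration:
<!-- Transport fragment only; each build passes e.g.
     -Dtest.mcast_addr=228.6.7.9 so clusters never overlap -->
<UDP mcast_addr="${test.mcast_addr:228.6.7.8}"
     mcast_port="${test.mcast_port:46655}"
     bind_port="0"/>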