keycloak, infinispan

Keycloak HA: distributed cache check


I'm trying to test a Keycloak HA setup by simulating it with docker-compose. My docker-compose file has HAProxy as the load balancer, two Keycloak nodes, and an external SQL Server instance for the shared Keycloak database. The two nodes are configured to use the distributed cache (Infinispan) and JDBC_PING for node discovery. Everything seems to be working: each node added its registration row to the JDBC_PING table, and the console logs show that cluster discovery completed without errors.
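For reference, a minimal sketch of a setup like this (service names, images, and environment variables are illustrative assumptions, not the exact file; the `JGROUPS_DISCOVERY_*` and `CACHE_OWNERS_COUNT` variables follow the convention of the legacy `jboss/keycloak` image):

```yaml
version: "3"
services:
  keycloak-a:
    image: jboss/keycloak                     # assumption: legacy WildFly-based image
    environment:
      DB_VENDOR: mssql                        # shared external DB
      DB_ADDR: db
      JGROUPS_DISCOVERY_PROTOCOL: JDBC_PING   # node discovery via rows in the shared DB
      CACHE_OWNERS_COUNT: "2"                 # keep a copy of each cache entry on both nodes
  keycloak-b:
    image: jboss/keycloak
    environment:
      DB_VENDOR: mssql
      DB_ADDR: db
      JGROUPS_DISCOVERY_PROTOCOL: JDBC_PING
      CACHE_OWNERS_COUNT: "2"
  haproxy:
    image: haproxy                            # load balancer in front of both nodes
    ports:
      - "8080:8080"
  db:
    image: mcr.microsoft.com/mssql/server
```

With the default `owners=1`, session entries live on only one node, so losing a node loses its sessions; raising the owners count is what makes the sessions survive on the other node.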

My doubt is the following: with this configuration, if I log in directly to Keycloak node A, I create a session. If the distributed cache is working correctly (and it also distributes sessions), I would expect to already be logged in when I then visit node B with the same user, since the session should be shared. Am I wrong? Because at the moment, if I try to log in to node B, it automatically logs me out of both nodes!

I'm running this test only to verify that the distributed cache is working, since in production all requests will go through the load balancer anyway.

How can I check whether Keycloak's distributed cache is working correctly?

Thanks in advance


Solution

  • You can SSH into one of the servers and check it with the command below (11222 is the default port for the Infinispan management CLI):

    echo describe | bin/cli.sh -c localhost:11222 -f -
    

    It will produce output like the following (check the `cluster_members` and `cluster_members_physical_addresses` fields):

    {
      "name" : "default",
      "version" : "10.0.0-SNAPSHOT",
      "coordinator" : false,
      "cache_configuration_names" : [ "org.infinispan.REPL_ASYNC", "___protobuf_metadata",
        "org.infinispan.DIST_SYNC", "qcache", "org.infinispan.LOCAL", "dist_cache_01",
        "org.infinispan.INVALIDATION_SYNC", "org.infinispan.REPL_SYNC",
        "org.infinispan.SCATTERED_SYNC", "mycache", "org.infinispan.INVALIDATION_ASYNC",
        "mybatch", "org.infinispan.DIST_ASYNC" ],
      "cluster_name" : "cluster",
      "physical_addresses" : "[192.168.1.7:7800]",
      "coordinator_address" : "thundercat-34689",
      "cache_manager_status" : "RUNNING",
      "created_cache_count" : "4",
      "running_cache_count" : "4",
      "node_address" : "thundercat-47082",
      "cluster_members" : [ "thundercat-34689", "thundercat-47082" ],
      "cluster_members_physical_addresses" : [ "10.36.118.25:7801", "192.168.1.7:7800" ],
      "cluster_size" : 2,
      "defined_caches" : [ {
        "name" : "___protobuf_metadata",
        "started" : true
      }, {
        "name" : "mybatch",
        "started" : true
      } ]
    }
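As a quick automated sanity check on that output, you can assert on `cluster_size` with `jq` (assuming `jq` is installed; the heredoc below stands in for live CLI output, which you would capture with `echo describe | bin/cli.sh -c localhost:11222 -f - > describe.json`):

```shell
#!/bin/sh
# Stand-in sample for the live `describe` output; member names are illustrative.
cat > describe.json <<'EOF'
{
  "cluster_name" : "cluster",
  "cluster_size" : 2,
  "cluster_members" : [ "thundercat-34689", "thundercat-47082" ]
}
EOF

EXPECTED=2
SIZE=$(jq -r '.cluster_size' describe.json)
if [ "$SIZE" -eq "$EXPECTED" ]; then
  echo "cluster OK: $SIZE members"
else
  echo "cluster DEGRADED: $SIZE of $EXPECTED members" >&2
  exit 1
fi
```

If the two Keycloak nodes have not actually formed a cluster, each one reports `"cluster_size" : 1` and the check fails, which is exactly the broken-discovery case you want to catch.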