apache-kafka kraft

Kraft cluster configuration ERROR Could not find a 'KafkaServer' or 'controller.KafkaServer' entry in the JAAS configuration


Please forgive the wall of text and bear with me, as I am new to Apache Kafka. I chose to adopt KRaft since ZooKeeper is being deprecated, even though guides for it are more plentiful. I've read/watched a lot of documentation, guides and tutorials to try to understand the components, as I will likely be the sole maintainer of this for the near future.

I am attempting to set up a cluster with 3 controller nodes and 3 broker nodes, as combined (hybrid) mode is currently not recommended. My final goal is an encrypted, secure, ACL-managed cluster, which seems to be a pretty standard use case.

With the current configuration I am getting the error below and I feel I'm missing something obvious.

[2024-01-27 12:26:20,301] ERROR Encountered fatal fault: caught exception (org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'controller.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
        at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:150)
        at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:103)
        at org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:74)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:143)
        at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:107)
        at kafka.network.Processor.<init>(SocketServer.scala:973)
        at kafka.network.Acceptor.newProcessor(SocketServer.scala:879)
        at kafka.network.Acceptor.$anonfun$addProcessors$1(SocketServer.scala:849)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
        at kafka.network.Acceptor.addProcessors(SocketServer.scala:848)
        at kafka.network.DataPlaneAcceptor.configure(SocketServer.scala:523)
        at kafka.network.SocketServer.createDataPlaneAcceptorAndProcessors(SocketServer.scala:251)
        at kafka.network.SocketServer.$anonfun$new$29(SocketServer.scala:172)
        at kafka.network.SocketServer.$anonfun$new$29$adapted(SocketServer.scala:172)
        at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
        at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
        at kafka.network.SocketServer.<init>(SocketServer.scala:172)
        at kafka.server.ControllerServer.startup(ControllerServer.scala:188)
        at kafka.server.KafkaRaftServer.$anonfun$startup$1(KafkaRaftServer.scala:95)
        at kafka.server.KafkaRaftServer.$anonfun$startup$1$adapted(KafkaRaftServer.scala:95)
        at scala.Option.foreach(Option.scala:437)
        at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:95)
        at kafka.Kafka$.main(Kafka.scala:113)
        at kafka.Kafka.main(Kafka.scala)

I was able to move from a default/plaintext setup to TLS encryption. Testing from an off-network device using kafka-python worked, and I could see the data by watching the topic from any of the brokers via bin/kafka-console-consumer.sh.
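For context, the off-network test used kafka-python against the CLIENT (SSL) listener. A minimal sketch of the settings involved; the CA path here is illustrative, not from my actual setup:

```python
# Hypothetical kafka-python settings for the off-network TLS test.
# The CA file path is an assumption; the hostname matches the CLIENT listener.
producer_config = {
    "bootstrap_servers": ["kafbrk01-4.fqdn:9093"],  # CLIENT (SSL) listener
    "security_protocol": "SSL",
    "ssl_cafile": "/opt/kafka/ssl/ca.pem",  # assumed CA certificate path
    # Mirrors the blank ssl.endpoint.identification.algorithm= server setting:
    "ssl_check_hostname": False,
}

# With a reachable cluster this would become:
# from kafka import KafkaProducer
# producer = KafkaProducer(**producer_config)
# producer.send("test-topic", b"hello")
```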

Working TLS-enabled ./config/kraft/controller.properties (irrelevant settings trimmed, but I can provide them if desired):

process.roles=controller
node.id=1
controller.quorum.voters=1@kafcon01-1.internal.local:9092,2@kafcon01-2.internal.local:9092,3@kafcon01-3.internal.local:9092
listeners=CONTROLLER://kafcon01-1.internal.local:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,BROKER:PLAINTEXT,CLIENT:SSL
ssl.keystore.type=PEM
ssl.keystore.location=/opt/kafka/ssl/fqdn_fullstack.pem
ssl.client.auth=requested
ssl.endpoint.identification.algorithm=

Working TLS-enabled ./config/kraft/broker.properties:

process.roles=broker
node.id=4
controller.quorum.voters=1@kafcon01-1.internal.local:9092,2@kafcon01-2.internal.local:9092,3@kafcon01-3.internal.local:9092
listeners=BROKER://kafbrk01-4.internal.local:9092,CLIENT://kafbrk01-4.fqdn:9093
inter.broker.listener.name=BROKER
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,BROKER:PLAINTEXT,CLIENT:SSL
ssl.keystore.type=PEM
ssl.keystore.location=/opt/kafka/ssl/fqdn_fullstack.pem
ssl.client.auth=requested
ssl.endpoint.identification.algorithm=

To enable the security portion of the configuration, I updated both the controller and broker role configurations with the following changes/additions:

listener.security.protocol.map=CONTROLLER:SASL_PLAINTEXT,BROKER:SASL_PLAINTEXT,CLIENT:SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.mechanism.controller.protocol=SCRAM-SHA-512
super.users=User:admin
authorizer.class.name=org.apache.kafka.common.security.scram.ScramLoginModule
listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;

Then, before starting the services, I deleted and reformatted the data directory to create a user 'admin':

/opt/kafka/bin/kafka-storage.sh format --cluster-id <clusterid> --config /opt/kafka/config/kraft/controller.properties --add-scram "SCRAM-SHA-512=[name=admin,password=admin-secret]"

Starting the controller node 1 resulted in the previous error.

Any guidance would be appreciated. I'm looking to get this up and running first and foremost, but I welcome any suggestions for improvements or 'better way' approaches.


Solution

  • Hate to answer my own question, but posting here in case it helps someone else. It turns out I was misunderstanding the naming context for listener.name in the configuration. This piece of the documentation was something I must have missed originally, or my wrong value was left over from other trial-and-error attempts:

    Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix.

    I had originally used the security-protocol value (the right-hand side of each key:value pair) from the mapping:

    listener.security.protocol.map=CONTROLLER:SASL_PLAINTEXT,BROKER:SASL_PLAINTEXT,CLIENT:SASL_SSL
    

    So where originally I had:

    listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config
    

    Which should have been:

    listener.name.controller.scram-sha-512.sasl.jaas.config
    

    I made the same update for the BROKER and CLIENT listeners as well. Afterwards, the service started without the original error.
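    For reference, the corrected entries ended up looking roughly like the following (a sketch based on my listener names above; the credentials are the ones created during the storage format step):

    listener.name.controller.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="admin" \
        password="admin-secret";
    listener.name.broker.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="admin" \
        password="admin-secret";
    listener.name.client.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;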

    I encountered a new error about authentication failures upon starting the next controller in the cluster, but I will investigate and post a new question if necessary.