apache-kafka apache-zookeeper producer-consumer apache-kafka-security

Can't write to a topic with a remote producer in Apache Kafka


So I'm trying to write to a topic in my cluster with a producer that's on a separate device from the broker and ZooKeeper servers. I'm using SASL_SSL authentication, and I believe there is an issue with the authentication, but I don't get any log messages about it, so I could be wrong. When I run this command:

bin/kafka-console-producer.sh --broker-list raspberrypi:9092 --topic my-topic --producer.config config/producer.properties

I get these warnings repeating:

[2023-04-19 12:55:38,819] WARN [Producer clientId=console-producer] Connection to node -1 (raspberrypi/192.168.1.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2023-04-19 12:55:38,819] WARN [Producer clientId=console-producer] Bootstrap broker raspberrypi:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

But when I run the same command on the same machine as the broker, it works just fine.

Here is my configuration:

server.properties

broker.id=0
confluent.http.server.listeners=

listeners=SASL_SSL://raspberrypi:9092,SASL_SSL1://192.168.1.104:9093

advertised.listeners=SASL_SSL://raspberrypi:9092,SASL_SSL1://192.168.1.104:9093



listener.security.protocol.map=SASL_SSL:SASL_SSL,SASL_SSL1:SASL_SSL

zookeeper.connect=raspberrypi:2182


log.dirs=/tmp/data/broker-0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

# Properties for SSL Zookeeper Security between Zookeeper and Broker

zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2

zookeeper.ssl.truststore.location=/home/pi/kafka_2.13-3.4.0/ssl/kafka.broker.truststore.jks
zookeeper.ssl.truststore.password=exjobb123
zookeeper.ssl.keystore.location=/home/pi/kafka_2.13-3.4.0/ssl/kafka.broker.keystore.jks
zookeeper.ssl.keystore.password=exjobb123

zookeeper.set.acl=true

# Properties for SSL Kafka Security between Broker and its clients

ssl.truststore.location=/home/pi/kafka_2.13-3.4.0/ssl/kafka.broker.truststore.jks
ssl.truststore.password=exjobb123
ssl.keystore.location=/home/pi/kafka_2.13-3.4.0/ssl/kafka.broker.keystore.jks
ssl.keystore.password=exjobb123
ssl.key.password=exjobb123
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=required
ssl.protocol=TLSv1.2
#Properties for SASL between a broker and its client

sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka-admin" password="exjobb123";
listener.name.sasl_ssl1.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka-admin" password="exjobb123";
super.users=User:kafka-admin

#Properties for Authorization
authorizer.class.name=kafka.security.authorizer.AclAuthorizer

producer.properties

bootstrap.servers=raspberrypi:9092
compression.type=none
security.protocol=SASL_SSL
ssl.protocol=TLSv1.2
ssl.truststore.location=/home/exjobb/ssl/kafka.producer.truststore.jks
ssl.truststore.password=exjobb123
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="sasl-producer" password="exjobb123";

I tried allowing ports 9092 and 9093 through the firewall, to no avail. As I said, I also tried running the producer on the same machine as the broker, and then it worked fine. The broker and ZooKeeper logs don't say anything either.

Any help is much appreciated, and let me know if you need any additional information that I might have missed.


Solution

  • This tells the broker to listen only on the address that the hostname raspberrypi resolves to, on port 9092, which explains why it worked locally:

    listeners=...://raspberrypi:9092
    

    If you'd used 192.168.1.104:9093 as your broker list instead, it may have worked.

    You don't need both the hostname and the IP listed in listeners and/or advertised.listeners. More importantly, listeners is a bind address: change it to 0.0.0.0:<port> to accept connections on all interfaces, and keep advertised.listeners pointing at an address that remote clients can actually reach (as sketched below).
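
    A minimal sketch of what that could look like, assuming you keep a single SASL_SSL listener on port 9092 and that the raspberrypi hostname resolves correctly from the producer machine:

    # bind on all interfaces so remote producers can connect
    listeners=SASL_SSL://0.0.0.0:9092
    # the address handed back to clients; must be reachable from the producer machine
    advertised.listeners=SASL_SSL://raspberrypi:9092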

    Beyond that, it's unclear how you've allowed the sasl-producer user to access the cluster. Since you've enabled AclAuthorizer and sasl-producer isn't listed in super.users, you'll need to create its SCRAM credentials and grant ACLs with the kafka-configs and kafka-acls tools, I believe (see the sketch below).
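
    For example, something along these lines, assuming config/admin.properties is a hypothetical client config holding your kafka-admin SASL_SSL settings, and my-topic is the topic from your producer command:

    # create SCRAM-SHA-512 credentials for the producer user
    bin/kafka-configs.sh --bootstrap-server raspberrypi:9092 \
      --command-config config/admin.properties \
      --alter --add-config 'SCRAM-SHA-512=[password=exjobb123]' \
      --entity-type users --entity-name sasl-producer

    # allow that user to write to (and describe) the topic
    bin/kafka-acls.sh --bootstrap-server raspberrypi:9092 \
      --command-config config/admin.properties \
      --add --allow-principal User:sasl-producer \
      --operation Write --operation Describe --topic my-topic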

    Sidenote: Redpanda would be less resource-intensive on a Raspberry Pi while still maintaining Kafka compatibility. You also don't technically need ZooKeeper anymore.

    + Never store important data under /tmp; it is typically wiped on reboot.
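
    For example, pointing log.dirs at a persistent location (the exact path here is just an illustration):

    log.dirs=/var/lib/kafka/broker-0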