java, apache-kafka, spring-cloud-stream-binder-kafka

Separate brokers for consumer and producer in Kafka using spring cloud stream


I have successfully implemented Spring Cloud Stream with Kafka. I consume from a topic and then publish to a destination topic.

All is working fine, but I want to separate the brokers for my producer and consumer. Currently the properties are set up so that both read from the same broker.

How can I separate them?

spring:
  cloud:
    config:
    function:
      definition: RiskProcessor1;RiskProcessor2;RiskProcessor3;RiskProcessor4;RiskProcessor5
    stream:
      bindings:
        RiskProcessor1-in-0:
          destination: ******
        RiskProcessor1-out-0:
          destination: *****
        RiskProcessor2-in-0:
          destination: *****
        RiskProcessor2-out-0:
          destination: *****
        RiskProcessor3-in-0:
          destination: *****
        RiskProcessor3-out-0:
          destination: *****
        RiskProcessor4-in-0:
          destination: *****
        RiskProcessor4-out-0:
          destination: *****
        RiskProcessor5-in-0:
          destination: *****
        RiskProcessor5-out-0:
          destination: *****
      kafka:
        streams:
          binder:
            brokers: kaas-int.nam.nsroot.net:9093
            functions:
              RiskProcessor1:
                applicationId: RiskProcessor1_development
              RiskProcessor2:
                applicationId: RiskProcessor2_development
              RiskProcessor3:
                applicationId: RiskProcessor3_development
              RiskProcessor4:
                applicationId: RiskProcessor4_development
              RiskProcessor5:
                applicationId: RiskProcessor5_development
            configuration:
              commit.interval.ms: 1000
              security.protocol: SSL
              default:
                deserialization:
                  exception:
                    handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
              schema:
                registry:
                  url: *****:9081
          default:
            consumer:
              keySerde: ****
              valueSerde: ****
            producer:
              keySerde: ****
              valueSerde: ****

Solution

  • You'd add more partitions to your topic(s).

    Kafka determines partition leaders based on its internal algorithms. Leaders can be different brokers within the same cluster. Clients do not have direct control over this, beyond implementing a partitioner interface to decide which partition messages are delivered to.
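To illustrate that last point, here is a minimal sketch of the kind of partition-selection logic a custom `org.apache.kafka.clients.producer.Partitioner` could use. The class and method names are hypothetical; a real implementation would implement the Kafka `Partitioner` interface, register itself via the `partitioner.class` producer property, and receive the partition count from the client's cluster metadata rather than as a parameter:

```java
import java.util.Arrays;

// Hypothetical sketch: the core routing logic a custom Kafka Partitioner
// might use. It hashes the serialized record key and maps the hash onto
// the topic's partition count.
public class KeyHashPartitioning {

    // Returns a partition index in [0, numPartitions) for the given key bytes.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        if (keyBytes == null) {
            // Keyless records all land on partition 0 in this sketch; the
            // default Kafka partitioner instead spreads them across partitions.
            return 0;
        }
        // floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(Arrays.hashCode(keyBytes), numPartitions);
    }

    public static void main(String[] args) {
        byte[] key = "order-42".getBytes();
        System.out.println("partition=" + partitionFor(key, 6));
    }
}
```

Because the choice is a pure function of the key bytes, records with the same key always land on the same partition, which preserves per-key ordering even after partitions are added (for new keys).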