In Kafka, the partitions of a topic are stored on different brokers for better parallelism and throughput.
However, would a very large number of single-partition topics be similarly load balanced across the brokers in the cluster? I'm thinking, why would Kafka send a very large number of topics to the same broker -- it might if it round-robinned the topics, always starting from broker no. "1", but I don't know whether that is what it actually does.
I have a situation where I cannot parallelize consumption of a topic across multiple consumers -- i.e. I can have only one consumer per topic.
Related question: Is there a max limit to the number of topics in Kafka, single-partitioned or not?
Partitioning is indeed how Kafka achieves parallelism and throughput, and each partition (each replica of it, to be precise) lives entirely on a single broker. Crucially, when a topic is created, the controller assigns its partition replicas to brokers in round-robin order starting from a randomly chosen broker, precisely so that topic creation does not always begin at broker 1.
This answers your main question: a large number of single-partition topics will, on average, be spread evenly across the cluster, because each topic creation independently picks its own random starting broker. (Range and RoundRobin assignment, which you may have seen in the docs, are consumer-group strategies for dividing partitions among consumers; they play no role in broker placement.) If the distribution drifts over time, e.g. after adding brokers, you can rebalance it with kafka-reassign-partitions.sh or Cruise Control.
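If you want to check the placement empirically, here is a minimal sketch -- assuming a broker reachable at localhost:9092, a hypothetical "single-part-" topic prefix, and a recent (3.1+) Java kafka-clients AdminClient -- that creates a batch of single-partition topics and prints which broker ends up leading each one:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

public class PlacementCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: a broker is reachable at this address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Create 100 single-partition topics (replication factor 1)
            // and let the controller decide where each one goes.
            List<NewTopic> topics = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                topics.add(new NewTopic("single-part-" + i, 1, (short) 1));
            }
            admin.createTopics(topics).all().get();

            // Describe the topics and print the leader broker of each
            // topic's only partition (assumes a leader has been elected).
            List<String> names = topics.stream()
                    .map(NewTopic::name)
                    .collect(Collectors.toList());
            for (TopicDescription d :
                    admin.describeTopics(names).allTopicNames().get().values()) {
                System.out.printf("%s -> leader broker %d%n",
                        d.name(), d.partitions().get(0).leader().id());
            }
        }
    }
}
```

On a multi-broker cluster you should see the leader IDs scattered across the brokers rather than piled onto one.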
Your one-consumer-per-topic constraint fits this design: with single-partition topics, any consumer group assigns each topic's only partition to exactly one group member, and a single consumer instance can subscribe to many topics at once, so numerous single-partition topics remain manageable (see the sketch below). The main thing to budget for is cluster capacity relative to your workload.
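To illustrate, here is a minimal sketch -- same assumed localhost:9092 broker and hypothetical "single-part-" prefix -- of one consumer process covering all of those topics via pattern subscription, which also picks up matching topics created later:

```java
import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MultiTopicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "single-part-reader");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Pattern subscription matches every topic with the prefix,
            // including topics created after the consumer starts.
            consumer.subscribe(Pattern.compile("single-part-.*"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("%s[%d]@%d: %s%n",
                            r.topic(), r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}
```

Because every topic has a single partition, the group assignor can never split a topic between two consumers, so your constraint is enforced automatically even if you later run several such processes in the same group for failover.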
As for your related question: Kafka imposes no hard limit on the number of topics. The practical ceiling is on the total number of partitions, since every partition adds metadata, open file handles, and leader-election work. For ZooKeeper-based clusters the usual guidance has been on the order of 4,000 partitions per broker and roughly 200,000 per cluster; KRaft mode raises that ceiling substantially.
Either way, monitor your topic and partition counts as they grow, and scale the cluster before those limits start to bite.
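As a starting point for that monitoring, here is a sketch (same assumed bootstrap address; the class name is illustrative) that tallies brokers, topics, and total partitions so you can track the counts against the rules of thumb above:

```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ClusterFootprint {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Broker count from cluster metadata.
            int brokers = admin.describeCluster().nodes().get().size();

            // All topic names, then sum their partition counts.
            Set<String> names = admin.listTopics().names().get();
            Map<String, TopicDescription> descs =
                    admin.describeTopics(names).allTopicNames().get();
            int partitions = descs.values().stream()
                    .mapToInt(d -> d.partitions().size())
                    .sum();

            System.out.printf("brokers=%d topics=%d partitions=%d (~%d partitions/broker)%n",
                    brokers, names.size(), partitions,
                    partitions / Math.max(brokers, 1));
        }
    }
}
```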