While indexing around 20,000 records (each around 100 KB in size) using Python's bulk API, I saw that the maximum write queue size consumed was 6. Is my following understanding of bulk indexing correct (with the default configuration of 500 actions per chunk and a 100 MB maximum chunk size)?
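For context, here is a rough sketch (not the library's actual implementation) of how a bulk helper might split actions into chunks under those two defaults, 500 actions per chunk and 100 MB per chunk, whichever limit is hit first. With ~100 KB records, 500 actions come to only ~50 MB, so the action-count limit is the binding one and 20,000 records become 40 bulk requests:

```python
# Sketch of bulk chunking logic under the documented defaults
# (chunk_size=500 actions, max_chunk_bytes=100 MB). This is an
# illustration, NOT the elasticsearch-py helper's real code.

def chunk_actions(actions, chunk_size=500, max_chunk_bytes=100 * 1024 * 1024):
    """Yield lists of actions, closing a chunk when either limit is reached."""
    chunk, chunk_bytes = [], 0
    for action in actions:
        # Assume each action is already serialized, so len() is its byte size.
        chunk.append(action)
        chunk_bytes += len(action)
        if len(chunk) >= chunk_size or chunk_bytes >= max_chunk_bytes:
            yield chunk
            chunk, chunk_bytes = [], 0
    if chunk:
        yield chunk

# 20,000 records of ~100 KB each: 500 * 100 KB ≈ 50 MB < 100 MB, so the
# byte limit never triggers and every chunk closes at exactly 500 actions.
records = [b"x" * (100 * 1024) for _ in range(20_000)]
chunks = list(chunk_actions(records))
print(len(chunks))  # → 40
```

So the client sends 40 separate bulk requests for this workload, which is the number that matters when reasoning about the server-side queue below.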
If several bulk operations arrive at the same time, some of them will be handled immediately by the write thread pool (whose size depends on the number of cores on the node receiving the bulk operations), and the ones that can't be handled right away are added to its queue, whose default size is 10,000 (meaning 10,000 indexing operations can be queued before processing). Once that queue fills up, further indexing operations start to be rejected (HTTP 429).