I created a network using Hyperledger Fabric and Docker. The network has 3 orderers and 2 organizations, and I will test it with Hyperledger Caliper. Both the network and Caliper are working properly. Everything is ready up to this point, but one thing comes to mind: the examples on the internet always use a single orderer. Does Hyperledger Fabric have an internal mechanism for distributing load across the 3 orderers? As far as I know, it does not. So, if I want to do load balancing with nginx, how can I set this up?
Using the Fabric Gateway client API (for Fabric v2.4 and later), the client connects only to a Gateway peer. Connections to endorsing peers and ordering service nodes as part of the transaction submit flow are managed by and made from the Gateway peer on behalf of the client. The documentation accompanying the full-stack-transfer-guide sample describes the flow and interactions between nodes, and the Fabric Gateway architecture reference in the main Fabric documentation provides more details on the mechanics.
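To make that concrete, here is a minimal Go sketch of the submit flow using the fabric-gateway client package. The peer address, channel name (`mychannel`), chaincode name (`basic`), and credential file paths are placeholders for your own network; error handling is abbreviated.

```go
package main

import (
	"crypto/x509"
	"fmt"
	"os"

	"github.com/hyperledger/fabric-gateway/pkg/client"
	"github.com/hyperledger/fabric-gateway/pkg/identity"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// must keeps the sketch short; real code should handle errors properly.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// TLS CA certificate of the Gateway peer (file names and addresses are examples).
	certPool := x509.NewCertPool()
	certPool.AppendCertsFromPEM(must(os.ReadFile("org1-tls-ca.pem")))
	creds := credentials.NewClientTLSFromCert(certPool, "peer0.org1.example.com")

	// One gRPC connection to a single Gateway peer; endorsement collection and
	// submission to the ordering service happen from this peer, not the client.
	conn := must(grpc.Dial("peer0.org1.example.com:7051", grpc.WithTransportCredentials(creds)))
	defer conn.Close()

	// Client identity and signing implementation.
	cert := must(identity.CertificateFromPEM(must(os.ReadFile("user1-cert.pem"))))
	id := must(identity.NewX509Identity("Org1MSP", cert))
	key := must(identity.PrivateKeyFromPEM(must(os.ReadFile("user1-key.pem"))))
	sign := must(identity.NewPrivateKeySign(key))

	gw := must(client.Connect(id, client.WithSign(sign), client.WithClientConnection(conn)))
	defer gw.Close()

	contract := gw.GetNetwork("mychannel").GetContract("basic")

	// The Gateway peer collects endorsements, picks an orderer for the envelope,
	// and waits for commit; the client never dials an orderer itself.
	result := must(contract.SubmitTransaction("CreateAsset", "asset1", "blue", "5", "Tom", "100"))
	fmt.Println(string(result))
}
```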
The Gateway service randomly selects ordering service nodes for each transaction invocation to distribute load across the ordering service. The consensus mechanism will also require communication between ordering service nodes.
Since connections to ordering service nodes are made from Gateway peers within the deployed network, and both the Gateway service and consensus mechanism result in distribution of work across ordering service nodes, I don't see any benefit to configuring load-balancing proxy access to ordering service nodes.
You might consider providing a load-balancing endpoint for client connections in front of the Gateway peers for a given organization, to provide fault tolerance and load balancing across Gateway peers. This is mentioned in the full-stack-transfer-guide documentation.
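If you do want a single client-facing endpoint in nginx, something along these lines should work using nginx's gRPC proxying (available since nginx 1.13.10). This is only a sketch under assumptions: the upstream addresses and ports stand in for two Org1 Gateway peers, the certificate paths are placeholders, and your clients would need to trust the proxy's TLS certificate (alternatively, you could pass TLS through at layer 4 with the stream module instead of terminating it here).

```nginx
# Example only: two Org1 Gateway peers behind one gRPC endpoint.
upstream org1_gateway_peers {
    server peer0.org1.example.com:7051;
    server peer1.org1.example.com:7051;
}

server {
    listen 7443 ssl http2;          # gRPC requires HTTP/2 on the listener

    ssl_certificate     /etc/nginx/tls/proxy-cert.pem;
    ssl_certificate_key /etc/nginx/tls/proxy-key.pem;

    location / {
        # grpcs:// because the Gateway peers themselves serve TLS
        grpc_pass grpcs://org1_gateway_peers;
    }
}
```

Clients (including Caliper workers) would then connect to the proxy address instead of a specific Gateway peer, giving you fault tolerance and load distribution across the peers of that organization.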