I've been reading documentation up and down all day and I can't seem to get this to work.
I have an unruly application that opens a new connection for every HTTP request. I would like to improve performance by forcing HTTP multiplexing over long-lived TCP connections.
I tried creating a ServiceEntry and a DestinationRule, but that didn't seem to have any effect: I still saw a large number of TCP connections being made.
I figured Istio would pool the connections via maxRequestsPerConnection from DestinationRule. Is that wrong?
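For reference, my understanding is that maxRequestsPerConnection lives under the HTTP connection pool settings of a DestinationRule, roughly like this (untested sketch; the name and host are just placeholders):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: example
spec:
  host: example.com
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 0  # my understanding: 0 means unlimited requests per connection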
In general, though, I would just like the Envoy sidecar to pool and multiplex all egress connections for this service, no matter the destination.

I imagined I needed:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api
spec:
  host: google.com
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
        connectTimeout: 30ms
        tcpKeepalive:
          time: 7200s
          interval: 75s
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: api
spec:
  hosts:
  - google.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
With both applied, I checked the Envoy cluster config generated for the pod:

istioctl -i istio proxy-config cluster api.namespace
...
google.com 443 - outbound STRICT_DNS api.namespace
...
Dumping the full cluster definition for that host shows the settings were picked up:

istioctl -i istio proxy-config cluster api.namespace --fqdn google.com -o json
"circuitBreakers": {
"thresholds": [
{
"maxConnections": 100,
"maxPendingRequests": 4294967295,
"maxRequests": 4294967295,
"maxRetries": 4294967295,
"trackRemaining": true
}
]
},
"upstreamConnectionOptions": {
"tcpKeepalive": {
"keepaliveTime": 7200,
"keepaliveInterval": 75
}
},
OK, so I think what I discovered above in my question is that maxConnections does not cause the Envoy proxy to create a pool of 100 connections. Instead, it sets a circuit-breaking threshold that prevents the application from opening more than 100 connections.
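One way I think this can be double-checked (untested; the pod name here is hypothetical, but the sidecar stats endpoint and the Envoy counter names are standard) is to watch the upstream connection counters for the cluster while sending traffic:

kubectl exec api-pod -c istio-proxy -- pilot-agent request GET stats \
  | grep 'outbound|443||google.com' \
  | grep -E 'upstream_cx_active|upstream_cx_total'

If connections were actually being reused, I'd expect upstream_cx_total to stop growing while traffic keeps flowing; in my case it just kept climbing.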
Perhaps I need to configure an HTTP connection pool instead?
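If so, my guess is something along these lines (unverified; the field names are taken from the DestinationRule ConnectionPoolSettings reference as I understand it):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api
spec:
  host: google.com
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # upper bound / circuit breaker, as discovered above
      http:
        http2MaxRequests: 1000       # max concurrent requests to the host
        maxRequestsPerConnection: 0  # 0 = unlimited, so connections stay open and get reused
        idleTimeout: 300s            # keep idle connections around for reuse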