node.js, kubernetes, network-programming, tcp, netstat

How do I find the number of HTTP(S) (TCP) connections opened by my Node.js micro-service (which uses axios with keepAlive) in a GKE Kubernetes environment?


Problem Description/Context

I have a Node.js application that uses axios to make HTTP requests (outbound REST API calls) against a web service (say https://any.example.restapis.com). These HTTP requests occasionally took more than a minute of latency. After some debugging, we set the httpsAgent property to keep the HTTP connections alive (persistent), which did the trick: the APIs now take less than a second and the application works fine. My understanding is that with this property the TCP connections underlying the HTTP calls are now persistent, and the agent opens multiple socket connections against the web service, keeping them alive according to its default configuration and opening additional TCP connections as load requires; in other words, it maintains a pool of connections.

httpsAgent: new https.Agent({ keepAlive: true }),
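
For context, here is a minimal sketch of how the agent is wired into an axios instance; the base URL, maxSockets value, and endpoint are illustrative placeholders rather than my actual configuration:

const https = require('https');
const axios = require('axios');

// One shared agent = one socket pool reused across requests to the same host.
const client = axios.create({
  baseURL: 'https://any.example.restapis.com', // placeholder target web service
  httpsAgent: new https.Agent({
    keepAlive: true, // reuse sockets instead of opening a new one per request
    maxSockets: 50,  // illustrative cap on concurrent sockets per host (Node's default is Infinity)
  }),
});

// Every call made through `client` now draws from the agent's socket pool.
// await client.get('/some/endpoint');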

Question

We are not yet sending full traffic to the micro-service (only about 1%), so I would like to understand in detail what is happening underneath to make sure the fix is indeed complete and the micro-service will scale to full traffic.

So, can anyone tell me how, after exec/SSH-ing into the pod's container, I can check whether my Node.js application is really opening a number of TCP (socket) connections against the web service, rather than just keeping a single TCP connection alive? I tried a netstat command like the one below, but I was not able to make the connection between its output and my process. It would be great if someone could show me how to check the number of TCP connections made by my micro-service.

# example command -
# looking at commands like netstat and lsof, since they may (hopefully!) give me the details I need
netstat -atp | grep <my process ID>
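
I am also looking at ss and lsof (if they exist in the container image), since they can filter by process and connection state; a sketch of what I have in mind, where <PID> and <IP address> are placeholders:

# established TCP connections, annotated with the owning process
ss -tnp state established | grep node

# only established TCP sockets opened by a specific PID
lsof -nP -p <PID> -a -iTCP -sTCP:ESTABLISHED

# count established connections to the web service's resolved IP
ss -tn state established dst <IP address> | tail -n +2 | wc -l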

Solution

  • The code is indeed working and opening multiple TCP connections with the axios httpsAgent configured with keepAlive.

    The way I tested it:

    1. Ran the service under heavy load.
    2. Used nslookup to find the IP address of the web service's domain.
    3. Ran the netstat command below inside the pod's container, wrapped in watch, grepping for the node process (PID/program name, e.g. 18/node) and the resolved IP address.

    watch 'kubectl exec -n my-namespace my-pod -c pod-container -- netstat -atlpn | grep "18/node" | grep "<IP address from nslookup>"'
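
    To turn that listing into a quick count, the same exec can be piped through wc -l; a sketch under the same assumptions (namespace, pod, container, and IP are placeholders), which is usually enough when the container runs only the Node.js process:

    # count ESTABLISHED sockets held against the web service's IP
    watch 'kubectl exec -n my-namespace my-pod -c pod-container -- netstat -atn | grep ESTABLISHED | grep "<IP address from nslookup>" | wc -l'

    Under load the count should rise above one and then stabilize, which is the signature of a pool of persistent connections rather than a single reused socket.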