Each t2.micro node should be able to run 4 pods, according to this article and the output of kubectl get nodes -o yaml | grep pods.
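For reference, the lines that grep picks out per node should look roughly like this (the value simply reflects the 4-pod limit claimed above, not a paste from my cluster):

    pods: "4"
    pods: "4"

i.e. the node's capacity and allocatable pod counts.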
But I have two nodes and can only launch 2 pods; the 3rd pod gets stuck with the following error message.
Could it be that the application is using too many resources and that is why no more pods will launch? If that were the case, though, I would expect the error to indicate insufficient CPU or memory.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 33s (x2 over 33s) default-scheduler 0/2 nodes are available: 2 Too many pods.
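One way to double-check the resource theory would be to look at what each node reports as allocated, for example (the node name is a placeholder):

    kubectl describe node <node-name>

which prints an "Allocated resources" summary and a "Non-terminated Pods" list for that node.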
According to the AWS documentation "IP addresses per network interface per instance type", the t2.micro only has 2 Network Interfaces and 2 IPv4 addresses per interface. So you are right: only 4 IP addresses in total.
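For what it's worth, this lines up with the general max-pods formula AWS uses for the VPC CNI (quoted here as a general rule, not something specific to your cluster):

    max pods = ENIs × (IPv4 addresses per ENI − 1) + 2
             = 2 × (2 − 1) + 2
             = 4

so a 4-pod ceiling per t2.micro node is expected.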
But EKS deploys system pods such as CoreDNS and the kube-proxy DaemonSet, so some IP addresses on each node are already allocated.
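To see exactly what is already counted against a node's pod limit, a command along these lines (the node name is a placeholder) lists every pod scheduled on it, system pods included:

    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>

Every pod in that list, DaemonSet pods included, occupies one of the 4 slots.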