I'm running a number of short-lived Docker containers, each of which does some memory-intensive batch processing. I'm looking for a way to find the peak memory usage each container hit while it was running; knowing this will let me optimize the infrastructure these containers run on for future runs.
One naive way to achieve this is to redirect the streaming output of docker stats to a file:

docker stats container_id > stats.log

However, this requires running a process for each container and then sorting through very verbose logs to find the peak usage. I'm wondering if there's an easier way.
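For what it's worth, here's a minimal sketch of what that naive approach looks like as a polling loop (the container name container_id, the one-second interval, and the stats.log path are all placeholders); it keeps only the memory column, so the log stays small and the peak can be read off with a sort:

#!/bin/sh
# Sample the container's memory usage once per second until it exits,
# then report the largest sample seen. "container_id" is a placeholder.
cid=container_id
while [ "$(docker inspect -f '{{.State.Running}}' "$cid" 2>/dev/null)" = "true" ]; do
    # --no-stream prints a single sample; .MemUsage looks like "80.1MiB / 5.82GiB"
    docker stats --no-stream --format '{{.MemUsage}}' "$cid" \
        | awk '{print $1}' >> stats.log
    sleep 1
done
sort -h stats.log | tail -n 1   # human-numeric sort puts the peak last

One-second polling can also miss short-lived spikes entirely, which is part of why I'm looking for something better.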
If you are interested in the process with PID=1 inside the container, you can find the PID this process has on the host and then use:

grep VmPeak /proc/$PID/status

Example with a mongo container:
This container runs a single process (the top instance below is just the command we exec'd):
$ docker container exec -it mongo top -bn 1
top - 10:04:51 up 32 min,  0 users,  load average: 0.36, 0.52, 0.55
Tasks:   2 total,   1 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.1 us,  2.0 sy,  0.3 ni, 90.6 id,  0.7 wa,  0.0 hi,  0.3 si,  0.0 st
KiB Mem :  6103572 total,  2642744 free,  1352032 used,  2108796 buff/cache
KiB Swap:  1942896 total,  1942896 free,        0 used.  4277928 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
   64 root      20   0   38624   3116   2724 R   6.7  0.1   0:00.89 top
    1 mongodb   20   0 1094540  80100  35916 S   0.0  1.3   0:22.51 mongod
To get the PID of this process from the host's perspective:
$ docker inspect -f '{{.State.Pid}}' mongo
2532
and finally:
$ grep VmPeak /proc/2532/status
VmPeak: 1094540 kB
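Putting the two steps together, here is a small wrapper sketch (the script name peakmem.sh and the positional argument are my own; the commands are exactly the two shown above):

#!/bin/sh
# Usage: ./peakmem.sh <container-name-or-id>
# Resolve the host PID of the container's PID 1, then read that
# process's peak virtual memory size from /proc.
pid=$(docker inspect -f '{{.State.Pid}}' "$1") || exit 1
grep VmPeak "/proc/$pid/status"

To cover several containers at once, the same two lines can be run in a loop over docker ps -q.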