I have a Docker container that reports this resource usage when running locally:
docker run -i --rm -p 8080:8080 my-application
As you can see, the container uses 10.6MiB:
docker ps -q | xargs docker stats --no-stream
CONTAINER ID   NAME                 CPU %   MEM USAGE / LIMIT    MEM %   NET I/O           BLOCK I/O   PIDS
b73afe5ee771   mystifying_neumann   0.00%   10.6MiB / 7.777GiB   0.13%   11.7kB / 2.38kB   0B / 0B     21
Now I run that same container in OpenShift, setting the following memory limits:
resources:
  limits:
    memory: 64Mi
  requests:
    memory: 64Mi
When the pod starts I would expect ~11MiB used out of a total of 64MiB, but the container is reported as using 53MiB! Why the difference?
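For reference, the 53MiB figure is what OpenShift's metrics report for the pod; assuming cluster metrics (Heapster in 3.x) are installed, something like this shows the same number from the CLI (the pod name here is made up):

oc adm top pod my-application-1-abcde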
I finally found the reason for this difference in these two references:
https://github.com/openshift/origin-web-console/issues/1315
https://access.redhat.com/solutions/3639191
Summing up: Docker reports memory usage as the sum of several components, such as rss and cache:
https://docs.docker.com/config/containers/runmetrics/#metrics-from-cgroups-memory-cpu-block-io
cache: The amount of memory used by the processes of this control group that can be associated precisely with a block on a block device. When you read from and write to files on disk, this amount increases. This is the case if you use “conventional” I/O (open, read, write syscalls) as well as mapped files (with mmap). It also accounts for the memory used by tmpfs mounts, though the reasons are unclear.
rss: The amount of memory that doesn’t correspond to anything on disk: stacks, heaps, and anonymous memory maps.
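You can see this split for yourself by reading the cgroup's memory accounting from inside the container. A minimal sketch, assuming cgroups v1 (the default on OpenShift 3.x nodes):

# total usage as Docker/cgroups see it (rss + cache + ...)
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
# the per-component breakdown quoted above
grep -E '^(total_cache|total_rss|total_inactive_file) ' /sys/fs/cgroup/memory/memory.stat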
OpenShift 3.x reads that information through Heapster and cannot tell the two types of memory apart.
If you check docker stats for the same container running inside OpenShift, you will find the expected (lower) value.
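One way to verify this, assuming the node runs the Docker runtime (the default in 3.x) and using the same made-up pod name as above:

# find the node the pod is scheduled on
oc get pod my-application-1-abcde -o wide
# then, on that node, ask Docker directly (the kubelet labels containers with the pod name)
docker stats --no-stream $(docker ps -q --filter "label=io.kubernetes.pod.name=my-application-1-abcde")

The figure shown there should be the lower value described above.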