I have an EC2 instance running Docker on Ubuntu 24.04, with a 30 GB EBS (gp3) root volume.
When I log into it I see

```
mkdtemp: private socket dir: No space left on device
```

because the container has done too much logging and eaten all the disk space.
`df -h` says this:

```
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G   29G     0 100% /
tmpfs           479M     0  479M   0% /dev/shm
tmpfs           192M   18M  174M  10% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/xvda16     881M  137M  683M  17% /boot
/dev/xvda15     105M  6.1M   99M   6% /boot/efi
tmpfs            96M   12K   96M   1% /run/user/1000
```
...and if I run `docker exec -it mycontainer /bin/bash` I get:

```
failed to create runc console socket: mkdir /tmp/pty3257931603: no space left on device: unknown
```
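A stopgap that can usually free enough space to get a shell back is truncating the runaway container logs in place. This is a sketch assuming the default `json-file` logging driver; the path below is its standard location, but verify with `docker inspect` before truncating:

```bash
# Confirm where the container's log actually lives (json-file driver)
docker inspect --format '{{.LogPath}}' mycontainer

# Truncate all json-file logs in place; the glob must expand as root
sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'
```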
But when I do `sudo du -cha --max-depth=1 /var/lib/docker | grep -E "M|G"` I see:

```
3.4M    /var/lib/docker/buildkit
44G     /var/lib/docker/overlay2
1.4M    /var/lib/docker/image
44G     /var/lib/docker
44G     total
```
I need to increase the volume size, but first I need to understand what's going on with the `du` output. Why does the container appear to be using 44 GB of a 30 GB volume?
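For the resize itself, here is a minimal sketch of growing a gp3 root volume in place. The volume ID and new size are placeholders, the partition number is an assumption based on the xvda layout above (check `lsblk` first), and `resize2fs` assumes the root filesystem is ext4, the Ubuntu cloud-image default (check `df -T`):

```bash
# From anywhere with AWS credentials: grow the EBS volume (placeholder ID/size)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 60

# On the instance: grow the root partition, then the filesystem
sudo growpart /dev/xvda 1      # partition 1 assumed to be /; verify with lsblk
sudo resize2fs /dev/xvda1      # ext4; use xfs_growfs / for an XFS root
```

EBS volumes can be modified while attached, so no reboot is needed.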
Edit: I think this was probably a case of the same data being counted twice, per BMitch's answer. I don't know how Docker's overlay data works, but inside the container it was showing ~22 GB used, half of the size reported by `du`.
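To keep the logs from filling the disk again, the `json-file` driver supports rotation options in /etc/docker/daemon.json. A sketch with assumed size limits; note that `tee` overwrites an existing daemon.json (merge by hand if you already have one), and restarting the daemon restarts containers unless live-restore is enabled:

```bash
# Cap each container at 3 rotated log files of 10 MB (limits are assumptions)
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```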
The `du` command is very different from `df`. The `du` command looks at individual files, while the `df` command reports on the underlying filesystem. Possible discrepancies between the two include:

- mounts below the directory passed to `du`, which can allow contents from another filesystem to be counted, while `df` only reports on a single filesystem
- overlay filesystems, where the same file is counted once in its layer directory and again under each container's merged mount, so `du` can double count data that `df` sees only once
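The double counting is easy to reproduce with a toy overlay mount, the same mechanism Docker's overlay2 driver uses to present container filesystems. A sketch; the paths are illustrative, and it needs root plus overlayfs support, which Ubuntu kernels include:

```bash
# One 100 MB file in the lower layer...
mkdir -p /tmp/ovl/{lower,upper,work,merged}
dd if=/dev/zero of=/tmp/ovl/lower/big bs=1M count=100

# ...exposed a second time through the merged overlay mount
sudo mount -t overlay overlay \
  -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
  /tmp/ovl/merged

du -sh /tmp/ovl    # ~200M: big is counted in lower/ and again via merged/
df -h /tmp         # the backing filesystem itself only lost ~100M
sudo umount /tmp/ovl/merged
```

The same effect, at container scale, is why `du` on /var/lib/docker can report more than the volume's total size.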