Tags: docker, disk-space, device-mapper

Why are Docker images eating up disk space on a drive that Docker is not configured to use?


I have set up Docker and configured it to store its system data on a completely separate block device:

[root@blink1 /]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

other_args="-H tcp://0.0.0.0:9367 -H unix:///var/run/docker.sock -g /disk1/docker"

Note that /disk1 is on a completely different hard drive, /dev/xvdi:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  5.1G  2.6G  67% /
devtmpfs        1.9G  108K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/xvdi        20G  5.3G   15G  27% /disk1
/dev/dm-1       9.8G  1.7G  7.6G  18% /disk1/docker/devicemapper/mnt/bb6c540bae25aaf01aedf56ff61ffed8c6ae41aa9bd06122d440c6053e3486bf
/dev/dm-2       9.8G  1.7G  7.7G  18% /disk1/docker/devicemapper/mnt/c85f756c59a5e1d260c3cdb473f3f4d9e55ac568967abe190eeaf9c4087afeac

The problem is that as I continue to download Docker images and run containers, the other hard drive, /dev/xvda1, also fills up.

I can verify this by removing some Docker images: after I remove them, /dev/xvda1 has some extra free space again.
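
To pin down what exactly is growing on /dev/xvda1, a du sweep restricted to that one filesystem can help (a generic diagnostic, nothing Docker-specific; -x keeps du from descending into /disk1 and the devicemapper mounts):

# largest directories on the root filesystem only
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 15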

Am I missing something?

For reference, my docker info output:

[root@blink1 /]# docker info
Containers: 2
Images: 42
Storage Driver: devicemapper
 Pool Name: docker-202:1-275421-pool
 Pool Blocksize: 64 Kb
 Data file: /disk1/docker/devicemapper/devicemapper/data
 Metadata file: /disk1/docker/devicemapper/devicemapper/metadata
 Data Space Used: 3054.4 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 4.7 Mb
 Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.14.20-20.44.amzn1.x86_64
Operating System: Amazon Linux AMI 2014.09
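
Side note: with the devicemapper driver in loopback mode, the data and metadata files above are sparse, which is why Data Space Total reads ~100 GB even though /disk1 is only 20 GB. Comparing apparent size with allocated blocks shows the real footprint:

ls -lh /disk1/docker/devicemapper/devicemapper/data   # apparent (sparse) size
du -h  /disk1/docker/devicemapper/devicemapper/data   # blocks actually allocated on /disk1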

Solution

  • Note that this answer is about how to recover space when Docker has lost track of it, so that no docker command will get it back. If you're instead just wondering how to recover space that is currently in use by Docker, see "How to remove old and unused Docker images [and containers]".

    It's a kernel problem with devicemapper that affects the RedHat family of operating systems (RedHat, Fedora, CentOS, and Amazon Linux): deleted containers don't free up mapped disk space. This means that on the affected OSes you'll slowly run out of space as you start and restart containers.
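
    You can watch this happen by comparing the pool's view of usage with the filesystem's view around a container deletion. A rough check (the container name below is hypothetical):

    docker info | grep 'Data Space Used'   # pool usage before
    docker rm my_container                 # remove a stopped container
    docker info | grep 'Data Space Used'   # pool usage drops here...
    df -h /disk1                           # ...but the space is never returned to the filesystem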

    The Docker project is aware of this, and a fix has supposedly landed in the upstream kernel (https://github.com/docker/docker/issues/3182).

    A work-around of sorts is to give Docker its own volume to write to ("When Docker eats up your disk space"). This doesn't actually stop it from eating space, just keeps it from taking down other parts of your system when it does.
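
    A minimal sketch of that setup, assuming a spare volume attached as /dev/xvdj (a hypothetical device name) and the default data root of /var/lib/docker:

    sudo mkfs -t ext4 /dev/xvdj                                         # format the spare volume
    sudo mkdir -p /var/lib/docker
    echo '/dev/xvdj /var/lib/docker ext4 defaults 0 0' | sudo tee -a /etc/fstab
    sudo mount /var/lib/docker                                          # must be mounted before the daemon starts

    (Equivalently, point the daemon at the volume with -g, as in the question's /etc/sysconfig/docker.)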

    My solution was to uninstall Docker, delete all of its files, and then reinstall:

    sudo yum remove docker          # uninstall the package
    sudo rm -rf /var/lib/docker     # wipe all Docker data, including the leaked devicemapper files
                                    # (if you moved the data root with -g, wipe that path instead, e.g. /disk1/docker)
    sudo yum install docker         # reinstall for a clean slate
    

    This got my space back, but it's not much different than just launching a replacement instance. I have not found a nicer solution.
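
    If the daemon is still responsive before the wipe, you may be able to carry important images across the reinstall with docker save and docker load (the image name below is just an example):

    docker save -o myimage.tar myrepo/myimage:latest   # export before removing Docker
    # ...yum remove / rm -rf / yum install as above...
    docker load -i myimage.tar                         # re-import after reinstalling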