I have a zpool 'zoo' on a mirror of two 1 TB drives:
[develop@silversurfer /mnt/zoo]$ zfs list -r zoo
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zoo                   829G  69.9G   996M  /mnt/zoo
zoo/beyond            807G  69.9G   114G  /mnt/zoo/beyond
zoo/officetemplates  48.8M  69.9G  46.2M  /mnt/zoo/officetemplates
zoo/overflow          152K  89.9G    96K  /mnt/zoo/overflow
'overflow' is an empty dataset with a guaranteed minimum size (currently 20 GB). The other datasets have no fixed sizes or quotas. I ran into two disk-full errors in the past weeks, which I resolved by shrinking 'overflow'.
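For reference, a minimal sketch of how such an emergency buffer can be adjusted, assuming the 'minimum size' is implemented with the reservation property (it could equally be refreservation; the 10G value below is just an example):
zfs get reservation,refreservation zoo/overflow
sudo zfs set reservation=10G zoo/overflow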
As the listing above shows, the zpool allocates 829 GB, but its contents only add up to 117 GB.
'du' confirms the smaller size:
[develop@silversurfer /mnt/zoo]$ ls -al
total 75
drwxrwxrwx   7 root     wheel    6 Nov 18 14:46 .
drwxr-xr-x   5 root     wheel    5 Nov 12  2021 ..
dr-xr-xr-x+  3 root     wheel    3 Sep  3  2019 .zfs
drwxrwxr-x   7 develop  2B       6 Nov 17  2021 beyond
drwxrwx---   6 www      2B      75 May 23 17:30 officetemplates
drwxrwxrwx   2 root     wheel    2 Nov  1 10:32 overflow
drwxrwxr-x   4 popeye   2B       8 Nov 13 10:37 scans
[develop@silversurfer /mnt/zoo]$ sudo du -hs /mnt/zoo
117G /mnt/zoo
[develop@silversurfer /mnt/zoo]$ sudo du -hs /mnt/zoo/.zfs
4.9G /mnt/zoo/.zfs
[develop@silversurfer /mnt/zoo]$ sudo du -hs /mnt/zoo/beyond
116G /mnt/zoo/beyond
[develop@silversurfer /mnt/zoo]$ sudo du -hs /mnt/zoo/officetemplates
45M /mnt/zoo/officetemplates
[develop@silversurfer /mnt/zoo]$ sudo du -hs /mnt/zoo/scans
994M /mnt/zoo/scans
[develop@silversurfer /mnt/zoo]$ sudo du -hs /mnt/zoo/overflow
512B /mnt/zoo/overflow
What is eating up the space between reported 829 GB and 117 GB? How can I reclaim this space?
Edit: it may be worth noting that the pool hosts four FreeBSD (12.1) jails. Also, please note that the question is not about differences between different ways of determining used/available space, but about the space discrepancy within one and the same command.
I finally found an answer to this problem. The size difference came from snapshots. Use the -o space flag of zfs list:
[develop@silversurfer ~]$ zfs list -r -o space zoo
NAME                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zoo                  82.2G   817G      196K    996M              0       816G
zoo/beyond           82.2G   795G      681G    114G              0          0
zoo/officetemplates  82.2G  48.8M     2.60M   46.2M              0          0
zoo/overflow          102G   152K       56K     96K              0          0
Wow, 681 GB of snapshot space for 117 GB of payload data!
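The USEDSNAP/USEDDS/USEDREFRESERV/USEDCHILD columns correspond to ZFS's usedby* properties, so the same breakdown can also be read per dataset with zfs get, for example:
zfs get usedbysnapshots,usedbydataset,usedbyrefreservation,usedbychildren zoo/beyond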
List the snapshots and consumed space for each:
zfs list -r -o space -t snapshot zoo
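To spot the biggest offenders first, a sorted listing helps; this is a sketch, and the column choice and head limit are my own additions, not part of the original command:
zfs list -H -r -t snapshot -o name,used -S used zoo | head -20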
It turned out there were 1000+ snapshots, automatically taken over a period of more than three years. A snapshot may use little space but sizes do add up.
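A quick way to count them, using -H to suppress the header line:
zfs list -H -r -t snapshot -o name zoo | wc -l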
I piped the snapshot names into a file and, after checking and editing the file, used it for batch-destroying the snapshots (-H suppresses the header line so only snapshot names end up in the file):
zfs list -H -r -o name -t snapshot zoo > myfile
for x in `cat myfile`; do echo "$x"; sudo zfs destroy "$x"; done
Note that looping over a sudo command is generally asking for trouble. Use at your own risk.
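A gentler alternative, sketched here on the assumption that the unwanted snapshots of a dataset form one contiguous range (oldestsnap and newestsnap are placeholder names), is zfs destroy's dry-run mode combined with the % range syntax:
sudo zfs destroy -nv zoo/beyond@oldestsnap%newestsnap    # -n: dry run, -v: show what would be destroyed and freed
sudo zfs destroy -v zoo/beyond@oldestsnap%newestsnap     # drop -n once the output looks right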
While the destroy loop is running I can see the disk usage slowly drop:
[develop@silversurfer ~]$ zfs list -r -o space zoo
NAME                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zoo                   680G   219G      196K    996M              0       218G
zoo/beyond            680G   198G     84.1G    114G              0          0
zoo/officetemplates   680G  48.8M     2.60M   46.2M              0          0
zoo/overflow          700G   152K       56K     96K              0          0
Destroying ~1000 snapshots took me about 10 minutes; your mileage may vary.
Hope this helps anybody out there facing a mysteriously filled disk.