I'm trying to find the largest files on my 25GB Linux server, which has been steadily running out of space and is now 99.5% full. I assumed it was log files, since I wasn't doing anything with the sites and the database sizes are small and static.
Log files were 100MB or so, nothing major.
I've tried the command found here (https://www.cyberciti.biz/faq/linux-find-largest-file-in-directory-recursively-using-find-du/) to recursively find the biggest files, but it's not giving me anything useful:
root@127:~# du -a / | sort -n -r | head -n 20
du: cannot access '/proc/12377/task/12377/fd/4': No such file or directory
du: cannot access '/proc/12377/task/12377/fdinfo/4': No such file or directory
du: cannot access '/proc/12377/fd/3': No such file or directory
du: cannot access '/proc/12377/fdinfo/3': No such file or directory
sort: write failed: /tmp/sortnI7YzR: No space left on device
I'm a Linux novice, so I would appreciate any help.
You don't need to search /proc and /dev: they are 'virtual' filesystems, so there is nothing useful to look for there (just a huge waste of time).
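As an aside, if you just want a quick per-file overview with du, a minimal sketch assuming GNU coreutils: the -x flag keeps du on the root filesystem, which skips virtual mounts such as /proc and /dev automatically, and redirecting stderr hides the "cannot access" noise:
# -x stays on one filesystem; 2>/dev/null silences the /proc errors
du -ax / 2>/dev/null | sort -n -r | head -n 20
The "No space left on device" error in your output is sort filling the nearly full disk with its temporary files; GNU sort's -T option lets you point them at any writable directory that still has room (the path below is only a placeholder):
# -T redirects sort's temp files (placeholder path: use any dir with free space)
du -ax / 2>/dev/null | sort -n -r -T /path/with/space | head -n 20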
Since you seem to be looking for regular files, I would suggest using find:
find / \( -path /proc -o -path /dev \) -prune -o -type f -size +100M -exec ls -s1 {} \; 2>/dev/null | sort -n -r | head -n 20
Note the -size +100M option: it tells find to report only files larger than 100M, on the assumption that you are looking for big files. You can remove it, but the command will take much longer and produce far more output.
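For what it's worth, a variant assuming GNU find (standard on most Linux distributions): -printf prints each file's size in bytes directly, so find doesn't have to spawn ls once per match, which is noticeably faster on a large tree:
# Same prune logic, but print "size-in-bytes path" without running ls per file
find / \( -path /proc -o -path /dev \) -prune -o -type f -size +100M -printf '%s %p\n' 2>/dev/null | sort -n -r | head -n 20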