Is there any way to delete a directory that has more than 200 million files?

I have a directory on my server that has grown incredibly huge with a high
volume of files. The files are output files generated by the print server.
I'm having problems removing the files, or even the entire directory:
CPU usage hits its peak whenever I execute the 'rm' command to remove
the files. I also can't do a 'du -sk' of the directory because it just hangs.

Below is the status of the directory when I do an 'ls' from the parent directory:

drwxr-sr-x 2 appltest oradba 204728832 May 30 15:10 log_to_removed

Can anyone advise the best way to remove or scrap this directory, which is taking up space on my server?

Thks,
Patrina

Use find to list the files to stdout and then use xargs to execute rm in small chunks:

find . -type f -print | xargs -n 20 rm

This will delete all the files in the current directory in gulps of 20 at a time.
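
If your find and xargs support the null-separated options (GNU and most modern versions do; treat this as a sketch to verify on your system, and swap in the real path for the hypothetical one), a variant like this copes better with odd filenames and larger batches:

cd /path/to/log_to_removed            # hypothetical path; run this inside the problem directory
find . -type f -print0 | xargs -0 -n 100 rm -f

Some versions of find also have a built-in -delete action (find . -type f -delete), which avoids spawning rm at all; check your man page before relying on it.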

I just executed the command you gave me. Am I supposed to see the listing of files on stdout? The cursor just blinks on the screen; how do I confirm that the rm is working?

It's not supposed to print anything on the screen; it's spending all its time doing its job. :)

use "top" or "ps" to look at process status

You should see "find" and "xargs" working like mad and "rm" with its PID counting up as each "rm" is run with a group of 20 files.

You can also do an "ls -ld" of the directory to keep an eye on it, but don't expect the size reported there to shrink as the files are deleted; on most filesystems the directory file itself stays large even after it has been emptied.
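
A couple of checks that don't require listing the huge directory (a sketch; the inode option to df varies by platform, so confirm the flag on your system):

ps -ef | grep xargs          # the find | xargs pipeline should still be running
df -i .                      # free inode count climbs as files are unlinked (GNU/Linux-style df)
                             # on Solaris the rough equivalent is: df -o i .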

Note: you should not only delete the files but also delete and recreate the directory itself, because the directory file stays large even once it is empty and lookups in it remain slow.
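
Roughly like this once it is empty (a sketch; the owner, group, and setgid mode are read off the ls output above, so double-check them on your system):

cd ..                                  # step out of the directory first
rmdir log_to_removed                   # succeeds only once it is truly empty
mkdir log_to_removed
chown appltest:oradba log_to_removed   # restore the original owner and group
chmod 2755 log_to_removed              # drwxr-sr-x with the setgid bit set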