xargs

I have a directory with many files (close to 4M).

$ ls -la
total 76392
drwxr-xr-x   10 oracle   dba             512 Jun 06 14:39 .
drwxr-xr-x   11 oracle   dba             512 Dec 20 13:21 ..
drwxr-xr-x    2 oracle   dba        39074816 Jun 15 14:07 ad

I am trying to delete them using the command below, but it is not deleting anything:

ls|xargs -L 1000 rm

How can we delete a large number of files on AIX?

Thanks

Do you mean 4 million (4M) files? The performance on a directory with that many files will be terrible, to say the least. I'm surprised you did not run out of inodes...

There is nothing wrong with your command. It probably takes 60 seconds to locate one file name in the directory and delete it.
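
One hedged way to confirm that files really are being deleted: AIX df reports inode usage (Iused / %Iused) by default, so watch it fall from another session. The path below is a placeholder.

while sleep 60
do
        df /path/to/ad          # Iused should tick down, very slowly
done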

Try this, and check back on Monday... assuming your directory name is ad.

cd /path/to/ad
cd ..                   # step up to the parent directory
rm -r ./ad              # remove the directory and everything in it (expect hours)
mkdir ./ad              # recreate it empty

You need to get rid of the directory file as well, then recreate it: on most traditional filesystems the directory file itself never shrinks, even once every entry in it has been unlinked.

What version of AIX? It matters. Let's assume it's not a very new one.
Seeding the command in post #1 with ls was unlikely to work, because ls always tries to sort the file list, and that sort is likely to fail long before the xargs does (on older AIX, xargs then hands the shell a command line which is way too long).
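
The natural fix is a sketch along these lines: seed xargs from find instead, since find emits names in directory order and never sorts them. This assumes the placeholder path below and that no filename contains whitespace.

find /path/to/ad -type f -print | xargs -L 1000 rm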

Assuming no subdirectories and an older AIX: if the rm -r keels over for lack of memory or breaks the kernel, the penultimate last resort is:

find /path/to/ad -type f -print | while IFS= read -r filename
do
        rm "${filename}"        # one rm per file: slow, but uses almost no memory
done

This will not be quick, but it will get there in the end.
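
Where the local find supports it (POSIX finds from 2001 on do; a very old AIX may not), the same job can be batched without the shell loop and should run considerably faster:

find /path/to/ad -type f -exec rm {} +        # find batches names onto each rm
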
Then do Jim's bit to delete and re-create the directory (taking good note of the original permissions).
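
A hedged sketch of that last step; oracle:dba and mode 755 (drwxr-xr-x) are read off the listing in post #1:

cd /path/to
ls -ld ad                       # note owner, group and mode before removing
rm -r ./ad
mkdir ./ad
chown oracle:dba ./ad           # restore ownership from the original listing
chmod 755 ./ad                  # restore the original drwxr-xr-x mode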

PS: I'm a bit amazed that you managed to count the number of files in this directory. I think you are a contender for the largest directory file ever on a Unix system that still works. 37 MB for a directory file is impressive.

@methyl, I don't think ls would fail, since ls is equivalent to ls -1 when output is not a terminal.

It still sorts them alphabetically whether it gathers them into columns or not.
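
Both points are easy to see in a small directory; a hypothetical session:

$ cd /tmp/demo                  # hypothetical directory holding files c, a and b
$ ls | cat                      # output is not a terminal: one name per line, still sorted
a
b
c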

I was referring to the line length, but I see I misread methyl's post, which is referring to the length after xargs, my bad... :)

Sorry, I was editing my post on the fly (my bad) - depends when you read it!
I think that we have all hit this problem at some time or another. The ls on a huge directory hanging or crashing is a classic symptom.

I forgot to ask whether this directory was a free-standing filesystem - in which case there were more ruthless methods!

Exactly, I was thinking the same thing; then it would be trivial. Or, if that is not the case: get everything else off the filesystem, make a new filesystem, recreate the directory, and move everything back?
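
If the directory really does sit on its own filesystem, a hedged sketch of the ruthless route on AIX follows; the mount point and logical volume are placeholders, and this destroys everything on that filesystem:

umount /oradata                 # nothing may hold files open in it
mkfs -V jfs2 /dev/oradatalv     # re-create the filesystem (jfs on older AIX)
mount /oradata
mkdir /oradata/ad               # rebuild the directory
chown oracle:dba /oradata/ad    # ownership as in the original listing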

@Scrutinizer
I have had to do exactly that myself, when a rogue huge directory could not be processed by any command. With a bit of care it is possible to copy the files and retain permissions etc. using multiple find commands (avoiding any mention of the rogue directory) piped to a custom cpio -p.
Invariably in this situation a tape backup is worthless, which makes the repair very urgent.
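
A hedged sketch of that copy, using find with -prune so the rogue directory is never descended into, piped to cpio in pass-through mode; run it as root to retain ownership, and /mnt/newfs is a placeholder for the fresh filesystem:

cd /path/to
find . -name ad -prune -o -print | cpio -pdm /mnt/newfs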