Purging 2000+ directories efficiently

Hi

I have a requirement wherein I need to purge some directories.

I have more than 2000 directories in which I need to keep the last 10 days of data and delete everything older. What I am looking for is an efficient way to achieve this.

There are four mount points from where i need to delete the files.

Below is an example of the structure of the mount points.

/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/duplicate
/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/exception
/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/archive

/XXXX/XXX/XXXX/XXXX/XXXX/custom/xxx/<aaa>/archive

/XXXX/XXX/XXXX/XXXX/XXXX/<format>/archive

/XXXX/XXX/XXXX/XXXX/

I am aware of the conditions that must be met for the deletion to take place, but since the number of directories is so large, it would be helpful if anyone could suggest an efficient way of doing this.

All values within <>, e.g. <AAAAA>, will be fetched using SQL scripts.

The shell I am using is ksh.

What's your current plan for this?

How you get the mount points should be external to the cleanup command. Get SQL to write the four full mount points to a file, called for this example

/tmp/foo.txt

or write a separate script to create the file. If you need help with that we need a LOT more information.
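If the database is Oracle, one way to build the file is to have sqlplus write the paths directly. This is only a sketch: the connect string, table name, and column names (dir_table, aaaaa_col, aaaa_col) are placeholders, not anything from this thread — substitute your real query.

```shell
#!/bin/ksh
# Hypothetical example: build /tmp/foo.txt from the database.
# "user/pass@db", dir_table, aaaaa_col, and aaaa_col are placeholders.
sqlplus -s user/pass@db <<'EOF' > /tmp/foo.txt
set heading off feedback off pagesize 0
select '/XXXX/XXX/XXXX/XXXX/XXXX/' || aaaaa_col || '/' || aaaa_col
from dir_table;
exit
EOF
```

The `-s` (silent) flag plus `heading off feedback off pagesize 0` keeps banners and column headers out of the output, so the file ends up with one bare path per line.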

LOGFILE=/path/to/logs/cleanup.log_$(date +%Y_%m_%d)
while read -r mpoint
do
    find "$mpoint" -type f -mtime +10 -exec rm -f {} + >> "$LOGFILE" 2>&1 &
done < /tmp/foo.txt
wait

This creates four parallel child processes (jobs), each cleaning up one mount point, then waits until all four jobs finish. Note the redirection order: stderr is merged into the log with `2>&1` placed after the `>>` redirect, and `-exec rm -f {} +` batches many filenames into each rm invocation instead of forking rm once per file.

Getting the mount points won't be a problem, and part of the path can be hardcoded. E.g. in the path
/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/duplicate the /XXXX/XXX/XXXX/XXXX/XXXX/ part is generic and can be hardcoded.
The /<AAAAA>/ and /<aaaa>/ parts can be fetched using an SQL query, so we can build and traverse into the /XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa> path. Inside it, purging needs to be done in the following dirs.

/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/duplicate
/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/exception
/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>/<aaaa>/archive
There are 1000+ dirs of this kind (/XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA>) with the same structure as above.

What I am looking for is a way to create parallel processes that can purge these dirs in parallel.

A point to remember is that the number of dirs is NOT constant and is always increasing, so if the count grows to 1200 dirs of the /XXXX/XXX/XXXX/XXXX/XXXX/<AAAAA> kind, we should not end up with 3 processes cleaning up 400 dirs each.
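One way to keep the worker count bounded no matter how many directories the SQL query returns is to launch the find jobs in fixed-size batches rather than one process per directory. A minimal sketch, assuming the directory list has been written one path per line; MAXJOBS, DIRLIST, and the file names are illustrative, not from this thread:

```shell
#!/bin/ksh
# Purge files older than 10 days from every directory listed in DIRLIST,
# running at most MAXJOBS find/rm jobs at a time. Because the list file
# is regenerated by SQL on each run, the script automatically scales to
# however many directories currently exist.

MAXJOBS=${MAXJOBS:-4}                  # cap on concurrent purge jobs
DIRLIST=${DIRLIST:-/tmp/dirlist.txt}   # one directory path per line
LOGFILE=${LOGFILE:-/tmp/cleanup.log_$(date +%Y_%m_%d)}

purge_all() {
    count=0
    while read -r dir
    do
        [ -d "$dir" ] || continue      # skip stale entries in the list
        # background one purge job for this directory
        find "$dir" -type f -mtime +10 -exec rm -f {} + >> "$LOGFILE" 2>&1 &
        count=$((count + 1))
        if [ "$count" -ge "$MAXJOBS" ]
        then
            wait                       # let the current batch finish
            count=0
        fi
    done < "$DIRLIST"
    wait                               # reap any remaining jobs
}
```

Calling purge_all after SQL has rewritten the list file keeps at most MAXJOBS jobs running even as the directory count grows from 1000+ to 1200 and beyond, so nobody has to re-split the work by hand.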