I have a script that should delete files and folders older than 3 days:

find /basedirectory -type f -mtime +3 >> /tmp/tempfile
find /basedirectory -type d -mtime +3 >> /tmp/tempfile
mailx -s "List of removed files and folders" myemail@domain.com < /tmp/tempfile
rm /tmp/tempfile
find /basedirectory -type f -mtime +3 -exec rm {} \;
find /basedirectory -type d -mtime +3 -exec rmdir {} \;

The script is not working perfectly: some files older than 3 days are still there. Am I missing something?

thx

use -ctime instead of -mtime

Actually, I suspect I should be using rm -r. I don't think this is related to mtime or ctime...

any hints pls?
thx

It can only be a time-related issue!

If your files were created more than three days ago but have been modified since, then -mtime is not going to work correctly.

Mate, ideally you should use ctime and not mtime if your requirement is to find files that were not created in the last three days.

If you are deleting a directory then you should be using -r, which removes it recursively.
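A quick illustration on a scratch directory (nothing here is from the original script): rmdir refuses a non-empty directory, while rm -r removes it and its contents recursively.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/dir"
touch "$tmp/dir/file"
rmdir "$tmp/dir" 2>/dev/null
rmdir_status=$?          # non-zero: directory not empty
rm -r "$tmp/dir"
rmr_status=$?            # zero: recursive removal succeeded
rm -rf "$tmp"
```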

cheers
rex

Thanks for clearing up the difference between ctime and mtime.

Regarding the directory removals, do you mean I should have these 2 entries:

#DELETE FILES
find /basedirectory -type f -mtime +3 -exec rm {} \;
#DELETE FOLDERS
find /basedirectory -type d -mtime +3 -exec rm -r {} \;

thx.

You can use them separately, but it would be better to combine them:

find /dir -ctime +3 -exec rm -rf {} \;

Cheers,

You can still fail to delete a directory using:

find /basedirectory -type d -mtime +3 -exec rm -r {} \;

use the "-Rf" option instead of "-r" so rm does not stop on write-protected files ("-R" is the same as "-r")

you can combine them:

find /basedirectory -ctime +3 -exec rm -Rf {} \;

many thanks

This is harder than it looks...

Using mtime is better, while using ctime may not work: if a backup program has backed up the files and then resets atime, the reset itself causes ctime to advance. If your backup program does reset atime in this manner, then recent atime values indicate that someone (other than the backup program) is reading the file, and perhaps it should not be removed. So, assuming a reasonable backup policy, atime may be the best choice.
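A minimal sketch of how the timestamps differ, assuming GNU coreutils stat: a metadata-only change such as chmod advances ctime but leaves mtime alone.

```shell
f=$(mktemp)
m1=$(stat -c %Y "$f")    # mtime, seconds since the epoch
c1=$(stat -c %Z "$f")    # ctime
sleep 2
chmod 600 "$f"           # changes metadata only, not content
m2=$(stat -c %Y "$f")
c2=$(stat -c %Z "$f")
rm -f "$f"
```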

Now consider a directory structure like:
./a/b/c/datafile
If datafile is recently created while a/b/c already existed, directories a and a/b are stable while a/b/c is being updated. Doing "rm -rf ./a" just because ./a has not changed will forcibly remove ./a/b/c/datafile, which in this case is a recent file. Using rmdir for directories solves that, but we must be prepared for ./a to be left alone even though it passes the -ctime test.

On the other hand, if ./a/b/c/datafile is all old stuff, removing datafile renders ./a/b/c recently changed. Building a complete list of removal candidates prior to removing anything solves that.

We need to process ./a/b before we process ./a, and this implies that we need -depth on the find statement.
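A small demonstration on a hypothetical scratch tree: with -depth, find lists a directory's contents before the directory itself, so a/b/c comes out before a/b, which comes out before a.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b/c"
order=$(find "$tmp/a" -depth)   # deepest entries first
printf '%s\n' "$order"
rm -rf "$tmp"
```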

We cannot divide the world into ordinary files and directories unless we really want the script to fail if we encounter a socket, fifo, special file, etc. Instead we need to think in terms of directories and non-directories.

So maybe something like this will get you closer, (but I have not tested it):

find /basedirectory -depth -atime +3 > /tmp/tempfile1
exec < /tmp/tempfile1
while IFS= read -r item ; do
        if [[ -d $item ]] ; then
                # rmdir fails (harmlessly) on directories that still hold recent files
                rmdir "$item" && echo "$item" >> /tmp/tempfile2
        else
                rm "$item" && echo "$item" >> /tmp/tempfile2
        fi
done
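For what it's worth, here is the same idea exercised on a scratch tree (GNU touch -d assumed; -mtime is used instead of -atime here only because a freshly created tree has no meaningful atime history). The old file should go, the recent file should survive.

```shell
base=$(mktemp -d)
mkdir -p "$base/a/b"
touch -d '4 days ago' "$base/a/b/old"   # candidate for removal
touch "$base/a/b/new"                   # recent, must survive
find "$base" -depth -mtime +3 > /tmp/tempfile1.$$
while IFS= read -r item ; do
        if [ -d "$item" ] ; then
                rmdir "$item" 2>/dev/null   # fails harmlessly if non-empty
        else
                rm "$item"
        fi
done < /tmp/tempfile1.$$
rm -f /tmp/tempfile1.$$
```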