Using mtime is better, while ctime may not work at all: if a backup program resets atime after reading the files, that reset itself causes ctime to advance. On the other hand, if your backup program does reset atime in this manner, then a recent atime indicates that someone other than the backup program has been reading the file, and perhaps it should not be removed. So, assuming a reasonable backup policy, atime may be the best choice.
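You can watch this interaction directly (a sketch assuming GNU coreutils stat/touch; the file path is just for the demo):

```shell
# Show that resetting atime itself advances ctime.
f=/tmp/ctime_demo.$$
touch "$f"
before=$(stat -c %Z "$f")        # ctime before the atime reset
sleep 2
touch -a -d '1 hour ago' "$f"    # reset atime, as a backup program might
after=$(stat -c %Z "$f")         # ctime after: it has moved forward
echo "ctime advanced by $((after - before))s"
rm -f "$f"
```

The atime reset is an inode change, so the kernel bumps ctime to "now" even though the file's data never changed.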
Now consider a directory structure like:
./a/b/c/datafile
If datafile was recently created while a/b/c already existed, directories a and a/b stay stable while a/b/c gets updated. Doing an "rm -rf ./a" just because ./a has not changed would forcibly remove ./a/b/c/datafile which, in this case, is a recent file. Using rmdir for directories solves that, but we must be prepared for ./a to be left alone even though it passes the -atime test.
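The key property is that rmdir refuses to remove a non-empty directory, so a stale-looking parent survives as long as anything under it does (demo paths under /tmp for illustration):

```shell
# rmdir only removes empty directories, so a parent that still
# contains a fresh file is left alone.
mkdir -p /tmp/a/b/c
touch /tmp/a/b/c/datafile
rmdir /tmp/a 2>/dev/null || echo "kept: /tmp/a is not empty"
rm -rf /tmp/a    # clean up the demo tree
```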
On the other hand, if ./a/b/c/datafile is all old stuff, removing datafile renders ./a/b/c recently changed. Building a complete list of removal candidates prior to removing anything solves that.
We need to process ./a/b before we process ./a, which implies that we need -depth on the find statement.
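With -depth, find prints each directory's contents before the directory itself, which is exactly the order removal needs:

```shell
# Without -depth, find prints /tmp/x before /tmp/x/y.
# With -depth, children come first: /tmp/x/y, then /tmp/x.
mkdir -p /tmp/x/y
find /tmp/x -depth
rm -rf /tmp/x    # clean up the demo tree
```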
We cannot divide the world into ordinary files and directories unless we really want the script to fail if we encounter a socket, fifo, special file, etc. Instead we need to think in terms of directories and non-directories.
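A fifo, for example, fails a -f (regular file) test, yet rm removes it just fine; branching only on -d keeps every non-directory on the rm path (demo path is illustrative):

```shell
# A fifo is neither a regular file nor a directory, but it still
# falls through to the rm branch when we only test with -d.
mkfifo /tmp/myfifo
[[ -f /tmp/myfifo ]] || echo "not a regular file"
[[ -d /tmp/myfifo ]] || rm /tmp/myfifo    # non-directory branch: rm works
```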
So maybe something like this will get you closer (but I have not tested it):
find /basedirectory -depth -atime +3 > /tmp/tempfile1
exec < /tmp/tempfile1
while IFS= read -r item ; do
    if [[ -d $item ]] ; then
        # rmdir fails (quietly) on non-empty directories, which is what we want
        rmdir "$item" 2>/dev/null && echo "$item" >> /tmp/tempfile2
    else
        rm "$item" && echo "$item" >> /tmp/tempfile2
    fi
done