Hello all,
I need to write a script with the following requirements:
It must read filenames from a text file, search for those files in a given directory, and, when a match is found, back them up to a backup folder.
It must also compare and verify that the files in the two directories are the same
(timestamps and file sizes need to be verified too).
while read -r f x
do
        if [ ! -f "$f" ]
        then
                echo "No $f"
                continue
        fi
        if cmp -s "$f" "backup_place/$f"
        then
                echo "Already same: $f"
                continue
        fi
        if cp -p "$f" "backup_place/$f"
        then
                echo "Backed up: $f"
        else
                echo "FATAL Backup error: $f" >&2
                exit 1
        fi
done < list
cksum is a nice way to detect differing content, and it prints the size as well. ls -l (and, where available, find -ls) prints permissions and timestamps. Some combination of these should do it. Do you care about directory sizes in bytes, or just about new entries?
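A minimal sketch of the cksum idea: since cksum prints "CRC bytecount", one string comparison covers both content and size. Reading the file via stdin drops the filename field so the strings are directly comparable. The helper name same_content and the file names are placeholders, not part of any standard tool.

```shell
#!/bin/sh
# same_content: true if two files have identical checksum and size.
# cksum < file prints "CRC bytecount" (no filename when reading stdin).
same_content() {
    [ "$(cksum < "$1")" = "$(cksum < "$2")" ]
}

# example files (placeholders)
echo "alpha" > f1.txt
echo "alpha" > f2.txt
same_content f1.txt f2.txt && echo "identical"   # prints "identical"
```

For the timestamp part of the question, most shells also support [ file1 -nt file2 ] / [ file1 -ot file2 ] to compare modification times, though those tests are not strictly POSIX.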
Sometimes it is cheaper and simpler to just hard-code setting the group, owner, and permissions unconditionally than to fetch and compare them first.
You might want to look into setgid ("sticky group") directories; see man chmod.
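To illustrate both points above: setting the mode unconditionally is one chmod, and the setgid bit on a directory makes new files inherit the directory's group automatically, so per-file chgrp goes away. The directory name "shared" is a placeholder.

```shell
#!/bin/sh
# Unconditional permission setting plus a setgid directory.
mkdir -p shared
chmod 2775 shared     # leading 2 = setgid bit; rwxrwxr-x for the rest
ls -ld shared         # group-execute slot shows 's': drwxrwsr-x
```

Files created inside shared afterwards belong to the directory's group rather than the creator's primary group, which keeps ownership consistent without any compare-then-fix logic.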
Even copying files blindly can be cheaper than comparing their innards, and moving or hard-linking files is cheaper still.
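A sketch of the hard-link variant: the backup entry shares the same inode as the original, so content, size, and timestamps match by construction and there is nothing left to verify. The file and directory names are placeholders.

```shell
#!/bin/sh
# Back up by hard link instead of copy: no file data is duplicated.
echo "payload" > data.txt
mkdir -p backup_place
ln -f data.txt backup_place/data.txt   # -f replaces any stale backup entry
```

The trade-off is that a hard link is not an independent copy: corrupting the original corrupts the "backup" too, and links cannot cross filesystems.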
Trimming: should the directories contain only these files? Nicer architectures store the working and temporary runtime data files in a different subtree from the delivered code, which makes this a simple question. I often teach that what you buy should be segregated from what you download and use as-is, which should be segregated from what you write, which should be segregated from the data it manipulates, per user where applicable. Then you can clean any subtree when upgrading that part of the system. Odd files can destabilize the product and invalidate the testing.