I have been searching for days to find a way to set a maximum size for a log.txt file using a cron job executing a shell script. Is it possible for a script to remove older entries in a log file to maintain a limited file size? If so, how?
It would be nice if Unix supported circular files. On the old HP3000, I could build a file of the desired size, specify the CIR attribute, and output to the file would wrap around circularly.
The following downsizes log.txt back to 800 lines after it hits 1200 lines, but you could use "wc -c" to control on the number of characters instead. I use tail +nnn logic (print from line nnn onward) rather than tail -nnn, because tail -nnn will only look back a limited distance on some implementations.
Of course, do this when the file is not in use.
#!/bin/sh
# Trim log.txt back to its newest 800 lines once it exceeds 1200 lines.
# Note: "wc -l log.txt | read lcnt other" works in ksh, but in shells that
# run the last pipeline stage in a subshell the variable never gets set,
# so command substitution is used instead.
lcnt=$(wc -l < log.txt)
if [ "$lcnt" -gt 1200 ] ; then
    start=$((lcnt - 799))        # keep lines $start through $lcnt (800 lines)
    echo 'downsizing ...'
    tail -n +$start log.txt > log.txtN
    mv log.txtN log.txt
fi
exit 0
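To cover the cron part of the question: assuming the script above is saved as /usr/local/bin/trimlog.sh (a hypothetical path) and made executable with chmod +x, a crontab entry along these lines would run it once an hour:

```
# crontab entry (edit with "crontab -e"): run the trim script
# at minute 0 of every hour
0 * * * * /usr/local/bin/trimlog.sh
```

The script should use an absolute path to log.txt (or cd to its directory first), since cron jobs do not start in your login directory.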
My script above gets rid of the older portion of large files while keeping the newer portion. It could instead downsize when a file hits a certain number of characters rather than a certain number of lines; both the wc and tail commands support counts of characters.
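A character-based version of the same trim is a small variation; this sketch uses "wc -c" and "tail -c" (the byte thresholds are arbitrary, and note that trimming at a byte boundary may leave the first surviving line partial):

```shell
#!/bin/sh
# Trim log.txt back to roughly its newest 800000 bytes once it
# exceeds 1200000 bytes.
ccnt=$(wc -c < log.txt)
if [ "$ccnt" -gt 1200000 ] ; then
    tail -c 800000 log.txt > log.txtN
    mv log.txtN log.txt
fi
exit 0
```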
But if you just want to empty the large files, that's a lot easier. The easiest way to empty a file and leave it there is:
> myfile
Because this truncates the file in place rather than removing and recreating it, the file's ownership and permissions remain unchanged (run it as root if you lack write permission on the file). Normally you can use the -exec option of the find command to run some command on each qualifying file, such as:
find . -name "test*" -exec rm {} \;
but I was not able to get the redirection command above to work in this context, because find executes its -exec command directly rather than through a shell, so the ">" is never interpreted as a redirection. There are several other ways to feed the filenames, such as piping into xargs or a shell loop. I would suggest the following, but test it first by replacing the `> "$fn"` with `echo "$fn"`:
#!/bin/sh
# Empty every regular file below the current directory larger than
# 2000 512-byte blocks (about 1 MB).  Reading line by line avoids
# the word-splitting problems of `for fn in $(find ...)`, though
# filenames containing newlines would still misbehave.
find . -type f -size +2000 | while read fn
do
    > "$fn"
done
exit 0
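If you do want the truncation done directly by find, one way (a sketch, not something from the posts above) is to have -exec start a small shell of its own, so that the redirection is interpreted:

```shell
# Truncate each matching file.  sh -c runs a one-line script per file:
# the "_" fills $0 and the filename substituted for {} arrives as $1,
# so the ">" redirection is handled by that inner shell.
find . -type f -size +2000 -exec sh -c '> "$1"' _ {} \;
```

This also handles filenames with spaces safely, since each name is passed as a single argument.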