setting max log file size...

Hello all!

I have found a new home, this place is great!

I have been searching for days to find a way to set a max size for a log.txt file using a cron job executing a shell script. Is it possible for a script to remove older entries in a log file to maintain a limited file size? If so, how?

Thanks very much for your time!

It would be nice if unix supported circular files. On the old HP3000, I could build a file of the desired size, specify the CIR attribute, and output to the file would be circular.

The following downsizes log.txt back to 800 lines after it hits 1200 lines, but you could use "wc -c" to control on number of characters instead. I use tail -n +nnn logic (keep everything from line nnn onward) because tail -n nnn will go back only so far.

Of course, do this when the file is not in use.

#!/bin/sh

# Piping wc into read ("wc -l log.txt | read lcnt other") runs read in
# a subshell in many shells, so capture the count with command
# substitution instead.
lcnt=$(wc -l < log.txt)

if [ "$lcnt" -gt 1200 ] ; then
   start=$((lcnt - 799))
   echo 'downsizing ...'
   tail -n +$start log.txt > log.txtN
   mv log.txtN log.txt
fi

exit 0

Thanks very much Jimbo!

Is there no way of doing this via file size, not number of lines?

Also, how would I go about editing the log files for multiple users from root?

I have really done a lot of searching with no luck. I am really glad I found this place!

Thanks again!

What about a crontab to find large log files and erase them?

I am currently using the following crontab to email me when a log file is larger than 2000 blocks (find counts an unsuffixed -size number in 512-byte blocks):

00 00 * * * /usr/bin/find /home/user/www/cgi -type f -name "log.txt" -size +2000 | mail -s "Log Finder" me@here.com

How do I use cp /dev/null or something in the crontab to just erase (not delete) the logs instead of emailing me?

Thanks again!

My script above is to get rid of the older portion of large files while maintaining the newer portion. It could downsize when a file hits a certain number of characters instead of number of lines. Both the wc and tail commands support number of characters.
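For example, a character-based variant of that script might look like the following. This is a sketch: the 1200000/800000-byte thresholds are illustrative, and it assumes a tail that accepts -c with a +offset (GNU and BSD tail both do).

```shell
#!/bin/sh
# Character-based downsizing: trim log.txt back to roughly 800000
# bytes once it grows past 1200000 bytes. Thresholds are illustrative.
ccnt=$(wc -c < log.txt)
if [ "$ccnt" -gt 1200000 ] ; then
   start=$((ccnt - 800000 + 1))
   echo 'downsizing ...'
   # tail -c +N keeps everything from byte N onward
   tail -c +"$start" log.txt > log.txtN
   mv log.txtN log.txt
fi
exit 0
```

One wrinkle: the byte cut can land mid-line, so the first line of the downsized file may be a fragment.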

But if you just want to empty the large files, that's a lot easier. The easiest way to empty a file and leave it there is:

> myfile

and if running as root, ownership and file permissions will remain unchanged. Normally you can use the -exec parameter on a find command to do some command on each qualifying file, such as:

find . -name "test*" -exec rm {} \;

but I was not able to get the redirection command above to work in this context, since find runs its -exec commands directly rather than through a shell, so "> {}" is never parsed as a redirection. There are several other ways to feed the filenames, such as piping into xargs. I would suggest the following, but test it first by replacing the "> $fn" with "echo $fn":

#!/bin/sh
# +2000 means "larger than 2000 blocks"; -size 2000 alone would match
# only files of exactly that size.
for fn in $(find . -type f -size +2000)
do
   > "$fn"
done
exit 0
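If you'd rather make the redirection work inside -exec itself, one way (a sketch, reusing the path and size test from the crontab above) is to hand each filename to a small inner shell:

```shell
# find does not run -exec commands through a shell, so wrap the
# truncation in "sh -c" to give each file its own shell that can
# perform the redirect. The trailing {} becomes $1 inside sh -c.
find /home/user/www/cgi -type f -name "log.txt" -size +2000 \
    -exec sh -c ': > "$1"' sh {} \;
```

This also copes with filenames containing spaces, which the for-loop version above does not.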

FreeBSD has a cool program to handle this called newsyslog. It's real easy to run it on other unix versions. You can get it via ftp from here.

And don't let the name fool you, it's not just for syslog files.