fetchmail - log file size limitation

Hi,

I am using fetchmail in my application to download mail from the mail server to the localhost where the application is hosted. Fetchmail is configured to run as a daemon, polling for mail at an interval of 1 second.

My concern is that every 2 seconds it writes two lines to fetchmail.log, the associated log file.

So in the long run the log file will grow large, which may create a space issue on the Linux server, since it is a shared environment.

I tried using split and tail commands to limit the number of lines, but when I apply them fetchmail stops writing to the log file, so that kind of intervention does not work.

Can anyone please help me sort this out?

Regards
Dileep

How about creating a new log file every week? The old log files can be archived in a tar file, if they are needed at all.
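The weekly idea above can be sketched as a small script. The directory and file names here are assumptions (the demo setup just makes the sketch runnable anywhere); adjust them for your setup:

```shell
#!/bin/sh
# Sketch: move last week's log aside, start a fresh one, and tar the old copy.
# LOGDIR and the naming scheme are assumptions for illustration.
LOGDIR=/tmp/fetchlogs
LOG="$LOGDIR/fetchmail.log"

# Demo setup so the sketch runs anywhere: fake an existing log.
mkdir -p "$LOGDIR"
echo "sample fetchmail output" > "$LOG"

WEEK=$(date +%Y-%W)                       # year-week stamp for the archive name
mv "$LOG" "$LOGDIR/fetchmail.log.$WEEK"   # rotate the old log aside
: > "$LOG"                                # start a fresh, empty log
tar -C "$LOGDIR" -cf "$LOGDIR/fetchmail-$WEEK.tar" "fetchmail.log.$WEEK" \
    && rm "$LOGDIR/fetchmail.log.$WEEK"   # archive, then drop the loose copy
```

One caveat: a daemon that already has the file open will keep writing to the renamed file after the mv, which is exactly why the thread keeps coming back to restarting fetchmail.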

Another way is to stop the daemon for a couple of minutes while you trim the log file, then restart it. But I guess the first method is better.
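A sketch of the stop-trim-restart approach, using fetchmail's real --quit and --logfile options. The 100-line cutoff and the demo directory are assumptions, and the restart is shown as a comment so the sketch does not actually launch a daemon:

```shell
#!/bin/sh
# Sketch of option 2: stop the daemon, trim the log, restart it.
LOG=/tmp/fetchdemo/fetchmail.log

# Demo setup so the sketch runs anywhere: fake a grown log.
mkdir -p /tmp/fetchdemo
seq 1 1000 | sed 's/^/poll line /' > "$LOG"

# 1. Ask the running daemon to exit (skipped if fetchmail is not installed).
if command -v fetchmail >/dev/null 2>&1; then
    fetchmail --quit || true
fi

# 2. Keep only the last 100 lines rather than deleting everything.
tail -n 100 "$LOG" > "$LOG.trim" && mv "$LOG.trim" "$LOG"

# 3. Restart in daemon mode, polling every second, with the same log file:
#    fetchmail -d 1 --logfile "$LOG"
```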

The concept is fine. But how do I create a new log file or archive the old log files while fetchmail is running as a daemon and polling every second?

That may be possible, but I would have to write the code in a conditional block in the application script itself, to stop the daemon and restart it after the log archive operation. I am looking for something more efficient, if you don't mind.

Read "Useful options for daemon mode" in the fetchmail manual page. Manual Page - fetchmail(man)

Thanks for the reference. So, as it states, the only way to meet my requirement is to stop and restart fetchmail, right?

Oh! If anybody has any other ideas, please let me know. Anyway, thanks for all your responses.

If you really want to avoid restarting the daemon, then your script can make the daemon point to some other temporary log file for a particular window each week, say Sunday 9.00 AM to 10.00 AM.

You can then archive or trim the actual log files during that window. This can be done by another script, which can be put in cron.
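For the cron part, a crontab entry for the Sunday 9 AM window described above might look like this (the script path is hypothetical):

```
# m h dom mon dow  command
0 9 * * 0  /home/dileep/bin/archive_fetchmail_log.sh
```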

But on a personal note, stopping and restarting the daemon is much easier, unless you are pretty sure that important information might be lost during the timeframe in which you refresh the log files.

Fine. I am also thinking about your second option, restarting the daemon. But may I know how you would implement the first option, making the daemon point to a temp file for a particular period?

Well, the answer to this is simple!
Whenever your second script runs, it creates a lock file, say "lckfile".
Before you add statements to your log file in the main script, you just need to check whether "lckfile" exists.

If it exists, it means you are archiving or trimming your original log files, and you need to append your log statements to the temp log file.

If it does not exist, you are safe to add your log statements to the original log files.

Hope this helps!
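The lock-file scheme above can be sketched like this. The file names, paths, and the logmsg helper are all assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the lock-file scheme: while the archiver's lock file exists,
# log messages go to a temp log; otherwise they go to the main log.
LCK=/tmp/fetchdemo2/lckfile
MAIN=/tmp/fetchdemo2/app.log
TMP=/tmp/fetchdemo2/app.tmp.log
mkdir -p /tmp/fetchdemo2

# Hypothetical helper: append a timestamped message to whichever log is live.
logmsg() {
    if [ -e "$LCK" ]; then
        echo "$(date '+%F %T') $*" >> "$TMP"
    else
        echo "$(date '+%F %T') $*" >> "$MAIN"
    fi
}

logmsg "normal operation"            # lands in app.log
touch "$LCK"                         # archiver script announces itself
logmsg "while archiving"             # lands in app.tmp.log
rm -f "$LCK"                         # archiver done
cat "$TMP" >> "$MAIN" && : > "$TMP"  # fold temp entries back into the main log
```

Note this only works for messages your own script writes; it cannot redirect what the fetchmail daemon itself writes, which is the objection raised below.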

nua7

The scenario here is different: the fetchmail daemon itself is writing to the log file every 2 seconds. I can't control the fetchmail daemon process once it starts running, so this is not applicable here.

This is what is specified in the man page for fetchmail.

"The --syslog option (keyword: set syslog) allows you to redirect status and error messages emitted to the syslog(3) system daemon if available. Messages are logged with an id of fetchmail, the facility LOG_MAIL, and priorities LOG_ERR, LOG_ALERT or LOG_INFO. This option is intended for logging status and error messages which indicate the status of the daemon and the results while fetching mail from the server(s). Error messages for command line options and parsing the .fetchmailrc file are still written to stderr, or to the specified log file. The --nosyslog option turns off use of syslog(3), assuming it's turned on in the ~/.fetchmailrc file, or that the -L or --logfile <file> option was used."
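Given that quote, the --syslog route would sidestep the private log file entirely and hand rotation to the system's own tools. A sketch, assuming syslog delivers LOG_MAIL messages to /var/log/mail.log (the path varies by distribution) and that logrotate is available:

```
# ~/.fetchmailrc -- send status messages to syslog instead of a private file
set syslog

# /etc/logrotate.d/fetchmail-mail -- rotate the mail log weekly, keep 4 copies
/var/log/mail.log {
    weekly
    rotate 4
    compress
    missingok
}
```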

But I think you should go with option 1, restarting the daemon.

OK, I am looking into it. Thanks, everybody, for your time and comments.