Shell script to copy a log file if it exceeds 5000000 bytes

Hi,

On a Unix box, under the /local/home/userid/logs folder, the applications generate the following files:

sw_warn.log
sw_error.log
eaijava.log

If there are any application-specific errors, the above files keep growing.

If a file exceeds 5000000 bytes, I would like a shell script which does the following:

1) Copy the log files from /local/home/userid/logs into /local/home/userid/logs/yyyy_mm_dd_timestamp.
2) The shell script should create the yyyy_mm_dd_timestamp folder.
3) Also, delete the files from /local/home/userid/logs.
4) If deleting the files is not recommended, then delete their contents instead.

Thanks

You have four requests, but do not show any work or even an attempt at solving this. We like to help, not write your code.


We don't know your system.
Are the files open by an application at the time of the log file maintenance slot?
(If so, don't delete them, null them!).

Please show sample directory lists of every directory listed.

What is the maximum growth rate of the files? This determines how often to check.

Are there any disc space issues?

If you were to do this job manually on a given date, what would you type?

So far I have come up with the following:

TODAY=`date +%d/%m/%y`
CURTIME=`date +%H:%M:%S`
 
#go to that directory
cd /local/home/userid/logs
 
#check the file size
ls -ltr sw_warn.log > sw_warn_filesize.txt
export file_size=`awk -F" " '{ print $5 }' sw_warn_filesize.txt`
 
# if greater than 5000000, then create a directory and copy the file. 
if [[ "$file_size" -gt 5000000 ]]
then
today_dir=mkdir TODAY_CURTIME
cp sw_warn.log /local/home/userid/logs/today_dir
 
#Instead of deleting the file, how to null sw_warn.log file
???

#remove this file
rm sw_warn_filesize.txt
else 
exit 0
fi

-------------------------------------------

We don't know your system.
Are the files open by an application at the time of the log file maintenance slot?

[me] Yes, these files are open by the apps.
(If so, don't delete them, null them!).
[me] How?

Please show sample directory lists of every directory listed.
[me] The directory path is /local/home/userid/logs. Under the logs directory, I have three files:
sw_warn.log
sw_error.log
eaijava.log

What is the maximum growth rate of the files? This determines how often to check.

[me] - I don't know the growth rate. BTW, how do I schedule the shell script so that it keeps checking the file size and runs?

Are there any disc space issues?
[me] no disc space issue.

e.g. To null an open file called "logfile" (after copying it, of course) from a shell running as a user with sufficient permissions:

>logfile
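
To tie that together with the copy step, here is a minimal sketch for one of the files in this thread. The paths are the ones given earlier; the backup directory name just follows the yyyy_mm_dd_timestamp convention asked for, and is an assumption on my part.

# Create a timestamped backup folder, copy the still-open log into it,
# then truncate the original in place rather than deleting it.
backup_dir=/local/home/userid/logs/$(date +%Y_%m_%d_%H%M%S)
mkdir -p "${backup_dir}"
if cp -p /local/home/userid/logs/sw_warn.log "${backup_dir}/"; then
    > /local/home/userid/logs/sw_warn.log    # null the open file
fi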

On the subject of scheduling, use unix "cron" to schedule routine maintenance tasks. The maintenance frequency depends on the growth rate and local rules.

Most Systems Administrators schedule logfile maintenance weekly on a Sunday night.
Personally I schedule such jobs for 12:00 (lunchtime) daily which gives you a chance to read the overnight logs before your second coffee.
Imho, keeping log files down to a readable size (albeit backed with many days of history) can save you when you need information about a hot problem in a hurry.
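
For example, assuming the finished script were saved as /local/home/userid/bin/logmaint.sh (an assumed location, not something from this thread), the crontab entries for the two schedules above would look something like:

# Edit with "crontab -e". Fields: minute hour day-of-month month day-of-week command.
# Daily at 12:00 (lunchtime):
0 12 * * * /local/home/userid/bin/logmaint.sh
# Or weekly, Sunday night at 23:00:
# 0 23 * * 0 /local/home/userid/bin/logmaint.sh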

Hmm. By this time tomorrow you will know the growth rate ... won't you?

It is not advisable to use characters such as colons and solidi in filenames. It can (and does) confuse multi-platform backup software, as well as making subsequent processing in Shell painful. Unix will not stop you creating such filenames because they are valid. The solidus will give you the most grief because it is the directory delimiter in the full hierarchical pathname to a file.
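
For example, an underscore-only stamp avoids both characters (the variable names here just mirror the attempt above):

TODAY=$(date +%Y_%m_%d)     # e.g. 2024_01_01 instead of 01/01/24
CURTIME=$(date +%H%M%S)     # e.g. 143005 instead of 14:30:05
# ${TODAY}_${CURTIME} is then safe to use as a directory or file name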

This line from your attempt does not work:

today_dir=mkdir TODAY_CURTIME

There are no variable names preceded with a dollar sign, and the "mkdir" command never runs.

I'll leave that one for you to work out.

My proposed solution:

mark=$(date +%Y%m%d_%H%M%S)

cd /local/home

find */logs -maxdepth 1 \
            -type f \
            -size +5000000c \
            \( -name 'sw_warn.log' -o -name 'sw_error.log' -o -name 'eaijava.log' \) \
            -print \
| while read -r file
do
    dir="$(dirname "${file}")/${mark}"

    if mkdir -p "${dir}"; then
        if cp -ip "${file}" "${dir}"; then
            > "${file}"
        else
            echo "${file}": unable to copy to "${dir}" 1>&2
        fi
    else
        echo "${dir}": unable to create directory 1>&2
    fi
done

To be honest, if you are running the script once a day or week, then:

mark=$(date +%Y%m%d)

or even:

mark=$(date +%Y/%m%d)

would be more than sufficient.
But I have to admit the statement:

"no disc space issue"

bothers me. There are always disk space issues -- disks may be cheap, but no one has infinite disk space. IMHO, rotating the logfiles is "better", for it limits the total size of the logs to the number of rotations times 5M (plus or minus). Change the body of the while loop to:

    p=9

    for i in 8 7 6 5 4 3 2 1 0; do
        if [ -f "${file}.${i}" ]; then
            mv "${file}.${i}" "${file}.${p}"
        fi

        p="${i}"
    done

    if cp -ip "${file}" "${file}.0"; then
        > "${file}"
    else
        echo "${file}": unable to rotate 1>&2
    fi

Run this version of the script once a week, and you will retain a minimum of ten weeks of logs.

man cronolog : http://cronolog.org/
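
For reference, cronolog sits in a pipeline: it reads log lines on standard input and writes them to a file named from a date template, switching to a new file as the date rolls over. Roughly (the command name and template path here are only illustrative):

# "eaijava" stands in for whatever command produces the log output.
eaijava 2>&1 | cronolog /local/home/userid/logs/%Y/%m/%d/eaijava.log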
