Intelligent scanning of log files

Dear experts,
I have a problem and I am not clear on how to attack this. Let me define the problem as simply as possible.

1) There are several log files in a directory.
2) A script should open each log file and scan for errors (grep for certain strings, say error1, error2, error3).
3) If any errors are found, email a few lines above and below each error.
4) Then, at the end of the log file, append some unique string, say @EMAILED-ERRORS@.
5) The next day, we will start from the unique string @EMAILED-ERRORS@ when looking for any new errors in the log files.
The problem is how to get to the last occurrence of @EMAILED-ERRORS@ in a file. Please do not tell me to use "tac", as it is not on the AIX server.
Can this be easily done with SHELL scripts?

Any ideas will be appreciated.
Thanks in advance
-new SW

Your requirement seems a bit strange to me, but here's most of what you need. I tested the tidbits on AIX 6.1 with minimal Linux RPMs installed on it.

MailFile="/tmp/mail.tmp"
cp /dev/null ${MailFile}
for file in `find /tmp -type f -name 'foo*'`; do
    # Skip this file unless the text after the last marker contains an error
    [[ `sed -n 'H; /^@EMAILED-ERRORS@/h; ${g;p;}' ${file} | egrep -c "error1|error2|error3"` -eq 0 ]] && continue
    # Otherwise, we have a match.
    total=`wc -l < ${file}`
    # Line number of the last @EMAILED-ERRORS@ marker (0 if the file has none yet)
    offset=`awk '/^@EMAILED-ERRORS@/ {n=NR} END {print n+0}' ${file}`
    for rel in `sed -n 'H; /^@EMAILED-ERRORS@/h; ${g;p;}' ${file} | egrep -n 'error[123]' | cut -d: -f1`; do
        # Map the line number in the extracted text back to a line number in ${file}
        line=$((offset + rel - 1))
        start=$((line - 5)); [[ ${start} -lt 1 ]] && start=1
        end=$((line + 5));   [[ ${end} -gt ${total} ]] && end=${total}
        echo "ERROR: Found error in file: ${file} line: ${line}!" >> ${MailFile}
        sed "${start},${end}!d" ${file} >> ${MailFile}
    done
    echo "@EMAILED-ERRORS@" >> ${file}
done
[[ -s ${MailFile} ]] && mail -s "Errors" joe@gmail.com < ${MailFile}

That should do it for you; not the cleanest, but it'll work.
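Since step 5 implies a daily run, you'd drive the script from cron. A sketch of the crontab entry, assuming you've saved the script somewhere like /usr/local/bin/scan_logs.sh (the path and the 06:00 schedule are just examples):

```shell
# Scan the logs for new errors every day at 06:00
0 6 * * * /usr/local/bin/scan_logs.sh
```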

Edit:

sed -n 'H; /^@EMAILED-ERRORS@/h; ${g;p;}'
  • Print from the last occurrence of our regex to EOF. Every line is appended to the hold space (H); each marker line overwrites the hold space (h); at the last line (${g;p;}) the hold space is printed. If the marker never appears, the whole file is printed, preceded by one blank line.
sed "${start},${end}!d"
  • 'Delete' everything that is not from line #$start to line #$end. Note the double quotes: with single quotes the shell would not expand ${start} and ${end}.
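If the sed hold-space trick is hard to follow, the same "everything after the last marker" extraction can be done with two awk passes and no tac. A small demo sketch (the /tmp/demo.log path and its contents are made up for illustration):

```shell
# Build a small demo log (path is just for illustration)
printf 'old error1\n@EMAILED-ERRORS@\nfresh error2 found\n' > /tmp/demo.log

# Pass 1: remember the line number of the last marker (0 if there is none)
last=`awk '/^@EMAILED-ERRORS@/ {n=NR} END {print n+0}' /tmp/demo.log`

# Pass 2: print only the lines after that marker
awk -v n="$last" 'NR > n' /tmp/demo.log    # prints: fresh error2 found
```

This also hands you the marker's real line number in the file for free, which makes computing the "few lines above and below" context straightforward.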