parse a log file and remember last line

Hi all:

I'm working on an HP-UX 11.23 system and I need to parse a Tomcat/Jakarta log file for memory use. Getting the desired data is easy if the log file does not grow, but this file grows constantly and I want to check it every 5 minutes. Each check should pick up where the previous one left off and read only the new entries, looking for the desired pattern. So: read and parse the desired data from the log file, then 5 minutes later read only the newer entries and parse those. It needs to be a shell (ksh) script so I can pass it off to the users and let them maintain it.

I have seen lots of entries on log analysis and parsing, but nothing really fits my needs - or I am missing something.

Many thanks

You can try something like this:

#!/bin/ksh
while :; do
    # Remember the current last line of the log.
    temp=$(tail -1 /your/log/file)
    # Print from the line matching $last through the end of the file.
    # On the first pass $last is empty, so the empty pattern matches
    # every line and the whole file is printed. Note the matched line
    # itself is printed again, and the pattern breaks if $last contains
    # regex metacharacters or slashes.
    awk "/$last/{p=1}p" /your/log/file   # pipe this output into your parser
    last=$temp
    sleep 300
done
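The thread does not show the actual parsing step, so as a hypothetical illustration: if the Tomcat log contained lines like "Used memory: 123 MB" (a made-up format, not from the thread), the loop's output could be piped into a small awk extractor such as:

```shell
#!/bin/ksh
# Hypothetical demo: the log format and the "Used memory:" pattern
# are assumptions, not taken from the thread.
log=/tmp/mem_demo.$$

printf '%s\n' \
    'INFO startup complete' \
    'INFO Used memory: 123 MB' \
    'INFO Used memory: 150 MB' > "$log"

# Pull just the figure (next-to-last field) from matching lines.
mem=$(awk '/Used memory:/ {print $(NF-1)}' "$log")
printf '%s\n' "$mem"     # prints 123, then 150

rm -f "$log"
```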

To test if it is working correctly, you can do something like:

#!/bin/ksh
while :; do
    temp=$(tail -1 /your/log/file)
    # Separator so each pass's output is visible on the terminal.
    echo "============================================="
    awk "/$last/{p=1}p" /your/log/file
    last=$temp
    sleep 300
done

New entries should then be printed continuously every 5 minutes, separated from the old ones by the "=====" line.

Note that if the last row's text appears earlier in the file, the script will print from the first match onward, re-reading some old entries.

testfile:

a
b
c
d

updated test file:

a
b
c
d
e
f
c

test file updated again:
a
b
c
d
e
f
c
x
y
z

Then the result is:

c
d
e
f
c
x
y
z
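One way to sidestep the repeated-row problem without relying on the line text at all is to remember how many lines have already been processed and print only the lines past that count. A minimal single-pass sketch (the file name is a placeholder; in the real script the count check would sit inside the 5-minute loop):

```shell
#!/bin/ksh
# Sketch: track a line count instead of the last line's text, so a
# duplicated line such as "c" cannot cause old entries to be re-read.
log=/tmp/count_demo.$$

printf '%s\n' a b c d > "$log"
count=$(wc -l < "$log")          # lines already handled: 4

printf '%s\n' e f c >> "$log"    # log grows; note "c" repeats

new=$(wc -l < "$log")
if [ "$new" -gt "$count" ]; then
    # Print only the lines added since the last pass.
    added=$(awk -v s="$count" 'NR > s' "$log")
fi
count=$new

printf '%s\n' "$added"           # prints e, f, c - no re-read
rm -f "$log"
```

One caveat with this approach: if the log is rotated or truncated, `$new` drops below `$count`, and the count should then be reset to zero before the next pass.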

What is the largest possible size of the log?
What is the largest possible number of records in the log?

In proper log files one of the fields is a timestamp, which prevents that kind of situation; I assumed that is the case here.
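If the lines do carry a sortable timestamp, you can anchor on that instead of the whole line. A minimal sketch, assuming a hypothetical "YYYY-MM-DD HH:MM:SS" prefix (the format and the messages are made up, not from the thread):

```shell
#!/bin/ksh
# Demo data with an assumed sortable timestamp prefix.
log=/tmp/ts_demo.$$
printf '%s\n' \
    '2024-01-02 10:10:00 OldGen used 100M' \
    '2024-01-02 10:15:00 OldGen used 120M' \
    '2024-01-02 10:20:00 OldGen used 130M' > "$log"

# Timestamp of the last entry processed on the previous pass.
last='2024-01-02 10:15:00'

# Because the prefix sorts lexically in time order, a plain string
# comparison picks out only the strictly newer entries.
newer=$(awk -v t="$last" 'substr($0, 1, length(t)) > t' "$log")
printf '%s\n' "$newer"
rm -f "$log"
```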