tail -f catalina.out |
while IFS= read -r line
do
  case $line in
  *OutOfMemoryError*)
    printf '%s\n' "$line" |
    mail -s "Tomcat out of memory" someone@example.com
    ;;
  esac
done
(Note the pattern: a bare 'Out of memory' would only match a line that is exactly that string, so the case pattern must use wildcards around the text the JVM actually logs, such as *OutOfMemoryError*.) Of course you could instead couple your own grep or Perl script with a cron schedule, which is the more typical sysadmin way to do it. However, I believe fellow users here are far more experienced in that sort of scripting, so I won't say much about it.
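As a sketch of the cron-driven variant: the script below scans only the lines appended to the log since the last run, remembering its position in a state file. The log and state-file paths are assumptions; adjust them for your installation.

```shell
#!/bin/sh
# Hypothetical cron job: scan only lines appended since the last run.
# LOG and STATE paths are assumptions; adjust for your installation.
LOG=${LOG:-/var/log/tomcat/catalina.out}
STATE=${STATE:-/var/tmp/catalina.offset}

if [ -r "$LOG" ]; then
    last=$( { cat "$STATE"; } 2>/dev/null )
    last=${last:-0}
    total=$(wc -l < "$LOG")
    # If the log was rotated (file shrank), start over from the top.
    [ "$total" -lt "$last" ] && last=0
    # Look at only the new lines and mail any OutOfMemoryError hits.
    new=$(tail -n +"$((last + 1))" "$LOG" | grep OutOfMemoryError)
    if [ -n "$new" ]; then
        printf '%s\n' "$new" |
        mail -s "Tomcat out of memory" someone@example.com
    fi
    echo "$total" > "$STATE"
fi
```

Run it from crontab every few minutes, e.g. `*/5 * * * * /usr/local/bin/check-oom.sh`.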
Pardon me for a bit of irrelevance to your question, but I don't think monitoring the log file is a good approach to handling "out of memory" at all.
Just to share what happened to me recently: I have a test virtual server running JBoss. JBoss died on 24/1, but the OutOfMemoryError was not noticed until yesterday, because it is a test VM that nobody accesses unless I need to. The exception was logged, but according to the log it took only a few minutes from that point for the JVM to crash into such a state that even the logging system failed. Luckily it was not a production machine.
When you monitor the log file, you cannot poll it continuously, and by the time a polling process finds the exception in the log, the JVM is often already dead. You will still suffer downtime until the restart completes; the only gain is that you learn about the failure sooner and avoid the several days of non-discovery I described above.
If your situation permits, a better approach is to intervene before the JVM runs out of memory, so that you can plan a graceful restart ahead of time and avoid all that embarrassment. Today's JVMs have a monitoring mechanism built in (JMX) that lets you easily watch memory consumption locally or remotely, unless you are still on an old JVM (pre-1.5) that lacks this feature. Then, if heap usage crosses some threshold, say 70%, an email can be sent to you. This is exactly one reason I would invite others to consider migrating to a more recent JVM ......
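JMX itself is usually consumed from Java code or tools like JConsole, but from a shell the `jstat` utility that ships with the JDK exposes the same heap counters for a local JVM, so a cron-driven threshold check might look like the sketch below. The process name, threshold, and email address are all assumptions; for remote monitoring you would connect through the JMX remote connector instead.

```shell
#!/bin/sh
# Hypothetical local heap check via jstat (ships with the JDK).
# Process name, threshold, and address below are assumptions.
THRESHOLD=70

if command -v jstat >/dev/null 2>&1; then
    # Find the Tomcat JVM; Bootstrap is Tomcat's standard main class.
    PID=$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)
    if [ -n "$PID" ]; then
        # jstat -gcutil prints a header row, then utilisation figures;
        # column 4 (O) is the old-generation occupancy in percent.
        OLD=$(jstat -gcutil "$PID" | awk 'NR==2 { print int($4) }')
        if [ "${OLD:-0}" -ge "$THRESHOLD" ]; then
            echo "Heap old generation at ${OLD}% for PID $PID" |
            mail -s "Tomcat heap warning" someone@example.com
        fi
    fi
fi
```

Unlike the log-watching approach, this fires while the JVM is still healthy, giving you time to schedule a restart instead of reacting to a crash.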