We have an output file that is updated from time to time (every x seconds)... Can we make a script to check if the file contains a specific pattern? If it does, we need to take an action, like running another script...
Is this possible without making a cronjob to check whether the file contains the pattern or not? I mean, when the input file receives "?????", do the action.
I know grep should do it... but if I want to check the pattern every five seconds, I need to make a cronjob to run the grep command... right?
What I'm asking is whether I can check it automatically, once the file contains this pattern, without any cronjob...
Once the file has the pattern... do something...
I can't use a cronjob because it's not allowed on our server.
Honestly... this is what we are trying not to do... we can't keep checking the file because this will take a lot of time... we need some kind of alert when it happens... what do you suggest?
Really, the problem statement is "check every 'n' units of time". If it has to be checked every 'n' units of time, then in each slice of time the file definitely has to be opened, parsed through, validated or checked against, and then closed again.
That has to be done by some process. Using crond for it is ruled out (as you said). So another process that does a minimal version of crond (a really minimal one) is needed.
What's wrong with the suggestion I posted? Every 'n' units of time, the process wakes up and does its job; the rest of the time it is asleep, so it is neither blocking the run queue of the runnables nor using up a considerable time slice in a not-so-useful while (1) loop or something like that.
Any event tracker or event notifier has to work with the resource (open, read, close) and then inform the process blocking on the notifier/tracker.
Or did you mean something else?
Experts here might post different and better solutions.
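The wake-up-check-sleep idea above can be sketched as a plain shell loop. Everything here is an illustrative placeholder (the temp file, the ERROR pattern, the interval, the commented-out action script), and the background appender exists only so the demo terminates:

```shell
#!/bin/sh
# Minimal sketch of the sleep-and-check loop: wake every INTERVAL
# seconds, grep the file, act once the pattern appears.
WATCHFILE=$(mktemp)
PATTERN='ERROR'
INTERVAL=1

# Demonstration only: simulate something appending the pattern later.
( sleep 2; echo "something ERROR happened" >> "$WATCHFILE" ) &

FOUND=no
while :; do
    if grep -q "$PATTERN" "$WATCHFILE" 2>/dev/null; then
        FOUND=yes
        # /path/to/action_script.sh    # placeholder for the real action
        break                          # stop once the action has fired
    fi
    sleep "$INTERVAL"                  # sleep, don't busy-wait
done
echo "pattern found: $FOUND"
```

Between checks the process sits in sleep, so it costs essentially nothing on the run queue; the price is that a match is noticed up to INTERVAL seconds late.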
If you grep the whole file every time, you will find previous occurrences. If it is a huge logfile, for example, grepping through the whole thing could take longer than five seconds. It sounds exactly like monitoring a log file to me. Again, this is why we have tail -f.
Do you have any way to get the process to do a log rotate, like signalling the process with a SIGHUP? That would make a full-file grep feasible.
The problem is that I'm not an expert in UNIX... what I want is to trigger the script, or an alert, or anything, once the input file contains that pattern. I don't want to make a cron job to check the file... I want this pattern to act like a trigger...
That again boils down to reading the file to know whether the pattern has occurred or not, which has to be done by some process.
How would you expect to be automatically notified about a pattern in a file without actually reading it? That is as good as not reading the file and arriving at the answer "no such pattern in the file".
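For completeness: on Linux the kernel itself can supply that "something changed" event via inotify, so the watching process only opens and reads the file when told to, instead of polling. A sketch, assuming the optional inotify-tools package provides the inotifywait command; the file name is a placeholder, and the block skips gracefully when the tool is absent:

```shell
#!/bin/sh
WATCHFILE=$(mktemp)
RESULT=skipped
if command -v inotifywait >/dev/null 2>&1; then
    # Demonstration only: something modifies the file after a moment.
    ( sleep 1; echo "an ERROR appeared" >> "$WATCHFILE" ) &
    # Block (no polling) until the kernel reports a modification...
    inotifywait -e modify "$WATCHFILE" >/dev/null 2>&1
    # ...and only then open and grep the file.
    grep -q "ERROR" "$WATCHFILE" && RESULT=found
fi
echo "result: $RESULT"
```

Note this doesn't escape the rule above: some process still reads the file; inotify just means it reads only when there is actually something new.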
Yes, that's right. But in most cases the systems are loosely coupled by design, so that each unit performs its own job; you can seldom control, or add functionality to, the process that is creating the log messages.
For example:
It's not obvious that I, from the DW team, could ask the WebServer team to suppress or erase all <example_logs>. It is up to the reporting team to filter those out of the apache/website logs and not use them.
That is the reason I didn't mention that possibility.
In any case, I don't mean to deny what you said. I just assumed (over-assumed) that's not going to be the case.
I think it needs to be said again: tail -f sounds like the ticket. You can combine the tail -f and the grep in a simple Perl script. Whatever the implementation, the idea is to keep the file open and attempt to read another line, say, once per second or once every five seconds. If the read fails, no data has been appended to the file. If it succeeds, there is new data: read and examine it, and if it's a hit, take some action (terminate? send another process a signal? send an email?)
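One shell rendering of that tail-plus-grep idea (a sketch, not a drop-in solution; the log path, the pattern, and the background appender are all demo placeholders). Starting tail at the end of the file means earlier occurrences of the pattern are never re-matched, which addresses the full-file-grep objection above:

```shell
#!/bin/sh
LOG=$(mktemp)
echo "old ERROR line" >> "$LOG"          # pre-existing data, will be skipped

# Demonstration only: simulate a process appending to the log.
( sleep 1; echo "fresh ERROR line" >> "$LOG"
  sleep 1; echo "one more line" >> "$LOG" ) &

# tail -n 0 -f starts at the current end of file and follows growth.
# grep -m 1 (GNU grep) exits on the first match; tail's next write then
# hits the closed pipe and tail exits too, so the pipeline terminates.
HIT=$(tail -n 0 -f "$LOG" | grep -m 1 "ERROR")
echo "matched: $HIT"
# here you would take the action: run a script, send a signal, mail someone
```

The same open-once, read-new-lines-only behaviour is what the Perl version would implement by hand with a loop, a read, and a short sleep.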