and then have that cron'ed to run every hour.... The first sed is needed because between the 1st and 9th of the month there is an extra space in the date. The second sed puts the hour in its own column so awk can match on it. The end file then just has the router names sorted unique.
There has to be an easier/better way to go about this?
This just came to mind again when Shell Life posted this in another thread:
sed -n '/18:/,$ p' filename
and I thought that might be a good way to just search within the previous hour.
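A minimal sketch of that idea, assuming syslog-style "Mon DD HH:MM:SS" timestamps (the file name and log lines here are made up for illustration): the sed range prints from the first line of the target hour through the end of the file.

```shell
# Create a small sample log (illustrative data only)
cat > router.log <<'EOF'
Jun 18 13:17:56 routerA 36806: msg
Jun 18 14:05:01 routerB 36807: msg
Jun 18 14:59:59 routerC 36808: msg
EOF

mPrevHH=14   # the hour to search for; normally derived from date +%H

# Print from the first line stamped with that hour to end of file.
# Anchoring on "^... .. " keeps us on the first timestamp in each line.
sed -n "/^... .. ${mPrevHH}:/,\$ p" router.log
```

Note this prints everything *from* the first match onward, so any later lines with a different hour are included too; that is usually fine when the target hour is the most recent one in the file.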
Shell Life,
Nice script. That might actually fit my needs a little better. That way I can just cron it to run on the hour and it will find everything for the previous hour (take out the egrep and just grep on ${mPrevHH}). I notice you use typeset in a lot of your scripts. I'll do some reading on that and see what I can learn.
Thanks for the input. This forum has been very beneficial in teaching me scripting... hopefully one day I can contribute as much as you guys do.
Edit: Quick question:
What is this part doing?
mFirstPart='^... .. '
Looks like some sort of regexp matching from the start of the line?? Thanks for your help.
It is a regular expression as follows:
1) Beginning of the line.
2) Any three characters.
3) One space.
4) Any two characters.
5) One space.
This is done to make sure the "egrep" is using the first date and not the second one:
Jun 18 14:17:56 routername 36806: Jun 18 17:53:01.088:
Jun 18 13:17:56 routername 36806: Jun 18 15:53:01.088:
Jun 18 12:17:56 routername 36806: Jun 18 17:53:01.088:
Jun 11 17:47:56 routername 36806: Jun 18 01:53:01.088:
Jun 11 17:47:56 routername 36806: Jun 18 13:53:01.088:
Jun 07 14:17:56 routername 36806: Jun 18 00:53:01.088:
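To see the anchoring in action, here is a small sketch that recreates the sample lines above and runs the egrep (mFirstPart and mPrevHH follow the script under discussion; the file name is made up):

```shell
# Recreate the sample log lines from the post above
cat > sample.log <<'EOF'
Jun 18 14:17:56 routername 36806: Jun 18 17:53:01.088:
Jun 18 13:17:56 routername 36806: Jun 18 15:53:01.088:
Jun 18 12:17:56 routername 36806: Jun 18 17:53:01.088:
Jun 11 17:47:56 routername 36806: Jun 18 01:53:01.088:
Jun 11 17:47:56 routername 36806: Jun 18 13:53:01.088:
Jun 07 14:17:56 routername 36806: Jun 18 00:53:01.088:
EOF

mFirstPart='^... .. '   # month (3 chars), space, day (2 chars), space
mPrevHH=13

# Only the line whose FIRST timestamp is in hour 13 matches;
# the "13:53:01" inside the fifth line's message is ignored.
egrep "${mFirstPart}${mPrevHH}:" sample.log
# prints only: Jun 18 13:17:56 routername 36806: Jun 18 15:53:01.088:
```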
As for the code, it can be improved to make sure it is using two digits for the hour:
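One portable way to do that (a sketch, not necessarily Shell Life's exact code): in ksh, `typeset -Z2 mPrevHH` zero-pads on assignment, while `printf` works in any POSIX shell, bash included.

```shell
mHH=09                 # example value; normally mHH=$(date +%H)
mHH=${mHH#0}           # drop a leading zero so arithmetic treats it as decimal
# Previous hour, wrapping past midnight, padded back to two digits
mPrevHH=$(printf '%02d' $(( (mHH + 23) % 24 )))
echo "$mPrevHH"        # 08
```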
That did it .. thanks. I had read that bash offered most everything the other shells do, and then some, in terms of scripting... I guess that isn't the case here, eh?