Hello all,
Searching a text file (a log file) is quite simple:
grep -i texttosearch filename | grep something
What I'm trying to do is filter the result by TAG and remove the duplicate entries.
Like if the log file contains the following text (fields are separated by commas):
20101024_201000:Counter,RESPONSE_FAIL,NODE,ApplicationAccessGroup.ServerGroup.Server.41700,1,47,0;
20101024_201000:Counter,RESPONSE_OK,NODE,ApplicationAccessGroup.Server.41880,1,15,0;
20101024_201000:Counter,RESPONSE_FAIL,TOTAL,Total,25459;
20101024_201000:Counter,RESPONSE_FAIL,TOTAL,Total,1;
20101025_215000:Counter,RESPONSE_FAIL,TOTAL,Total,15459;
Now, after filtering, I'd like to get:
20101024_201000:Counter,RESPONSE_OK,NODE,ApplicationAccessGroup.Server.41880,1,15,0;
20101025_215000:Counter,RESPONSE_FAIL,TOTAL,Total,15459;
So: group by TAG (the highlighted field, e.g. Total) and show only the last line for each TAG.
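One possible approach (just a sketch, assuming the TAG is the 4th comma-separated field, which seems to match the example above): let awk remember the last line seen for each TAG and print one line per TAG at the end.

```shell
#!/bin/sh
# Sample log copied from the example above; fields are comma-separated
# and the TAG is assumed to be the 4th field.
cat > sample.log <<'EOF'
20101024_201000:Counter,RESPONSE_FAIL,NODE,ApplicationAccessGroup.ServerGroup.Server.41700,1,47,0;
20101024_201000:Counter,RESPONSE_OK,NODE,ApplicationAccessGroup.Server.41880,1,15,0;
20101024_201000:Counter,RESPONSE_FAIL,TOTAL,Total,25459;
20101024_201000:Counter,RESPONSE_FAIL,TOTAL,Total,1;
20101025_215000:Counter,RESPONSE_FAIL,TOTAL,Total,15459;
EOF

# Keep only the last line seen for each TAG (field 4), preserving the
# order in which each TAG first appeared in the file.
awk -F',' '
    !($4 in last) { order[++n] = $4 }   # remember first-seen order of each TAG
    { last[$4] = $0 }                   # overwrite: the last line for a TAG wins
    END { for (i = 1; i <= n; i++) print last[order[i]] }
' sample.log
```

You can pipe the output of your grep into the same awk instead of reading a file. Note that a plain `for (t in last)` loop would print the groups in an unspecified order, which is why the sketch keeps an `order` array.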
Is it possible to do this in a simple way, or is it hard to do?
Hope my goal is clear!