Remove Similar Lines from a File

I have a log file "logreport" that contains several lines as seen below:

04:20:00 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping
06:38:08 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping
07:11:05 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping

I would like to edit the report to remove entries that report duplicate events (the event being the portion of the line after the timestamp). I have no knowledge of what the events will be or how long they are. I am trying to produce output close to what is shown below:

04:20:00 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping
This Error was reproduced 2 times

Try to adapt the following awk program.
steve.awk:

{
   # Lines that contain no "[" have no event id: keep them as-is.
   if (match($0, /\[/) == 0) {
      Lines[++LinesCount] = $0;
      LineIds[LinesCount] = "";
      next;
   }

   # The event id is everything from the first "[" to the end of the line.
   id  = substr($0, RSTART);

   # Keep only the first line seen for each event id, but count every occurrence.
   if (++Ids[id] == 1) {
      Lines[++LinesCount] = $0;
      LineIds[LinesCount] = id;
   }
}

END {
   # Print each kept line, followed by a repeat count for events seen more than once.
   for (i=1; i<=LinesCount; i++) {
      print Lines[i];
      if (id = LineIds[i]) {
         if (Ids[id] > 1) {
            print "This Error was reproduced", Ids[id], "times";
         }
      }
   }
}

Execute the awk program:

awk -f steve.awk logreport
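
Run against the three sample lines above, it should print something like:

04:20:00 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping
This Error was reproduced 3 times

Note that the count is the total number of occurrences of the event (3 here), not the number of extra copies.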

Jean-Pierre.

You can use --

cat logfile | sort | uniq -c > newlogfile
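
On the sample lines this will not actually collapse anything, because the differing timestamps keep every line unique; the output would look something like:

   1 04:20:00 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping
   1 06:38:08 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping
   1 07:11:05 /usr/lib/snmp/snmpdx: [ID 702911 daemon.error] Agent snmpd appeared dead but responded to ping

To merge them, uniq has to be told to skip the leading timestamp field when comparing lines (see the follow-up below).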

Thanks
Namish

I ended up using the code:

cat logfile | sort | uniq -c -n6 >> logreport

For the uniq command, the -c flag prints the number of occurrences before each line, while the -n6 flag ignores the first 6 fields when comparing lines. The end result is exactly what I needed. Thank you for your help, everyone.
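
If your uniq does not accept that numeric field-skip form, the POSIX spelling is -f N; a sketch of the equivalent command (assuming the same 6 leading fields) would be:

sort logfile | uniq -c -f 6 >> logreport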

cat file1 | sort | uniq -c | sed 's/^ *[0-9]* //' > file2