Finding duplicate data in a file

A program named LOGGEDON returns output like:

Ref_num  IP Address   Logged on         User
12000    10.10.12.12  12-02-2002 11:00  john
12004    10.10.12.13  12-03-2002 14:00  mary
12012    10.10.12.14  12-03-2002 11:30  bob
12024    10.10.12.12  12-03-2002 09:00  john
12088    10.10.12.14  12-01-2002 21:00  bob

Another program, REMUSER, terminates a session given its "Ref_num".

e.g. REMUSER 12004 would kill mary's session

Notice that the same IP Address can have more than one logged-on session. If there is more than one occurrence of the same "IP Address", I want to kill the oldest "Logged on" session.

e.g.

REMUSER 12000 <<< kills john's oldest session
REMUSER 12088 <<< kills bob's oldest session

  1. The script would call LOGGEDON and redirect its output to a temp file.

  2. Each duplicate IP Address would be found in the temp file.

  3. REMUSER would be called with the "Ref_num" of the oldest duplicate entry.
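A minimal sketch of those three steps in plain sh, assuming the sample LOGGEDON output above. The here-document stands in for the real LOGGEDON, duplicates are found with uniq -d, and REMUSER is only echoed so nothing is actually killed:

```shell
# stand-in for the real LOGGEDON command (sample data from the question)
loggedon() {
cat <<'EOF'
Ref_num IP Address Logged on User
12000 10.10.12.12 12-02-2002 11:00 john
12004 10.10.12.13 12-03-2002 14:00 mary
12012 10.10.12.14 12-03-2002 11:30 bob
12024 10.10.12.12 12-03-2002 09:00 john
12088 10.10.12.14 12-01-2002 21:00 bob
EOF
}

tmp=$(mktemp)
loggedon | tail -n +2 > "$tmp"           # step 1: save output, skip the header

# step 2: IP addresses (field 2) that occur more than once
dups=$(awk '{print $2}' "$tmp" | sort | uniq -d)

# step 3: for each duplicate IP, find the Ref_num of its oldest entry.
# The date is MM-DD-YYYY, so rearrange it to YYYYMMDD so sort orders it.
kill_list=""
for ip in $dups; do
    ref=$(awk -v ip="$ip" '$2 == ip {
              split($3, d, "-")
              print d[3] d[1] d[2], $4, $1   # sortable date, time, Ref_num
          }' "$tmp" | sort | head -1 | awk '{print $3}')
    kill_list="$kill_list$ref "
    echo REMUSER "$ref"                  # replace echo with the real REMUSER
done

rm -f "$tmp"
```

Run against the sample data this prints REMUSER 12000 and REMUSER 12088, matching the expected result above.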

Check out the man page for the sort command. You may want to sort first by the last field (to get all of the same usernames together) and then use the -um options to sort again on the date/time fields. The oldest entry for each name should then be last (a loop could grab the last entry of each group and send its Ref_num to your program).
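One way to sketch that sort-based idea, though grouping on the IP address field (which is what the question actually asks about) rather than the username. Since the "Logged on" date is MM-DD-YYYY, it is rewritten to YYYYMMDD first so a plain sort orders it; sorting the date/time descending within each IP puts the oldest entry last in each group, and an awk pass picks it out. The sample data is embedded and REMUSER is only echoed:

```shell
# sample LOGGEDON output from the question, minus the header line
data='12000 10.10.12.12 12-02-2002 11:00 john
12004 10.10.12.13 12-03-2002 14:00 mary
12012 10.10.12.14 12-03-2002 11:30 bob
12024 10.10.12.12 12-03-2002 09:00 john
12088 10.10.12.14 12-01-2002 21:00 bob'

# Rewrite each line as: IP  YYYYMMDDhh:mm  Ref_num, then sort by IP
# ascending and date/time descending, so the oldest session per IP is
# the last line of its group.
oldest=$(printf '%s\n' "$data" |
    awk '{split($3, d, "-"); print $2, d[3] d[1] d[2] $4, $1}' |
    sort -k1,1 -k2,2r |
    awk '{
        if ($1 == ip) n++                 # same IP as the previous line
        else { if (n > 1) print ref; ip = $1; n = 1 }
        ref = $3                          # Ref_num of the last line seen
    } END { if (n > 1) print ref }')

for ref in $oldest; do
    echo REMUSER "$ref"                   # replace echo with the real REMUSER
done
```

On the sample data this selects Ref_nums 12000 and 12088, the two oldest duplicate sessions.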