Removing duplicate lines in a file (they are not consecutive)

I have duplicate records in a file, but they are not consecutive. I want to remove the duplicates using a script.

Can someone help me write a ksh script for this task?

The example file looks like this:

1234
5689
4556
1234
4444
5555
6889
5689
7898
1234

From the above file I want to get rid of the duplicates of 1234 and 5689.

thanks

srini

awk '!x[$0]++' file.old >file.new

or

sort -u -o file.new file.old
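
Note that the two differ in ordering: the awk one-liner keeps the first occurrence of each line in the original order, while sort -u sorts the output. Roughly, the awk idiom is shorthand for the following (the same logic written out with comments, using the file names above):

awk '{
    if (x[$0] == 0)   # first time this exact line has been seen
        print         # keep it
    x[$0]++           # count the line so later copies are skipped
}' file.old > file.new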

nawk '!a[$0]++' file.txt

To eliminate all the duplicates:

sort <filename> | uniq -u

To suppress only the duplicates:

sort <filename> | uniq 
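
On the sample numbers above these two behave differently (a quick illustration, assuming the data is in file.old):

sort file.old | uniq -u     # 4444 4556 5555 6889 7898 -- the repeated 1234 and 5689 vanish entirely
sort file.old | uniq        # one copy of every number, including a single 1234 and a single 5689

For the original question (keep one copy of each record), the second form, sort -u, or the awk one-liner above all give the wanted result.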

Thank you everyone, it works fine.

But I have another problem now. The result file comes out as:

Sep 28 11:09:33> Error on trans: 2
Sep 28 12:10:42> Error on trans: 1
Sep 28 12:10:43> Error on trans: 1
Sep 28 12:14:43> Error on trans: 1
Sep 28 12:14:44> Error on trans: 1

I want the output to be as follows:

Sep 28 11:09:33> Error on trans: 2
Sep 28 12:10:42> Error on trans: 1

Can someone help me with this too?

thanks
Srini

nawk '!a[$NF]++' file.txt

hi vgersh99,

Thanks for your help,

Can you please explain what actually happens when we execute the above command?

i.e. nawk '!a[$NF]++' file.txt

Thanks
Srini

  1. What's unique for all your records/lines?
  2. Where does this unique field occur in the record/line?
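
For reference, here is a commented sketch of what that one-liner does. $NF is the last field of the line, which in your log is the trans number after "Error on trans:", and that is the key the duplicates are judged on:

nawk '{
    if (a[$NF] == 0)   # first line seen with this trans number ($NF = last field)
        print          # keep it
    a[$NF]++           # remember the trans number; later lines with the same one are skipped
}' file.txt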