Removing duplicates

Hi all,

In Unix, we have a file from which we need to remove duplicates based on one specific column.
Can anybody tell me the command?

Example:
file1
id,name
1,ww
2,qwq
2,asas
3,asa
4,asas
4,asas

Expected output:
1,ww
2,qwq
3,asa
4,asas

awk -F, '{print $2}' file1 | sort | uniq >file2

two "tac":

tac file | awk -F, '++a[$2]==1' | tac
1,ww
2,qwq
3,asa
4,asas
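
For anyone wondering how this works: the first tac reverses the file, ++a[$2]==1 is true only the first time each column-2 value is seen (which, in the reversed stream, is its last occurrence in the original order), and the second tac restores the order. If tac is not available, a two-pass awk sketch should behave the same, assuming the file can be read twice (last is just an illustrative array name):

# pass 1: record the line number of each key's last occurrence
# pass 2: print a line only if it is that last occurrence
awk -F, 'NR==FNR {last[$2]=FNR; next} FNR==last[$2]' file file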

Thanks for your suggestion; I have a question.
After running the above command we get only one column in the file, but I want all the columns.

On Solaris, use nawk instead of awk:

awk -F","  '!_[$1]++' infile.txt > outfile.txt

$ cat file1
id,name
1,ww
2,qwq
2,asas
3,asa
4,asas
4,asas

$ sort -t, file1 -k 2,2 | uniq -u >file2

$ cat file2
3,asa
2,asas
id,name
2,qwq
1,ww
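
A caveat with this approach: uniq -u prints only lines that are not repeated at all, which is why 4,asas is missing from file2 above, and the sort also loses the original order. If reordering is acceptable, sort -u on the key field is closer to what was asked, since it suppresses all but one of each set of lines with equal keys:

sort -t, -k2,2 -u file1 > file2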

You can also use awk alone:

awk -F, '!_[$2]++' in >out
cat in
1,ww
2,qwq
2,asas
3,asa
4,asas
4,asas

cat out
1,ww
2,qwq
2,asas
3,asa
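
Since the OP's file1 has a header line, a small variant keeps it while still deduplicating on the name column (seen is just an illustrative array name):

# print line 1 (the header) unconditionally, otherwise print
# only the first occurrence of each column-2 value
awk -F, 'NR==1 || !seen[$2]++' file1 > file2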

sort in | uniq -u

---------- Post updated at 12:45 AM ---------- Previous update was at 12:42 AM ----------

--- Oops, I didn't read the question correctly... forget my answer.