Removing duplicates from new file

I have two files like this:

I want to remove/delete all the duplicate lines in file2, viz. unix, unix2, and unix3. I have also tried a previous post, but in that one the complete line must be identical. In this case I have to check only the first column, regardless of the content of the succeeding columns.

Any attempt from your side?

I have tried this one from a previous post:

egrep -v $(cat file2.csv | tr '\n' '|' | sed 's/.$//') file1.csv

but it's not working, as I want to remove all the duplicate records from file2, treating column 1 as the unique key.
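
One way to keep a grep-based approach while comparing only the first column would be something along these lines (untested sketch; col1_patterns.txt is just a temporary file name chosen here, and it assumes every line has a comma after the first field and that the keys contain no regex metacharacters):

# Turn file1's first column into anchored patterns like ^unix, then
# drop any file2 line whose first column matches one of them.
cut -d, -f1 file1.csv | sed 's/^/^/; s/$/,/' > col1_patterns.txt
grep -v -f col1_patterns.txt file2.csv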

Untested, but this seems pretty straightforward:

awk -F, '
# First pass (file1.csv): FNR == NR is only true while reading the first file
FNR == NR {
	d[$1]	# referencing d[$1] is enough to create the key
	next
}
# Second pass (file2.csv): print lines whose first column was never seen in file1.csv
!($1 in d)' file1.csv file2.csv

As always, if you want to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
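
For illustration only, with hypothetical file contents (the actual files were not posted; only the unix, unix2 and unix3 keys come from the question), a run might look like this:

$ cat file1.csv
unix,10
unix2,20
unix3,30
$ cat file2.csv
unix,99
linux,5
unix2,42
solaris,7
$ awk -F, 'FNR == NR { d[$1]; next } !($1 in d)' file1.csv file2.csv
linux,5
solaris,7

Every file2 line whose first column also appears as a first column in file1 is dropped; the remaining columns are never compared.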