Getting Unique values in a file

Hi,

I have a file like this:

Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321

I would like to find only the unique values in the file and get the following output:

Some_String_Here 123 321 3432 3221 557 886

I am trying to get this done using awk. Can someone please point me towards the right direction?

Edit: Ahaa... I figured out one solution:

{
        # count how many times each field occurs
        for (i = 1; i <= NF; i++) {
                uni[$i]++;
        }
        # print count and value; note the subscript uni[i] --
        # a bare "uni" is the whole array and is an error in awk
        for (i in uni) {
                print uni[i], i;
        }
}

Posting it so that someone else might find use of it...
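For quick testing, here is a sketch of the same counting logic as a one-liner against the sample line. One deliberate change: the print loop is moved into an `END` block, so it runs once after all input instead of once per line (the in-script version above happens to work because the sample file has a single line). Also note `for (i in uni)` visits keys in an unspecified order.

```shell
# Sketch: count each field, then print "count value" pairs once at the end
out1=$(echo 'Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321' |
awk '{ for (i = 1; i <= NF; i++) uni[$i]++ }
     END { for (i in uni) print uni[i], i }')
printf '%s\n' "$out1"
```

The output order varies between awk implementations, but the pairs themselves (e.g. `6 321`, `3 123`) are deterministic.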

Printing only the unique values

{
        for (i = 1; i <= NF; i++) {
                uni[$i]++;
        }
        # uni[i], not uni: comparing the whole array to 1 is an error
        for (i in uni) {
                if (uni[i] == 1) {
                        print i;
                }
        }
}
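Worth spelling out: with the count test `uni[i] == 1` (the subscript is required; a bare `uni` is the whole array and will error), this prints only values that occur *exactly once* in the input, which is different from printing each distinct value once. A quick sketch against the sample line, with the print loop in `END`:

```shell
# Keeps only fields whose count is exactly 1; 123 and 321 are dropped entirely
out2=$(echo 'Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321' |
awk '{ for (i = 1; i <= NF; i++) uni[$i]++ }
     END { for (i in uni) if (uni[i] == 1) print i }')
printf '%s\n' "$out2"
```

So for the sample input this yields `Some_String_Here`, `3432`, `3221`, `557`, and `886` (in some order), but not `123` or `321`.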

Ah... thanks for the pointer :) But again, if I am printing out just the keys, wouldn't that be the same as your solution? Just asking for clarification, though...

Or:

$ print Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321|
awk 'END{printf "\n"}!_[$1]++' ORS=" " RS=" " 
Some_String_Here 123 321 3432 3221 557 886 
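A note on how that one-liner works, in case it looks cryptic: `RS=" "` makes every space-separated word its own record, `_[$1]++` counts sightings in the array `_`, and the leading `!` makes the pattern true only on a word's first sighting, triggering the default action (print the record); `ORS=" "` rejoins the output with spaces, and the `END` block supplies the final newline. Unlike `for (i in array)`, this preserves input order. The same pipeline with the portable `echo` (`print` as used above is a ksh/zsh builtin):

```shell
# Each word becomes a record; a word is printed only on its first occurrence
out3=$(echo 'Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321' |
awk 'END { printf "\n" } !_[$1]++' ORS=" " RS=" ")
printf '%s\n' "$out3"
```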

With Perl:

$ print Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321|
perl -lane'print join" ",grep!$_{$_}++,@F'
Some_String_Here 123 321 3432 3221 557 886

Thank you so much... :)

nawk '{
        # remember every field seen on any line
        for (i = 1; i <= NF; i++)
                arr[$i] = 1
}
END {
        # print each distinct value once (order is unspecified)
        for (i in arr)
                printf("%s ", i)
        print ""
}' filename
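One caveat with the `nawk` version: `for (i in arr)` visits keys in an unspecified order, so the values may not come out in the order they appeared in the file. A sketch of an order-preserving variant, which prints each value the first time it is seen instead of collecting everything for an `END` loop:

```shell
out4=$(echo 'Some_String_Here 123 123 123 321 321 321 3432 3221 557 886 321 321' |
awk '{ for (i = 1; i <= NF; i++)
           if (!seen[$i]++)        # true only on the first sighting of this field
               printf("%s ", $i)
     }
     END { print "" }')
printf '%s\n' "$out4"
```

This reuses the `!seen[$i]++` idiom from the one-liner earlier in the thread, applied per field rather than per record.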