Awk to print distinct col values

Hi Guys...
I am a newbie to awk and would like a solution to what is probably a simple practical question.

I have a test file that looks like this:

1,2,3,4,5,6
7,2,3,8,7,6
9,3,5,6,7,3
8,3,1,1,1,1
4,4,2,2,2,2

I would like to know how awk can get me the distinct values, say for example in col2, and figure out that the distinct values are only 2, 3 and 4.
I need to check my actual real-time medical file for the distinct service dates, across a relatively big 200,000 records.

And one more question: how can I use awk to print out records which don't meet a specific criterion? Like...
E.g.: I want to see only those records where the distinct col2 values are less than 10, and see the actual distinct values to figure out why they are < 10.

I know I can always go for some fancy ETL tools to achieve complex requirements (of course this requirement is not complex anyway) and play around with the data, but I want to use the power of awk/sed to accomplish the task.

Help is highly appreciated.
Thank you very much
-Anduzzi

Could someone help me with this, please?

Thanks !!!

Using just standard shell utilities,

cut -d , -f2 file | sort | uniq

The sort is a prerequisite for uniq, which is kind of unfortunate if there is a lot of data.
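
Incidentally, sort -u folds the uniq step into the sort itself (it still has to sort the data, of course):

cut -d , -f2 file | sort -u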

Using awk,

awk -F , '{ a[$2]++ } END { for (b in a) { print b } }' file

The array a counts the number of occurrences of each distinct value in the second field. We don't use the actual count of occurrences, just the keys (distinct values in the second field) in the final print, but that's obviously easy to change if you want to see the counts, too.
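
For instance, a small variation on the above that prints each distinct value together with its count:

awk -F , '{ a[$2]++ } END { for (b in a) { print b, a[b] } }' file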

The array could collect something other than counts; for example, a[$2]=$0 would remember the latest line with a particular value in field $2, for each distinct value in field $2. Collecting more complex data, such as all the lines with a particular value, is doable but slightly more involved -- you could append to the existing data, as in the sketch below. At that point, though, perhaps just collecting the keys you want and then doing another pass to extract only those records would be more efficient if there is a lot of data.
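
A rough sketch of the append idea (untested), collecting every line for each distinct value in field $2 and printing them grouped at the end:

awk -F , '{ a[$2] = (a[$2] == "" ? $0 : a[$2] ORS $0) } END { for (b in a) { print "== " b " =="; print a[b] } }' file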

awk:

col=$1
nawk -v col="$col" 'BEGIN{FS=","}
{
	arr[$col]++
}
END{
	print "Column " col " has these distinct values:"
	for (i in arr)
		print i
}' file
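
Assuming you save that as a shell script (the name distinct.sh here is just an example), you would pass the column number as the first argument:

sh distinct.sh 2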

perl:

$col=shift;
open(FH,"<file");
while(<FH>){
	@arr=split(",",$_);
	$hash{$arr[$col-1]}++;
}
close(FH);
print "Colum $col has distinct value:\n";
for $key (keys %hash){
	print $key,"\n";
}

The Perl solution can be expressed even more succinctly as a one-liner.

perl -laF, -ne '{ $a{$F[1]}++ } END { print for keys %a }' file

The OP was specifically asking for an awk solution, though.

Hi Guys...
Thank you (era and cherry) very much for the solutions... I could use the array solution for my requirement.

Well, how about incorporating a specific condition in awk to retrieve the records which DON'T meet the criterion mentioned in my original post? Any thoughts?

-Anduzzi

Print lines where column 2 < 10

awk -F, '$2<10' file

Conversely...

awk -F, '!($2<10)' file

Hi...
Well, once the condition is met, is this the best way to print specific cols like $1 or $4 or whatever?

awk -F, '!($2<10)' unique | awk -F, '{print $1,$2}'

-Anduzzi

You can get that by lightly tweaking era's awk solution.

awk -F , '{ a[$2] = $1 } END { for (b in a) { if (b+0 < 10) print a[b], b } }' file
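
As for the two-stage pipe in the previous post, the condition and the print can also live in a single awk when all you want is specific columns from the matching records:

awk -F, '!($2<10){print $1, $2}' unique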

Hi Shamrock,
I am a little confused by your solution.
Could you please explain what exactly your one-liner does?
Also, how would you customize your one-liner to print multiple desired cols like $1, $10, $100 once the condition is met?

-Anduzzi.

It stores the unique values in col2 along with their counterparts from col1:

a[$2] = $1

After going through the entire file, it prints out only those entries where col2 < 10:

if (b+0 < 10) print a[b], b

"Store first Print later" to print mutliple desired cols like $1, $10, $100 once the condition is met append them to the assignment list.

a[$2] = $1 FS $10 FS $100
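
Putting the pieces together, the whole one-liner would then look something like this (untested sketch, same logic as above):

awk -F , '{ a[$2] = $1 FS $10 FS $100 } END { for (b in a) { if (b+0 < 10) print a[b], b } }' file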

Shamrock,
Thank you very much for the explanation.
Got one more question...
How do we replace a NULL with a SPACE using awk?...
In other words, how does awk interpret NULs and spaces? I'm wondering if it can do this replacement in hexadecimal (%x) mode... please suggest!

Thanks !!!

How about some sample data, input and output? That would help a great deal.

Hi,
I was trying to help my mate with this issue and unfortunately he is off today, so I don't have the test data corresponding to this.
I can provide the data next week.

Thanks for the quick turnaround, though.
-Anduzzi

RMZVIEW tester.txt

/* Header*/
00000000 ...........ICX080131080229.

Hexa : 000000000004453333333333330
000000000009380801310802290

00000301 80169294950 295334PBA
Hexa : 333333333332223333335442222
801692949500002953340210000

00000602 73329204490 000000000
Hexa : 333333333332223333333332222
733292044900000000000000000

/* Trailer*/
00003311 ............0000000004953+000000443502058

                         11111111111133333333333332333333333333333
                         AAAAAAAAAAAA0000000004953B000000443502058

I just want to replace the hex 00 (NUL) in the header and 1A (Substitute, Ctrl-Z) in the trailer with a SPACE (hex 20), and make sure that we preserve the fixed record length.

Thanks !!!

Another question, as I'm still at a loss as to what data you want replaced: highlight the data in the input that needs to be replaced while preserving the field length and formatting... I see no 1A in the trailer.

Oh OK... sorry about that.

/* Header*/
...........ICX080131080229.

Hexa : 000000000004453333333333330
000000000009380801310802290

00000301 80169294950 295334PBA
Hexa : 333333333332223333335442222
801692949500002953340210000

00000602 73329204490 000000000
Hexa : 333333333332223333333332222
733292044900000000000000000

/* Trailer*/
............0000000004953+000000443502058

11111111111133333333333332333333333333333
AAAAAAAAAAAA0000000004953B00000044350205

Please read the hex codes from top to bottom (my tool prints them that way): the 1A highlighted in red is the Substitute ASCII character (Ctrl-Z),
and similarly the 00 in the header is the NUL character.

Hope it's clear.

Thanks !!!

So you want the following replacements.

00000000 ...........ICX080131080229
WITH
...........ICX080131080229

and

00003311 ............0000000004953+000000443502058
WITH
............0000000004953+000000443502058

Not exactly.
I want something like:

...........ICX080131080229

WITH
ICX080131080229

............0000000004953+000000443502058

WITH

        0000000004953+000000443502058

As you can observe, the dots (which are actually NUL (hex 00) and hex 1A respectively) should be replaced with SPACES (hex 20).

Thanks !!!

To replace 1A in trailer with space...

awk '{gsub("\032"," ");print $0}' file

To replace NULL in header with space...

awk '{gsub("\000"," ");print $0}' file

You can combine the two awks into one...

awk '{gsub("\000"," ");gsub("\032"," ");print $0}' file

To see the non-printing characters embedded in your data, pass it through cat -vet and post its output here.

cat -vet file
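
In that output, NUL bytes show up as ^@, the 1A (Ctrl-Z) characters as ^Z, tabs as ^I, and a $ marks the end of each line.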