Count number of unique values in each column of array

What is an efficient way of counting the number of unique values in a 400 column by 1000 row array and outputting the counts per column, assuming the unique values in the array are:

A, B, C, D

In other words, the output should look like:

Value    COL1    COL2    COL3
A        50      51      52
B        95      23      12
C        51      95      85
D        32      60      20

Thanks in advance

Here is an awk approach:

awk '
        BEGIN {
                n = split ( "A B C D", T )              # the values to be counted
        }
        {
                for ( i = 1; i <= NF; i++ )
                        R[i FS $i] += 1                 # count occurrences of $i in column i
        }
        END {
                printf "VAL\t"                          # header row: VAL COL1 COL2 ...
                for ( i = 1; i <= NF; i++ )
                        printf "COL%d\t", i
                printf "\n"

                for ( j = 1; j <= n; j++ )              # one output row per value
                {
                        printf "%c\t", T[j]
                        for ( i = 1; i <= NF; i++ )
                                printf "%d\t", R[i FS T[j]]
                        printf "\n"
                }
        }
' file

If you are looking for a report on all values that appear in data (not just A B C or D) you could also try the following:

awk '
{
   mc = NF > mc ? NF : mc            # remember the widest record seen
   for(i=NF; i; i--) {
      T[$i]                          # record every distinct value that appears
      C[i FS $i]++                   # count occurrences of $i in column i
   }
}
END {
  printf "Value"
  for(i=1; i<=mc;i++) printf "\tCOL%d",i

  for(v in T) {                      # one output row per distinct value
     printf "\n%s", v
     for(i=1; i<=mc;i++) printf "\t%d",C[i FS v]
  }
  printf "\n"
}' infile

Thank you, it works except for certain columns: starting at column 13 and every 13 columns thereafter, it gives an incorrect value.

Here is the output for the 1st 26 columns. Columns 13 and 26 have an incorrect count of the 1/1 values. The rest looks good.

Value    1/1    0/0    0/1
COL1    4    61    18
COL2    6    63    14
COL3    2    59    22
COL4    3    64    16
COL5    2    60    21
COL6    2    61    20
COL7    2    64    17
COL8    0    60    23
COL9    2    56    25
COL10    2    66    15
COL11    2    63    18
COL12    1    62    20
COL13    53    63    15
COL14    2    54    26
COL15    1    63    18
COL16    2    66    15
COL17    4    65    16
COL18    2    63    16
COL19    6    59    20
COL20    2    55    22
COL21    0    63    18
COL22    6    67    16
COL23    4    60    17
COL24    3    57    22
COL25    3    55    25
COL26    53    62    18

Thanks Yoda, works except with the same problems as Chubler_XL's script above.

Given the code suggested by Chubler_XL and Yoda, it is hard to imagine that anything is different in the way counts are accumulated for column numbers that are multiples of 13.

Can you provide us with sample data that demonstrates the inaccurate counts that you have reported?

What operating system (including release number) are you using?

Which version of awk are you using?
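
(For reference: GNU awk reports its version with awk --version, while some other implementations, such as mawk, use awk -W version instead, and a few have no version option at all.)

awk --version      # gawk
awk -W version     # mawk and some others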

Sure: for awk I have GNU Awk 4.1.3, the OS is Ubuntu 16.04.3 LTS, and I have attached the actual text file I used. Also, I stand corrected; it seems that it is not multiples of 13 but many more columns, seemingly at random, that are off.

When I slightly modify the code suggested by Chubler_XL to be:

awk '
{ 
   mc = NF > mc ? NF : mc
   for(i=NF; i; i--) {
      T[$i]
      C[i FS $i]++
   }
}
END {
  printf "Value"
  for(i=1; i<=mc;i++) printf "\tCOL%d",i

  for(v in T) {
     printf "\n%s", v
     for(i=1; i<=mc;i++) printf "\t%d",C[i FS v]
  }
  printf "\n"
}' a2.txt

and store this in a file named Chubler_XL, make it executable, and run the command:

./Chubler_XL > Chubler_XL.out

and I slightly modify the code suggested by Yoda to be:

awk '
        BEGIN {
                n = split ( "./. 0/0 0/1 1/1", T )
        }
        {
                for ( i = 1; i <= NF; i++ )
                        R[i FS $i] += 1
        }
        END {
                printf "VAL\t"
                for ( i = 1; i <= NF; i++ )
                        printf "COL%d\t", i
                printf "\n"

                for ( j = 1; j <= n; j++ )
                {
                        printf "%s\t", T[j]
                        for ( i = 1; i <= NF; i++ )
                                printf "%d\t", R[i FS T[j]]
                        printf "\n"
                }
        }
' a2.txt

and store this in a file named Yoda, make it executable, and run the command:

./Yoda > Yoda.out

and I write the code:

awk -v line_count="$(wc -l < a2.txt)" '
function check() {
	printf("Checking fields 2 through %d in file: %s\n", NF, f)
	for(i = 2; i <= NF; i++)
		if(c[i] != line_count)
			printf("file %s: field %d count %d\n", f, i, c[i])
	split("", c)
}
FNR == 1 {
	line_count += 0
	if(f == "")
		printf("Evaluating output produced from %d lines in a2.txt\n",
		    line_count)
	else
		check()
	f = FILENAME
	next
}
{	for(i = 2; i <= NF; i++)
		c[i] += $i	# sum each output column; each should total line_count
}
END {	check()
}' *.out

and store that in a file named counter, make it executable, and run it, I get the output:

Evaluating output produced from 83 lines in a2.txt
Checking fields 2 through 305 in file: Chubler_XL.out
Checking fields 2 through 305 in file: Yoda.out

which shows that the sums of the values for each of the 304 fields do indeed equal the number of lines found in the file you attached in post #6.

I see no indication that either of these suggestions is producing incorrect results, although neither of them produces output that is at all close to the output you showed us in post #4. I do note that the output you showed us in post #4 only shows counts for the three values "0/0", "0/1", and "1/1"; but the data in a2.txt also includes some entries with the value "./.", which is included in the output produced by the code Chubler_XL suggested and in the output produced by the code Yoda suggested (after changing it to look for those four values instead of the values "A", "B", "C", and "D" that you gave in post #1).
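
(As an aside, not part of the original exchange: a quick way to see exactly which distinct values appear anywhere in a file like a2.txt is a one-liner along these lines.)

awk '{ for(i = 1; i <= NF; i++) seen[$i] }
END { for(v in seen) print v }' a2.txt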

If you'd like to show us the code you used to produce the output for the first 26 columns shown in post #4, maybe we can help you figure out why that code failed to correctly interpret the output produced by Chubler_XL's code or Yoda's code.

Note that the output produced by the code suggested by Yoda and Chubler_XL in posts #2 and #3 is in the format specified in post #1. To get expanded output in a format closer to that shown in post #4, with totals for each column and for each row as a sanity check (and *** at the end of any line where a check fails), one could try the following:

awk '
{	for(i = 1; i <= NF; i++) {
		c[i, $i]++			# count value $i in column i
		v[$i]				# remember every distinct value seen
	}
}
END {	printf("Value")
	for(i in v)
		printf("\t%s", i)
	printf("\ttotal\n")
	for(i = 1; i <= NF; i++) {
		printf("COL%d", i)
		t = 0
		for(j in v) {
			printf("\t%d", c[i, j])
			t += c[i, j]
			C[j] += c[i, j]
		}
		printf("\t%d%s\n", t, (t == NR) ? "" : "***")
	}
	printf("Total")
	t = 0
	for(i in v) {
		printf("\t%d", C)
		t += C
	}
	printf("\t%d%s\n", t, (t == NR * NF) ? "" : "***")
}' a2.txt

which with the sample input file attached to post #6 produces the output:

Value	./.	0/0	0/1	1/1	total
COL1	0	61	18	4	83
COL2	0	63	14	6	83
COL3	0	59	22	2	83
COL4	0	64	16	3	83
COL5	0	60	21	2	83
COL6	0	61	20	2	83
COL7	0	64	17	2	83
COL8	0	60	23	0	83
COL9	0	56	25	2	83
COL10	0	66	15	2	83
COL11	0	63	18	2	83
COL12	0	62	20	1	83
COL13	0	63	15	5	83
COL14	0	54	26	3	83
COL15	0	63	18	2	83
COL16	1	66	15	1	83
COL17	0	65	16	2	83
COL18	0	63	16	4	83
COL19	2	59	20	2	83
COL20	0	55	22	6	83
COL21	0	63	18	2	83
COL22	0	67	16	0	83
COL23	0	60	17	6	83
COL24	0	57	22	4	83
COL25	0	55	25	3	83
COL26	0	62	18	3	83
COL27	0	55	23	5	83
COL28	0	57	23	3	83
COL29	1	49	28	5	83
COL30	0	52	28	3	83
COL31	0	61	21	1	83
COL32	0	59	22	2	83
COL33	0	56	23	4	83
COL34	0	55	24	4	83
COL35	0	63	19	1	83
COL36	0	57	23	3	83
COL37	0	62	18	3	83
COL38	0	61	21	1	83
COL39	0	59	21	3	83
COL40	0	56	21	6	83
COL41	0	56	24	3	83
COL42	0	60	21	2	83
COL43	0	60	19	4	83
COL44	0	56	23	4	83
COL45	0	63	20	0	83
COL46	0	61	20	2	83
COL47	0	55	27	1	83
COL48	0	60	17	6	83
COL49	0	59	22	2	83
COL50	0	63	16	4	83
COL51	0	64	15	4	83
COL52	0	61	17	5	83
COL53	0	63	17	3	83
COL54	0	61	16	6	83
COL55	1	64	17	1	83
COL56	0	59	20	4	83
COL57	0	58	16	9	83
COL58	0	55	25	3	83
COL59	0	58	18	7	83
COL60	0	57	19	7	83
COL61	0	57	21	5	83
COL62	0	52	23	8	83
COL63	0	55	22	6	83
COL64	0	58	19	6	83
COL65	0	57	19	7	83
COL66	0	59	21	3	83
COL67	0	63	18	2	83
COL68	0	57	22	4	83
COL69	0	55	26	2	83
COL70	0	62	16	5	83
COL71	1	47	29	6	83
COL72	1	58	16	8	83
COL73	0	60	17	6	83
COL74	0	61	19	3	83
COL75	0	60	21	2	83
COL76	0	53	28	2	83
COL77	0	54	22	7	83
COL78	0	57	22	4	83
COL79	0	57	20	6	83
COL80	0	58	21	4	83
COL81	0	54	20	9	83
COL82	0	57	19	7	83
COL83	0	52	21	10	83
COL84	0	61	18	4	83
COL85	0	57	24	2	83
COL86	0	56	22	5	83
COL87	0	54	21	8	83
COL88	0	58	18	7	83
COL89	0	56	22	5	83
COL90	0	60	18	5	83
COL91	0	64	18	1	83
COL92	0	61	21	1	83
COL93	0	58	18	7	83
COL94	0	58	23	2	83
COL95	0	62	20	1	83
COL96	0	63	18	2	83
COL97	0	60	20	3	83
COL98	0	65	18	0	83
COL99	0	57	23	3	83
COL100	0	61	19	3	83
COL101	0	60	19	4	83
COL102	0	55	27	1	83
COL103	0	64	17	2	83
COL104	1	63	17	2	83
COL105	0	61	22	0	83
COL106	0	54	29	0	83
COL107	0	60	22	1	83
COL108	0	61	20	2	83
COL109	0	61	19	3	83
COL110	0	57	23	3	83
COL111	0	58	19	6	83
COL112	0	61	19	3	83
COL113	0	61	20	2	83
COL114	0	58	21	4	83
COL115	0	61	19	3	83
COL116	0	64	17	2	83
COL117	0	60	19	4	83
COL118	0	62	20	1	83
COL119	0	63	18	2	83
COL120	0	60	21	2	83
COL121	0	63	18	2	83
COL122	0	67	15	1	83
COL123	0	61	16	6	83
COL124	0	67	12	4	83
COL125	0	58	21	4	83
COL126	0	58	23	2	83
COL127	0	60	21	2	83
COL128	0	62	20	1	83
COL129	0	60	21	2	83
COL130	0	63	19	1	83
COL131	0	65	13	5	83
COL132	0	62	19	2	83
COL133	0	57	23	3	83
COL134	0	60	22	1	83
COL135	0	68	13	2	83
COL136	0	62	17	4	83
COL137	0	58	20	5	83
COL138	0	65	12	6	83
COL139	0	54	27	2	83
COL140	0	69	12	2	83
COL141	0	58	23	2	83
COL142	0	62	20	1	83
COL143	0	67	14	2	83
COL144	0	62	19	2	83
COL145	0	59	19	5	83
COL146	0	63	17	3	83
COL147	0	60	22	1	83
COL148	0	65	15	3	83
COL149	1	54	25	3	83
COL150	2	55	20	6	83
COL151	0	71	10	2	83
COL152	2	61	16	4	83
COL153	0	64	19	0	83
COL154	0	60	18	5	83
COL155	0	61	19	3	83
COL156	0	63	15	5	83
COL157	1	69	13	0	83
COL158	1	58	20	4	83
COL159	1	60	19	3	83
COL160	1	57	22	3	83
COL161	0	63	20	0	83
COL162	1	56	22	4	83
COL163	0	56	23	4	83
COL164	1	56	21	5	83
COL165	0	55	27	1	83
COL166	0	63	15	5	83
COL167	0	56	23	4	83
COL168	0	62	18	3	83
COL169	0	54	25	4	83
COL170	0	56	22	5	83
COL171	0	63	16	4	83
COL172	0	53	23	7	83
COL173	0	65	16	2	83
COL174	0	60	20	3	83
COL175	0	54	21	8	83
COL176	0	65	13	5	83
COL177	0	66	14	3	83
COL178	0	59	21	3	83
COL179	0	83	0	0	83
COL180	0	53	27	3	83
COL181	0	64	16	3	83
COL182	0	56	23	4	83
COL183	0	56	22	5	83
COL184	0	55	25	3	83
COL185	0	64	19	0	83
COL186	0	64	17	2	83
COL187	0	53	26	4	83
COL188	0	61	20	2	83
COL189	0	59	23	1	83
COL190	0	64	16	3	83
COL191	0	55	22	6	83
COL192	0	68	14	1	83
COL193	0	59	20	4	83
COL194	0	64	16	3	83
COL195	0	64	16	3	83
COL196	0	69	13	1	83
COL197	0	64	16	3	83
COL198	0	58	23	2	83
COL199	0	61	20	2	83
COL200	0	55	24	4	83
COL201	0	59	20	4	83
COL202	0	60	21	2	83
COL203	0	61	20	2	83
COL204	0	58	24	1	83
COL205	0	59	18	6	83
COL206	0	59	22	2	83
COL207	0	58	22	3	83
COL208	0	56	26	1	83
COL209	0	58	21	4	83
COL210	0	66	16	1	83
COL211	0	62	20	1	83
COL212	0	61	20	2	83
COL213	0	60	21	2	83
COL214	0	62	20	1	83
COL215	0	58	22	3	83
COL216	0	62	21	0	83
COL217	0	60	19	4	83
COL218	0	62	20	1	83
COL219	0	60	20	3	83
COL220	0	62	20	1	83
COL221	0	58	23	2	83
COL222	0	55	28	0	83
COL223	0	61	20	2	83
COL224	0	64	17	2	83
COL225	0	55	23	5	83
COL226	0	64	16	3	83
COL227	0	60	16	7	83
COL228	0	59	20	4	83
COL229	0	59	22	2	83
COL230	0	51	32	0	83
COL231	0	60	16	7	83
COL232	0	65	15	3	83
COL233	0	61	21	1	83
COL234	0	66	14	3	83
COL235	0	59	22	2	83
COL236	0	67	13	3	83
COL237	0	58	24	1	83
COL238	0	59	20	4	83
COL239	0	60	22	1	83
COL240	1	64	16	2	83
COL241	0	65	16	2	83
COL242	1	53	27	2	83
COL243	0	65	17	1	83
COL244	0	58	24	1	83
COL245	0	60	18	5	83
COL246	0	61	18	4	83
COL247	0	63	16	4	83
COL248	0	53	26	4	83
COL249	1	59	20	3	83
COL250	0	58	22	3	83
COL251	0	62	20	1	83
COL252	0	62	19	2	83
COL253	0	52	25	6	83
COL254	0	54	25	4	83
COL255	0	68	9	6	83
COL256	0	56	21	6	83
COL257	0	56	22	5	83
COL258	0	63	18	2	83
COL259	0	54	29	0	83
COL260	1	53	16	13	83
COL261	0	55	23	5	83
COL262	1	52	24	6	83
COL263	0	55	20	8	83
COL264	0	48	26	9	83
COL265	0	59	17	7	83
COL266	0	59	18	6	83
COL267	0	56	24	3	83
COL268	0	53	21	9	83
COL269	0	62	17	4	83
COL270	0	55	23	5	83
COL271	0	55	23	5	83
COL272	0	58	20	5	83
COL273	0	56	22	5	83
COL274	0	54	25	4	83
COL275	0	56	22	5	83
COL276	0	50	28	5	83
COL277	2	50	28	3	83
COL278	0	58	20	5	83
COL279	0	59	19	5	83
COL280	0	56	17	10	83
COL281	0	64	13	6	83
COL282	0	60	16	7	83
COL283	0	60	17	6	83
COL284	0	61	14	8	83
COL285	0	54	24	5	83
COL286	0	55	27	1	83
COL287	0	57	21	5	83
COL288	0	56	26	1	83
COL289	0	57	21	5	83
COL290	0	55	24	4	83
COL291	1	52	29	1	83
COL292	1	59	21	2	83
COL293	0	52	30	1	83
COL294	0	60	20	3	83
COL295	0	55	24	4	83
COL296	2	63	16	2	83
COL297	0	57	23	3	83
COL298	0	57	23	3	83
COL299	0	62	20	1	83
COL300	0	57	20	6	83
COL301	0	64	16	3	83
COL302	0	55	23	5	83
COL303	0	64	17	2	83
COL304	0	0	0	83	83
Total	30	18008	6078	1116	25232

The order of the middle columns of the above output may vary with different implementations of awk, but the contents of the columns should be consistent.
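
For what it's worth, since post #6 reports GNU Awk 4.1.3, the value-column order can be pinned down with gawk's PROCINFO["sorted_in"] setting. A minimal gawk-only sketch (not part of the suggestions above):

awk '
BEGIN {	PROCINFO["sorted_in"] = "@ind_str_asc"	# gawk only: for-in loops visit indices in sorted order
}
{	for(i = 1; i <= NF; i++) {
		c[i, $i]++
		v[$i]
	}
}
END {	printf("Value")
	for(j in v)
		printf("\t%s", j)
	printf("\n")
	for(i = 1; i <= NF; i++) {
		printf("COL%d", i)
		for(j in v)
			printf("\t%d", c[i, j])
		printf("\n")
	}
}' a2.txt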

Don, thanks for your time and for Chubler_XL's and Yoda's time; great work and impressive expertise! It works when I copy the code into a file and make it executable, but I do have a dumb question: why does it give weird results when the code is simply copy-pasted into the terminal? (I'm guessing that some parts of the code are not executed.)

Copying text containing <tab> characters into a bash shell with filename completion enabled can cause all sorts of strange things to happen. It may fail for other reasons as well. (I usually use ksh instead of bash.)
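
One simple way to sidestep the paste problem is to save the program with an editor (so the <tab> characters never pass through an interactive prompt) and hand it to awk with -f. A sketch, where counts.awk and counts.out are just assumed names for a file holding any of the programs shown above and its output:

awk -f counts.awk a2.txt > counts.out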

Have you tried running the code I suggested in post #8, which seems to provide output in a format closer to what you now seem to want? The code Yoda and Chubler_XL suggested in posts #2 and #3 is in the format you specified in post #1 (which is totally different from your latest output format).

Again thanks for all you do to help out all the folks out there like myself. Great forum BTW.

I am sure many people in addition to geneticists will find this code useful.

Yes, I like your latest code; however, I need the actual sample names printed instead of the header consisting of COL1, COL2, ....

To do this I am grabbing the header with sample names (columns 10 thru N) from a different file. I use:

awk 'FNR==28 {for(i=9;i<=NF;i++){printf "%s ", $i}; printf "\n"}' a.txt

in the shell right before your code to print a header containing actual sample names which are contained in a.txt. So this is the last half of my code:

# Strip columns 1-9 and write to a2.txt
awk '{for(i=10;i<=NF;i++){printf "%s ", $i}; printf "\n"}' a1.txt > a2.txt
# PRINT HEADER
awk 'FNR==28 {for(i=9;i<=NF;i++){printf "%s ", $i}; printf "\n"}' a.txt
# Print count of 0/0 0/1 1/1 ./. for each sample in the run
awk '
{ 
   mc = NF > mc ? NF : mc
   for(i=NF; i; i--) {
      T[$i]
      C[i FS $i]++
   }
}
END {
  for(v in T) {
     printf "\n%s", v
     for(i=1; i<=mc;i++) printf "\t%d",C[i FS v]
  }
  printf "\n"
}' a2.txt

If I use your latest code, which transposes the output, I would have to modify my code (highlighted above) to also transpose the header containing the sample names. Any ideas?

I am glad that you find this forum useful. We all hope to help each other learn how to better use the tools that are available on our systems.

Can we see the first half of your code, samples of your a.txt and a1.txt, and the output that you are hoping to produce from those input files?

Note that you say you're printing headers from fields 10 through N of the 28th line of a.txt, but your code is printing headers starting in field 9 (not 10). What is the field separator in that file?

Note also that the three awk commands you have shown us from the last half of your code could more efficiently be done with a single awk command without needing to take filesystem space for the file a2.txt (unless there is some other reason why you need to have that as a separate file).
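
As a rough illustration of that point (a sketch only, not tested against the actual data): the header extraction and the per-column counting could be folded into one awk invocation that reads a.txt and a1.txt directly, with no intermediate a2.txt. It assumes tab-separated input and that the sample data starts in field 10 of both files; the field-9-versus-10 question above still needs to be settled:

awk -F "\t" '
NR == FNR {				# first file (a.txt): remember the sample names from line 28
	if (FNR == 28)
		for (i = 10; i <= NF; i++)
			name[i - 9] = $i
	next
}
{					# second file (a1.txt): count each value in fields 10 through NF
	mc = (NF - 9) > mc ? NF - 9 : mc
	for (i = 10; i <= NF; i++) {
		T[$i]
		C[i - 9, $i]++
	}
}
END {
	printf "Value"
	for (i = 1; i <= mc; i++) printf "\t%s", name[i]
	for (v in T) {
		printf "\n%s", v
		for (i = 1; i <= mc; i++) printf "\t%d", C[i, v]
	}
	printf "\n"
}' a.txt a1.txt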

Thanks Don. Here is my whole code:

# Identify target derived alleles to the exclusion of outgroups
awk -F "\t" '(NR>28) { if(($313 == "0/0") && ($314 != "0/0") && ($315 != "0/0") && ($316 != "0/0") && ($317 != "0/0") && ($318 != "0/0") && ($319 != "0/0")) {print $0} }' a.txt > a1.txt
#
# Strip columns 1-9 and write to a2.txt
awk '{for(i=10;i<=NF;i++){printf "%s ", $i}; printf "\n"}' a1.vcf > a2.txt
# PRINT HEADER
awk 'FNR==28 {for(i=9;i<=NF;i++){printf "%s ", $i}; printf "\n"}' a.txt
# Print count of 0/0 0/1 1/1 ./. for each sample in the run
awk '
{ 
   mc = NF > mc ? NF : mc
   for(i=NF; i; i--) {
      T[$i]
      C[i FS $i]++
   }
}
END {
  for(v in T) {
     printf "\n%s", v
     for(i=1; i<=mc;i++) printf "\t%d",C[i FS v]
  }
  printf "\n"
}' a2.txt

In the first part, the input data a.txt (attached; I only copied the first 100 rows to include the header, which has the sample names) is queried. Columns 1-9 do not contain relevant information. Columns 313-319 contain the target samples against which all the test samples (columns 10-312) are compared.

The rows that survive the comparison operation are written to a1.txt (columns 1-9 don't contain relevant information).

Next the header containing the sample names is extracted from a.txt, and your code is executed for counting the number of unique values.

Next I manually add the values in the 0/0 and 0/1 columns and create a totals column. I then sort the totals column from high to low. The sample with the highest total indicates the most similarity to the target sample (column 313).

I like the way you transposed the result and would like to also transpose the header with sample names. So instead of COL1, COL2, ..., I would like the sample names from row 28 (columns 10-319), such as shown below.

FORMAT    1/1    0/0    0/1    TOTAL 0/0 & 0/1
.Kurd_C3_ID001    78    183    201    384
Balochi_HGDP00052    86    175    201    376
Balochi_HGDP00054    71    166    225    391
Balochi_HGDP00056    71    158    233    391
Balochi_HGDP00058    90    168    204    372
Balochi_HGDP00062    91    148    223    371
Balochi_HGDP00064    85    183    194    377
Balochi_HGDP00066    79    185    198    383
Balochi_HGDP00068    95    163    202    365
Balochi_HGDP00072    75    168    217    385
Balochi_HGDP00074    80    198    183    381
Balochi_HGDP00078    89    171    199    370
Balochi_HGDP00080    88    149    222    371
Balochi_HGDP00082    85    179    195    374
Balochi_HGDP00086    102    162    198    360
Balochi_HGDP00088    89    175    194    369
Balochi_HGDP00090    87    177    197    374
Balochi_HGDP00092    87    191    184    375
Balochi_HGDP00096    87    166    207    373
Balochi_HGDP00098    95    190    175    365
GujaratiD_NA20847    74    168    220    388
GujaratiD_NA20899    86    183    193    376
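
For what it's worth, here is a sketch of one way to get approximately that layout directly. It assumes the sample names sit in fields 10 onward of line 28 of a.txt (see the field-numbering question raised earlier), that a2.txt holds the stripped data as before, and that the sample names contain no whitespace (so a plain sort on the fifth column works):

awk '
NR == FNR {				# first file (a.txt): pick up the sample names from line 28
	if (FNR == 28)
		for (i = 10; i <= NF; i++)
			name[i - 9] = $i
	next
}
{					# second file (a2.txt): count values per column
	for (i = 1; i <= NF; i++)
		C[i, $i]++
	mc = NF > mc ? NF : mc
}
END {
	print "FORMAT\t1/1\t0/0\t0/1\tTOTAL 0/0 & 0/1"
	fflush()			# flush the header so it stays ahead of the sorted rows
	cmd = "sort -k5,5nr"		# sort the data rows on the total column, high to low
	for (i = 1; i <= mc; i++) {
		line = sprintf("%s\t%d\t%d\t%d\t%d", name[i], C[i, "1/1"],
		    C[i, "0/0"], C[i, "0/1"], C[i, "0/0"] + C[i, "0/1"])
		print line | cmd
	}
	close(cmd)
}' a.txt a2.txt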

You have given us a 57.9KB a.txt and a 575.5KB a1.txt (which is created by your script from a.txt and should, therefore, be smaller than a.txt but is instead almost 10 times larger). Your script also creates a2.txt from a1.vcf. But you haven't shown us what the contents of a1.vcf look like.

Please show us:

  1. the a1.txt that should be created from the sample a.txt you provided in post #14,
  2. a sample a1.vcf file and a description of its contents (explaining what the field separator is in this file, and what fields are used from which lines), and
  3. the exact output you hope to produce from those sample a.txt and a1.vcf files.

Hi Don, a1.vcf is a typo. It should be a1.txt. For some reason, I don't have an edit button for post 14.

I clipped off the bottom of a.txt because the file was a large 86M file. The bottom part is not necessary because it is all a bunch of 0/0, 0/1, 1/1, and ./. values and a repeat of the preceding rows. I just wanted to show the header part plus some of the data (0/0 0/1 1/1 ./.).

The last portion of post 14 shows the desired output: a column for the sample names (row 28 in a.txt), the counts of the 0/0, 0/1, 1/1, and ./. values for each sample (from a2.txt), and a column for the sum of the 0/0 and 0/1 values, with the rows sorted from high to low on that sum column.

I note that this output doesn't include any counts for the fields that have the value ./. . Do you only want to display the 1/1, 0/0, and 0/1 value counts in your output?

Does the output order matter for the middle three columns?

You also said that your output should be sorted in decreasing order on the values in the last column, but your sample output appears to be unsorted???

Are columns 313-319 supposed to be counted and printed along with the test samples, or are just columns 10-312 supposed to be counted and printed?