Calculate average of top n% of values - UNIX

Hey guys,

I have several huge tab-delimited files which look like this:

a 1 20
a 3 15
a 5 10
b 2 15
b 6 10
c 3 23

What I am interested in is calculating the average of the top n% of the values in the third column. So for example, for this file the top 50% of values are:

23
20

(Please note that it is not the top 50% of the number of input lines, it's the top 50% of the values. Basically, repetitions are ignored.)
And then the average (the output) will be:

21.5

I can do it by making 2 temporary files for each input file, which is even dumber than the dumbest possible way! :expressionless:

I would appreciate it if you have something neat and clean in mind for this.
Thanks in advance!
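(For reference, the "top 50% of unique values" calculation described above can be sketched with standard tools. This is only a rough sketch, assuming GNU sort and awk, with myFile as a placeholder name:)

awk '{print $3}' myFile | sort -nru | awk -v p=50 '
  {v[NR]=$1}                              # unique values of column 3, largest first
  END{k=int(NR*p/100); if(k<1) k=1        # how many values make up the top p%
      for(i=1;i<=k;i++) s+=v[i]           # sum the top k values
      printf "%.2f\n", s/k}'              # and print their average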

gawk -v p=50 '{a[$3]}END{asorti(a,as);l=length(a);for(i=l;(i/l)*100>p;i--) print as[i]}' myFile

Thanks for your reply vgersh99!

It has some problems.
First of all, it does not do the sorting properly. Here is what I got for the top 0.1%, for example:

258
2575
257
2553
255
251
25
...

Apparently it sorts them like text.

Also, I would like to have the average of all these numbers (the current output) as the final output.

I also tried to make it automatic for many files like below, but it did not work!

for file in *.txt do; gawk -v p=50 '{a[$3]}END{asorti(a,as);l=length(a);for(i=l;(i/l)*100>p;i--) print as[i]}' $file; done

Thanks 1000 times for your time.

What's your longest number?

Here is an update with the assumption of 7 or fewer digits (adjust the 07 field width as you need):

gawk -v p=50 '{a[sprintf("%07d",$3)]}
  END{asorti(a,as);l=length(a);for(i=l;(i/l)*100>p;i--)
  t+=as[i]; printf "%.2f\n", t/l/2}' infile
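(The sprintf("%07d",...) is only there so that asorti's string ordering matches numeric ordering; a quick throwaway illustration of the effect, assuming the values fit in 7 digits:)

printf '25\n2575\n7\n' | gawk '{printf "%07d\n", $1}' | sort
0000007
0000025
0002575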

For multiple files:

for file in *.txt
do
   gawk -v p=50 '
   {a[sprintf("%07d",$3)]}
   END{
      asorti(a,as)
      l=length(a)
      for(i=l;(i/l)*100>p;i--) t+=as[i]
      printf "%s\t%.2f\n", FILENAME, t/l/2}' "$file"
done

What way have you tried?

Can you send the output from one command straight into the next, a bit like this?

cut -f3 input_file | sort -unr | ........

It would probably be more beneficial if it is done with the tools that you are most comfortable with, rather than a bespoke one-off that you dare not adjust.

What suits you?

Robin

I don't think I have more than 7 digits. I tried it with the two following sample files:
file1.txt

chr2L	10	23
chr2L	20	20
chr2L	35	15
chr2L	36	10
chr3R	12	10
chrX	10	15

file2.txt

chr2L	10	230
chr2L	20	20
chr2L	35	1.5
chr2L	36	1000
chr3R	12	100
chr3R	20	300
chrX	10	15
chrX	26	1500

and this is what I got as output when p=50:

file1.txt 21.50
file2.txt 1515.00

I think something is wrong but I don't know what! For sure the average of the top 50% (1515 for the second file) cannot be more than the maximum (1500)!!

I also tried it with different percentages (p=12.5, p=10, ...) but it does not seem to work properly. My desired percentage is p=0.1, if that is important to know.

Thanks once again for helping me.

Dear rbatte1,

You are absolutely right, no doubt about it!
I am a student in "bioinformatics" and my knowledge of programming is sub-zero, which is a shame. You asked what I did. What I could think of was to extract the third column first, pipe it to sort to sort it numerically, pipe that again to keep only the unique values, then sort them again, count the number of values, take the top 0.1% of them, and with another pipe apply the average function!!
You see how stupid one can be!!
But even for each step of this dumb way I have to Google, and such a simple script takes me half a day or more to complete! And this is part of a huge analysis, of course...
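(Spelled out as commands, that plan would look roughly like this; just a sketch of the steps described above, assuming GNU sort, with test.txt and vals.tmp as placeholder names:)

awk '{print $3}' test.txt | sort -n | uniq > vals.tmp               # column 3, sorted numerically, unique
n=$(wc -l < vals.tmp)                                               # count the unique values
k=$(awk -v n="$n" 'BEGIN{k=int(n*0.001); if(k<1) k=1; print k}')    # 0.1% of them, at least 1
tail -n "$k" vals.tmp | awk '{s+=$1} END{printf "%.2f\n", s/NR}'    # average of the top k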

Now you see why I decided to ask for help! :expressionless:

Kindest regards,
aman

We're always happy to have questions, and I've learned lots from this site by asking what many consider daft questions. Hopefully this helps you understand and become better, but it sometimes helps those of us who know a bit to have problems to solve. We all contribute and see different ways of doing things.

If you make an effort, folks here are happy to help - and I've learned lots of new things. Most people will have joined to ask a question in the first place, so you are most welcome.

Whatever suits you best is a good way to steer us. Some tools like awk and their variants can take years to get a good understanding of. I'm still struggling along trying to learn, so don't get disheartened.

If you have a go, then show what you've done when asking for help, and the collective group will no doubt help you out.

Regards,
Robin

Strange, for your sample files AND the shell wrapper with gawk mentioned above, I get:

file1.txt        5.38
file2.txt        189.38

Could you post the output of cat -vet file2.txt using code tags, please!

cat -vet file2.txt
chr2L^I10^I230^M$
chr2L^I20^I20^M$
chr2L^I35^I1.5^M$
chr2L^I36^I1000^M$
chr3R^I12^I100^M$
chr3R^I20^I300^M$
chrX^I10^I15^M$
chrX^I26^I1500

I am so thankful for your valuable help. The thing is that now I realize I want it even simpler, and I was dumb enough not to know!!

What I want is basically to sort the values in the third column and take the average of the top 0.1% of them! (No matter if there are repetitions or not.)

With my sub-zero knowledge I started like this:

awk '{print $3}' test.txt | sort -nk1

Now I don't know how to get the average of the top 0.1% of values (the last 0.1% of lines).
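(One possible way to finish that off, keeping repetitions this time; again only a sketch, assuming GNU coreutils and the placeholder test.txt:)

n=$(wc -l < test.txt)                                               # total number of lines
k=$(awk -v n="$n" 'BEGIN{k=int(n*0.001); if(k<1) k=1; print k}')    # 0.1% of the lines, at least 1
awk '{print $3}' test.txt | sort -n | tail -n "$k" | awk '{s+=$1} END{printf "%.2f\n", s/NR}'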

Bunch of thanks and regards! :slight_smile:

You have ^M-s in your file - probably from FTP-ing from a Windows box.
Get rid of the ^M-s in BOTH files and re-run the script, e.g. tr -d '\015' < file2 > file2_new

I still get the same...

tr -d '\015' <file1.txt > file1_new.txt
tr -d '\015' <file2.txt > file2_new.txt
file1_new.txt	21.50
file1.txt	21.50
file2_new.txt	1515.00
file2.txt	1515.00

I'm sure that I am doing something stupid!!
When I think about it, it's very simple, but when I try to do it you saw how far I got! :expressionless:
And when I try to do what you suggested, I'm not able to solve even a tiny issue, since it's beyond my knowledge.

Thanks for your time anyway.

Apologies, there was a slight bug in there (I wasn't dividing by the number of values used).

I've tried to make it a little easier to understand, and it now outputs the number of unique values found and which range is summed; you can take out the printf line that reports this if you don't want the extra info:

for file in *.txt
do
   gawk -v p=50 '
    {a[sprintf("%07d",$3)]}
    END{
      asorti(a,as)
      end=length(a)
      start=end-int(end*p/100)+1
      if(start>end) start=end
      printf("File has %d unique values, average from %d to %d\n", end, start, end)
      for(i=start;i<=end;i++) total=total + as[i]
      printf "%s\t%.2f\n", FILENAME, total/(end-start+1)}' "$file"
done
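(A side note: if your gawk is 4.0 or newer, you can sidestep the fixed-width %07d padding, and the truncation of values such as 1.5, by asking asorti to compare the indices numerically. A sketch along the same lines, untested:)

for file in *.txt
do
   gawk -v p=50 '
    {a[$3]}                                   # collect the unique column-3 values as keys
    END{
      end=asorti(a, as, "@ind_num_asc")       # sort the keys numerically (gawk 4.0+)
      start=end-int(end*p/100)+1
      if(start>end) start=end
      for(i=start;i<=end;i++) total=total+as[i]
      printf "%s\t%.2f\n", FILENAME, total/(end-start+1)}' "$file"
done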