I have several huge tab-delimited files which look like this:
a 1 20
a 3 15
a 5 10
b 2 15
b 6 10
c 3 23
What I am interested in is calculating the average of the top n% of the data in the third column. For example, for this file the top 50% of the values are:
23
20
(Please note that it is not the top 50% of the number of input lines; it's the top 50% of the values. Basically, repetitions are ignored.)
And then the average (the output) will be:
21.5
I can do it by making 2 temporary files for each input file, which is even dumber than the stupidest possible way!
I'd appreciate it if you have something neat and clean in mind for this.
Thanks in advance!
I think something is wrong but I don't know what! For sure the average of the top 50% (1515 for the second file) cannot be more than the maximum (1500)!!
I also tried it with different percentages (p=12.5, p=10, ...) but it does not seem to work properly. My desired percentage is p=0.1, if that is important to know.
Thanks once again for helping me.
Dear rbatte1,
You are absolutely right with no doubt!
I am a student in "bioinformatics" and my knowledge of programming is sub-zero, which is a shame. You asked what I did. What I could think of was to extract the third column first, pipe it into a numeric sort, pipe that again to keep only the unique values, sort them again, count the number of values, take the top 0.1% of them, and pipe that into an average function!!
You see how stupid one can be!!
But even for each step of this dumb way I have to Google, and such a simple script takes me half a day or more to complete! And this is part of a huge analysis, of course...
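Spelled out, that chain of pipes would look something like this (just a sketch of the approach described above, with no temporary files; input.txt stands in for one of the real files, and p is the percentage of unique values to keep):

p=50    # percentage of unique values to average over (0.1 for the real analysis)
cut -f3 input.txt | sort -n | uniq | awk -v p="$p" '
    { v[NR] = $1 }                                  # collect the sorted unique values
    END {
        n = int(NR * p / 100); if (n < 1) n = 1     # how many of the largest values to keep
        for (i = NR - n + 1; i <= NR; i++) sum += v[i]
        printf "%.2f\n", sum / n
    }'

For the sample file at the top of the thread this prints 21.50.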
We're always happy to have questions & I've learned lots from this site by asking what many consider daft questions. Hopefully this helps you understand and become better, but it sometimes helps those of us who know a bit to have problems to solve. We all contribute and see different ways of doing things.
If you make an effort, folks here are happy to help - and I've learned lots of new things. Most people will have joined to ask a question in the first place, so you are most welcome.
Whatever suits you best is a good way to steer us. Some tools like awk and its variants can take years to get a good understanding of. I'm still struggling along trying to learn, so don't get disheartened.
If you have a go, then show what you've done when asking for help; the collective group will no doubt help you out.
You have ^M's in your file - probably from FTPing it from a Windows box.
Get rid of the ^M's in BOTH files and re-run the script, e.g. tr -d '\015' < file2 > file2_new
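If you want to check for the carriage returns before and after, something like this works (a small sketch, assuming a bash shell for the $'\r' quoting; file2 is just the name used above):

grep -c $'\r' file2                  # count lines that still contain a carriage return
tr -d '\015' < file2 > file2_new     # strip them (\015 is the octal escape for \r)
mv file2_new file2                   # overwrite the original once file2_new looks right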
I'm sure that I'm doing something stupid!!
When I think about it, it's very simple, but when I try to do it you saw how far I got!
And when I try to do what you suggested, I'm not able to solve even a tiny issue, since it's beyond my knowledge.
Apologies, there was a slight bug in there (I wasn't dividing by the number of values used).
I've tried to make it a little easier to understand and to output the number of unique values found and the range that is summed; you can take out the printf line that reports this if you don't want the extra info:
for file in *.txt
do
    gawk -v p=50 '
        { a[sprintf("%07d", $3)] }                 # index by zero-padded value: string sort == numeric sort, duplicates collapse
        END {
            asorti(a, as)                          # as[1..n] holds the unique values in ascending order
            end   = length(a)
            start = end - int(end * p / 100) + 1   # first index of the top p% of unique values
            if (start > end) start = end
            printf("File has %d unique values, average from %d to %d\n", end, start, end)
            for (i = start; i <= end; i++) total = total + as[i]
            printf "%s\t%.2f\n", FILENAME, total / (end - start + 1)
        }' "$file"
done
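For the sample file from the first post (four unique values: 10, 15, 20 and 23) and p=50, this should print something like the following, where sample.txt is just whatever that file happens to be called:

File has 4 unique values, average from 3 to 4
sample.txt    21.50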