Join multiple files by column with awk

Hi all,
I searched through the forum but I couldn't find a solution. I need to join a set of files placed in a directory (~1600 of them) by column, and obtain an output whose first and second columns are common to all files, while each following column is taken from one file in the list (specifically the fourth column of that file). I'll show the input and the desired output for clarity:

File 1:

name    Chr    Position    Log R Ratio    B Allele Freq
cnvi0000001    5    164388439    -0.4241    0.0097
cnvi0000002    5    165771245    0.4448    1
cnvi0000003    5    165772271    0.4321    0
cnvi0000004    5    166325838    0.0403    0.9971
cnvi0000005    5    166710354    0.2355    0

File 2:

name    Chr    Position    Log R Ratio    B Allele Freq
cnvi0000001    5    164388439    0.0736    0
cnvi0000002    5    165771245    0.1811    1
cnvi0000003    5    165772271    0.2955    0.0042
cnvi0000004    5    166325838    -0.118    0.9883

File 3:

name    Chr    Position    Log R Ratio    B Allele Freq
cnvi0000001    5    164388439    0.2449    0
cnvi0000002    5    165771245    -0.0163    1
cnvi0000003    5    165772271    0.3361    0
cnvi0000004    5    166325838    0.0307    0.9867
cnvi0000005    5    166710354    0.1529    0

(note that File 2 has a missing line)

Output:

chr    Position    File1   File2   File3
5    164388439    -0.4241    0.0736    0.2449
5    165771245    0.4448    0.1811    -0.0163
5    165772271    0.4321    0.2955    0.3361
5    166325838    0.0403    -0.118    0.0307
5    166710354   0.2355                  <tab_separator> 0.1529

Now, I managed to join all the files by column using:

awk '{
   if (x[FNR])
      x[FNR] = sprintf("%s\t%s", x[FNR], $4)
   else
      x[FNR] = $0
}  END {
   for (i=1;i<=FNR;++i)
       print x[i]
}'

but this inserts all the columns from the first file and then appends the columns from the other files, without inserting a tab separator or an empty field when a file has missing lines. I obtain this (after manually removing the useless columns):

Output:

chr    Position    File1   File2   File3
5    164388439    -0.4241    0.0736    0.2449
 5    165771245    0.4448    0.1811    -0.0163
 5    165772271    0.4321    0.2955    0.3361
 5    166325838    0.0403    -0.118    0.0307
 5    166710354   0.2355     0.1529

But since I need this huge file as input to another program, this is not right. Now I've tried this solution:

awk 'NR==FNR{ llr[$1]=$4; p[$1]=$2"\t"$3; next } {
    if(llr[$1]){
        p[$1] = p[$1]"\t"llr[$1]; llr[$1]=$4
    }else{
        llr[$1]="\t";
        p[$1] = p[$1]"\t"llr[$1];
    }
}
END{for(i in p) {
    print p[i]
}}'

after reading this thread: AWK - Difference in multiple files.

But it doesn't work the way I want; I get the same output as with the first script (though with only the useful columns).
I hope I have been clear enough.
If anyone has some ideas, any help will be welcome!
Bye, Macsx

PS: It actually doesn't matter what the file header looks like; I can create it by hand.


Hi macsx82,

Well, using Franklin52's script from the thread you mentioned I get this approach. It only partially does what you need; maybe the
AWK experts can correct and enhance this script, or give us a new, better solution.

WHINY_USERS=1 awk 'BEGIN{ print "chr","Position"} NR==FNR{ a[$1]=$4; s[$1]=$2 " " $3 " " $4; next } {
  s[$1] = s[$1] " " $4;
}
END{for(i in s) {print s[i]}}' file*

chr Position
5 164388439 -0.4241 0.0736 0.2449
5 165771245 0.4448 0.1811 -0.0163
5 165772271 0.4321 0.2955 0.3361
5 166325838 0.0403 -0.118 0.0307
5 166710354 0.2355 0.1529

This script uses file1, file2 and file3 as input files.

Note:
Two things are still needed (the missing pieces appear in the desired output below; a sketch for item 2 follows it):
1-) Add the per-file headers (file1, file2, file3) to the output; and
2-) Handle the missing line in file2 better, so that the "blank values" land in the correct positions.

chr Position file1 file2 file3
5 164388439 -0.4241 0.0736 0.2449
5 165771245 0.4448 0.1811 -0.0163
5 165772271 0.4321 0.2955 0.3361
5 166325838 0.0403 -0.118 0.0307
5 166710354 0.2355  <tab delimiter> 0.1529 
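One way to handle point 2 would be to key on the name column and remember which file each value came from, padding with an empty field whenever a name is absent from a file. This is only a sketch, assuming tab-separated input and one header line per file:

awk -F'\t' '
FNR == 1 { nfiles++; next }            # one header line per file; also counts the files
{
    pos[$1] = $2 "\t" $3               # Chr and Position for this name
    val[$1, nfiles] = $4               # Log R Ratio from the current file
}
END {
    for (k in pos) {
        line = pos[k]
        for (f = 1; f <= nfiles; f++) {
            v = ""
            if ((k, f) in val) v = val[k, f]
            line = line "\t" v         # empty field if the name is missing in file f
        }
        print line
    }
}' file1 file2 file3

Since for (k in pos) iterates in an unspecified order, the result may still need a final sort (e.g. on the Position column) to restore the original order.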

Hi Cgkmal!
Thanks for your code improvement! It's surely more "awk-ish"! At the moment I'm trying to solve the problem with an R script, but as far as I can see it's really slow, so I'll keep trying the awk way! I've also just found another post that could be useful.

I'll give it a try and post back! Thanks again!

If you don't have to use awk, the result you ask for can perhaps be achieved using basic shell tools and sed.

E.g., if the files are called file1, file2 and file3 and have the headers stripped off, you could achieve the desired result this way:

$ cat file1
cnvi0000001 5 164388439 -0.4241 0.0097
cnvi0000002 5 165771245 0.4448 1
cnvi0000003 5 165772271 0.4321 0
cnvi0000004 5 166325838 0.0403 0.9971
cnvi0000005 5 166710354 0.2355 0
$ cat file2
cnvi0000001 5 164388439 0.0736 0
cnvi0000002 5 165771245 0.1811 1
cnvi0000003 5 165772271 0.2955 0.0042
cnvi0000004 5 166325838 -0.118 0.9883
$ cat file3
cnvi0000001 5 164388439 0.2449 0
cnvi0000002 5 165771245 -0.0163 1
cnvi0000003 5 165772271 0.3361 0
cnvi0000004 5 166325838 0.0307 0.9867
cnvi0000005 5 166710354 0.1529 0
$ paste file* | sed -e 's/\t\t/\t     /g;s/\t/ /g;s/ /\t/g' | cut  -f 2,3,4,9,14
5       164388439       -0.4241 0.0736  0.2449
5       165771245       0.4448  0.1811  -0.0163
5       165772271       0.4321  0.2955  0.3361
5       166325838       0.0403  -0.118  0.0307
5       166710354       0.2355          0.1529

This assumes that the fields in file1, file2 and file3 are separated with spaces. If they are separated by tabs, the sed command has to be modified so it prints tabs instead of spaces, but I didn't test that.

EDIT: You should post more details about the data you need to manipulate. For example, if more than one file is shorter than the others, the above will not work reliably. Also, is it possible that some records are skipped, e.g. a line starting with cnvi0000004 is immediately followed by a line starting with cnvi0000006? The best solution in my opinion would be to preprocess the files so that they all have the same number of lines, inserting "empty" data for the missing fields, e.g. "-".
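Such preprocessing could look roughly like this: a sketch assuming tab-separated fields with the headers already stripped, where master_names and file_to_pad are placeholder names, master_names being the complete ordered list of record names (e.g. taken with cut -f1 from the longest file):

awk -F'\t' -v OFS='\t' '
NR == FNR { order[++n] = $1; next }    # first file: master list of names, in order
{ seen[$1] = $0 }                      # second file: records actually present
END {
    for (i = 1; i <= n; i++) {
        if (order[i] in seen)
            print seen[order[i]]
        else
            print order[i], "-", "-", "-", "-"   # pad a missing record
    }
}' master_names file_to_pad > file_to_pad.filled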

EDIT2: A more robust sed command handling possible consecutive empty records:

$ paste file* file2 file2 file3 | sed -e 's/\([^\t]\)\t/\1 /g;s/\t/     /g;s/\t/ /g;s/ /\t/g' | cut  -f 2,3,4,9,14,19,24,29
5       164388439       -0.4241 0.0736  0.2449  0.0736  0.0736  0.2449
5       165771245       0.4448  0.1811  -0.0163 0.1811  0.1811  -0.0163
5       165772271       0.4321  0.2955  0.3361  0.2955  0.2955  0.3361
5       166325838       0.0403  -0.118  0.0307  -0.118  -0.118  0.0307
5       166710354       0.2355          0.1529                  0.1529

Hi Ikki!
Thanks for your reply! I use awk because I'm more familiar with its syntax; I've only used sed a couple of times!
My files contain data from genetic chips, and each file belongs to a person.
Each file has ~360000 lines.
You're right, I have more than one file shorter than the others: at the moment there are 95 shorter files, but they could increase. And yes, in the shorter files it's possible that a line starting with cnvi0000004 is followed by a line starting with cnvi0000006 or cnvi0000008... it depends on how many records are missing for that person, but all input files are sorted by the first column.
As I said, I've written an R script that works, but it is extremely slow. In the script I compare a "complete" list of names with each file's list of names and look for differences. Once I find the elements that are missing from the shorter list, I add them to it so that all lists have the same length. This way I can merge all the columns, with a placeholder wherever data is missing. I'll post the R code:

#define file path
files_path="/home/###/###/people/"

#read all file names in the directory and save in a vector
only_files <- dir(path=files_path, pattern = "*.in") 
files = paste(files_path,only_files, sep="")

#load files to create the "complete list" I need the first column that contain the name of the record
tot_file <- read.table(files[1], sep="\t", header=TRUE)[c(1,2,3)]
tot_file_noname <- cbind(Chr=tot_file$Chr, Position=tot_file$Position)


for (i in 1:length(files)) { 
#
        xx_file <- read.table(files[i], sep="\t", header=TRUE)[c(1,3,4)]
        xx_file_noname <- cbind(xx_file$Position, xx_file$Log.R.Ratio)

#now I read each file and if i find some mismatch from the complete list 
#I add them in the current xx_file object with value "NaN"

    if (length(xx_file$name) != length(tot_file$name)){
                print('different!')
                mismatch=NULL

                match <- tot_file$name %in% xx_file$name
                                    
                for(i in 1:length(match)){ if (match[i] == FALSE){ mismatch = c(mismatch,i)}}

                missing_snp = NULL
# add missing values
                for (i in mismatch){
                    missing <- data.frame(Position = tot_file[i,]$Position, Log.R.Ratio="NaN")
                    missing_snp <- rbind(missing_snp, missing)
                }

                    xx_file_noname <- rbind(xx_file[,c(2,3)], missing_snp)
    }else{
        print('equals!')        
    }    

    tot_file_noname = cbind(tot_file_noname, xx_file_noname[,2])
}

# write the "big" file
write.table(tot_file_noname, file = "gigante.dat", append = FALSE, quote = FALSE, sep = "\t", eol = "\n", na = "NaN", dec =".", row.names = FALSE, col.names =TRUE)

Now I'm trying to port this to a shell script to get a faster response. My aim was to avoid preprocessing the files if I can, because of the large amount of data stored in each one.
I also tried your command, and it is running; the only problem is adding 1664 columns by hand for the cut command, but I think I can work on it!
Hope I have been clear enough, and I greatly appreciate your help!
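For what it's worth, that field list for cut doesn't have to be typed by hand. Here is a small sketch of how it could be generated, assuming the files are pasted in order and each contributes 5 columns, so the Log R Ratio of the k-th file ends up in field 4 + 5*(k-1) (the sed step from the earlier post is left out for brevity):

N=1664    # number of files being pasted together (adjust to the real count)
# build the field list "2,3,4,9,14,..." for cut
FIELDS=$(awk -v n="$N" 'BEGIN { s = "2,3"; for (k = 0; k < n; k++) s = s "," 4 + 5*k; print s }')
paste file* | cut -f "$FIELDS"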

Hi!
This weekend I wasn't able to work on my script, but I found out that the R script I wrote took only six hours to run on our powerful server, so now I have my 4.5 GB file!! :slight_smile: Today I'll work on the port to a bash script... it could still be useful! Thanks all for the great help!

So, I pondered your problem a bit. Your task isn't one that requires much processing power; instead, the most likely bottleneck is file I/O. If you need to generate this kind of report rarely (say, once a month), then six hours doesn't seem too long.

If it's a daily task, or more importantly if you need to generate multiple types of reports often, I'd consider importing the data into a real database. This assumes the data is somewhat static (and even if it isn't, it could be written directly into the db, depending on the source of your data).

If a database is a no-go and performance has to be gained by optimizing the code, I think one obvious place to optimize is the reading. Perhaps you could read in "bursts", filling a file-specific buffer in one read. However, I don't know anything about R scripts, so you're on your own there.

I did a mock-up of the data (4 files with 360000 lines each) and wrote a perl script to do the heavy lifting. On my 500 MHz Pentium it performed this way: processing a single input file took 70 seconds and processing all 4 input files took 236 seconds. Extrapolating from those timings to 1700 files (which we really can't do reliably), that would be roughly 33 hours and 27.9 hours, respectively (70 s x 1700, and 236 s / 4 x 1700).

I'll paste the code here if you want to play with it. It takes the filenames on standard input. It ignores a line if there are no values on it, and it doesn't get confused if some records are missing.

#!/usr/bin/perl

use strict;
use warnings;

my @if = ();    # array of input files
my $ignore_first_line = 1; #

# open all files
while ( <STDIN> ) {
        chomp;
        if ( -r $_ ) {
                my $index = @if;
                open( $if[ $index ]->{ handle }, "<", $_) or die "Couldn't open file $_: $!";
                $if[ $index ]->{ name } = $_; # save the filename
                $if[ $index ]->{ F }[0] = -1; # set default pos value for this file to "unread"
                if ( $ignore_first_line ) {
                        my $dummy_fh = $if[ $index ]->{ handle };
                        my $dummy = < $dummy_fh >;
                }
        }
}

# print the header
print "chr\tPosition";
for ( 0 .. $#if ) {
        print "\t$if[$_]->{name}";
}
print "\n";

my $pos = 0;    # pos indicates which record we're dealing with

# let's loop the files until all are read thru
while ( 1 ) {
        my $ofc = 0;    # open filehandle count
        my $str = "";   # build the infoline here
        my $ref = undef;
        ++$pos;                 # increase the line position

        # loop thru all files
        for my $index ( 0 .. $#if ) {
                if ( defined ( $if[$index]->{handle} ) ) { # check if the file is open and we can read from it
                        ++$ofc;
                        if ( $if[$index]->{F}[0] < $pos ) {
                                my $handle = $if[$index]->{handle}; # save filehandle to a temp variable
                                if ( defined ( $if[$index]->{line} = <$handle> ) ) {
                                        @{$if[$index]->{F}} = split(/\s/, $if[$index]->{line});
                                        $if[$index]->{F}[0] =~ s/.*?(\d+)/$1/; # save only the number, eg. from cnvi0000003
                                }
                                else {
                                        $if[$index]->{handle} = undef; # close filehandle
                                }
                        }

                        if ( defined ( $if[$index]->{handle} ) and $if[$index]->{F}[0] == $pos ) {
                                # according to position we'll print this data now
                                # also save a reference to the data so we can print
                                # character and position later
                                $ref = $if[$index]->{F};
                                $str .= "\t" . $if[$index]->{F}[3];
                        }
                        else {
                                $str .= "\t"; # empty record
                        }

                }
                else {
                        $str .= "\t"; # empty record
                }
        }

        if ( defined ( $ref ) ) {
                print "$$ref[1]\t$$ref[2]$str\n";
        }

        last unless $ofc;
}

I have a similar situation to this, though mine must be simpler! But I can't seem to figure out how to solve the problem.

I have 100 files, each with a header of up to 11 lines, and the number of columns and lines is the same in all files.

I want to get the first and second column of the first file, then the 2nd column of each of the remaining files, and combine them all into one file.

Following this thread I came up with the code below, which seems to work, but the problem is that it sorts the output with respect to the first column, and that's not what I want.

#! /bin/bash
# reset
#title(n) = sprintf("column %d", n)

#set yrange [0:20]
#set xrange [0.35:2.5]

WHINY_USERS=0 awk 'NR==FNR{ a[$1]=$2; s[$1]=$1 " " $2; next } {
  s[$1] = s[$1] " " $2; a[$1]=$2
}
END{for(i in s) {print s[i]}}' ~/test/*.txt

and here are 2 example files

File_1

; xfAzisum output file
; fits: e20011730124_xform.fits
; date: Sun Aug 29 15:30:46 2010
; inner radius: 2.2600000
; thickness: 0.500000
; divisions: 128
; x-axis: magnetic local time
;
; MLT       value      std    num_pixels	num_ind_measurements
; --------- --------- --------- ----------	--------------------
       0.09     19.83     14.58     78.00     15.00
       0.28     16.37      8.71     88.00     16.00
       0.47     23.62     14.32     85.00     15.00
       0.66     21.05     15.55     87.00     18.00
       0.84     27.06     14.25     88.00     14.00
       1.03     40.82     16.26     85.00     16.00
       1.22     43.94     14.44     87.00     14.00
       1.41     57.34      8.14     88.00     15.00
       1.59     67.33     15.14     87.00     15.00
       1.78     59.81     14.15     87.00     15.00
       1.97     76.75     23.44     85.00     14.00
       2.16     81.19     33.64     89.00     14.00
       2.34     67.60     25.53     86.00     13.00
       2.53     88.59     27.84     87.00     14.00
       2.72     74.00     22.88     87.00     14.00
       2.91     95.32     32.64     81.00     14.00
       3.09     91.51     29.59     95.00     15.00
       3.28    108.04     20.41     87.00     13.00
       3.47     85.54     24.75     87.00     13.00
       3.66     90.88     32.68     86.00     13.00
       3.84     79.36     28.87     89.00     15.00
       4.03     85.57     31.73     85.00     13.00
       4.22     80.39     28.05     87.00     13.00
       4.41     80.41     27.46     87.00     15.00
       4.59     77.25     21.63     88.00     14.00
       4.78     72.69     23.48     87.00     14.00
       4.97     69.76     24.77     85.00     15.00

File_2

; xfAzisum output file
; fits: e20011730225_xform.fits
; date: Sun Aug 29 15:30:48 2010
; inner radius: 2.2600000
; thickness: 0.500000
; divisions: 128
; x-axis: magnetic local time
;
; MLT       value      std    num_pixels	num_ind_measurements
; --------- --------- --------- ----------	--------------------
       0.09     23.50     15.69     78.00     12.00
       0.28     29.01     13.76     88.00     12.00
       0.47     26.51     14.09     85.00     10.00
       0.66     27.74     14.19     87.00     12.00
       0.84     28.46     14.08     88.00     11.00
       1.03     31.00     19.09     85.00     10.00
       1.22     36.56     16.43     87.00     12.00
       1.41     41.90     16.05     88.00     12.00
       1.59     49.73     17.51     87.00     12.00
       1.78     67.46     21.26     87.00     13.00
       1.97     67.41     24.18     85.00     10.00
       2.16     66.96     22.83     89.00     13.00
       2.34     79.56     16.04     86.00     10.00
       2.53     75.30     14.85     87.00     11.00
       2.72     77.60     20.36     87.00     10.00
       2.91     75.49     21.37     81.00      9.00
       3.09     92.31     19.54     95.00     14.00
       3.28     83.30     19.47     87.00     11.00
       3.47     89.87     18.38     87.00     11.00
       3.66     80.11     22.17     86.00     11.00
       3.84     92.18     28.36     89.00     12.00
       4.03     96.61     27.01     85.00     14.00
       4.22     91.94     28.70     87.00     10.00
       4.41     95.22     32.53     87.00     11.00
       4.59     89.51     30.41     88.00     12.00
       4.78     79.13     21.77     87.00     13.00
       4.97     71.90     17.68     85.00     12.00
       5.16     75.75     13.20     88.00     10.00
       5.34     61.50     17.21     87.00     11.00
       5.53     62.85     15.60     85.00     11.00
       5.72     60.16     23.02     88.00     12.00
       5.91     58.88     12.69     78.00     12.00
       6.09     53.16     11.01     97.00     13.00
       6.28     59.17     17.71     88.00      9.00
       6.47     75.35     18.00     85.00     13.00
       6.66     85.04     18.50     87.00     14.00
       6.84     86.22     14.26     88.00     12.00
       7.03     94.68     17.87     85.00     10.00
       7.22    102.22     23.22     87.00     10.00
       7.41    108.77     20.58     88.00     11.00
       7.59    108.88     20.75     87.00     11.00
       7.78    105.19     20.57     87.00      9.00
       7.97    105.75     25.69     85.00     10.00
       8.16     98.74     24.04     89.00     12.00
       8.34    100.46     30.22     86.00     12.00
       8.53     97.77     27.85     87.00     11.00
       8.72    108.62     29.81     87.00     14.00
       8.91    105.22     29.87     81.00     12.00
       9.09    108.14     25.23     95.00     15.00
       9.28    116.98     23.84     87.00     13.00
       9.47    112.20     19.08     87.00     12.00
       9.66    112.63     32.53     86.00     13.00
       9.84    136.50     37.32     89.00     14.00
      10.03    135.01     26.41     85.00     12.00
      10.22    153.68     21.48     87.00     12.00
      10.41    147.13     19.67     87.00     12.00
      10.59    140.11     21.85     88.00     12.00
      10.78    124.04     25.96     87.00     12.00
      10.97    124.65     31.79     85.00     13.00

The script above almost does what I want, but the problem is that it sorts the final output with respect to the first column.

Please help!
Thanks
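For reference, an order-preserving variation of the awk above would key the array on the per-file data-row number instead of on $1, so the iteration order of the keys no longer matters. A sketch, assuming every file has the same number of data rows and that all header lines start with ';':

awk '
FNR == 1 { filecount++; row = 0 }      # starting a new input file
/^;/     { next }                      # skip the header/comment lines
{
    row++
    if (filecount == 1)
        out[row] = $1 "\t" $2          # MLT and value from the first file
    else
        out[row] = out[row] "\t" $2    # append the value column of each later file
    if (row > rows) rows = row
}
END { for (i = 1; i <= rows; i++) print out[i] }
' ~/test/*.txt

Because the array is indexed by row number rather than by the first field, no WHINY_USERS setting or external sort is involved and the original line order is kept.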

I usually prefer perl over any shell script for this kind of job, but I am probably biased since I don't really know awk well enough.

However, here's a simple implementation using bash. It may need modification depending on what OS/tools you have available. If tail doesn't understand the + prefix, grep -v ^\; might be used as a substitute.

The number of lines didn't match in the example files, but I assume that was a mistake. This script assumes they are of the same length, or at least that the first file is the longest.

The script is written so that the filenames to process are given on stdin, not on the command line.

Eg.

$ find ~/test/ -name '*.txt' | ./add_second_columns.sh > results.txt

#!/bin/bash
# add_second_columns.sh

# every file has a header of 11 lines, which we'll remove before processing
HEADER_LENGTH=11
# every file has columns separated by a char
COL_SEP=" "
# output column separator
COL_OSEP=$'\t'
# if there's an error, using a number greater than 0 will exit with that
# number
EXIT_ON_ERROR=1

# we'll get the files from stdin, so we should never run into the argc/argv
# problem

# later on we'll avoid pipes in while loop by saving the output of a file
# to a string and parsing it by newlines, so here we'll set input field
# separator to a newline
IFS=$'\n'

# we'll handle the first file separately, since we want to get the
# first two fields, instead of the second field only
read FIRST
if [ -e "$FIRST" ] ; then
	I=0
	RDATA=`<"$FIRST" tail -n +$HEADER_LENGTH | cut -d "$COL_SEP" -f -2`
	for E in $RDATA ; do
		((++I))
		DATA[$I]=$(echo "$E" | tr "$COL_SEP" "$COL_OSEP")
	done
else
	echo "ERROR: Couldn't open file '$FIRST'" >&2
	if [ $EXIT_ON_ERROR -gt 0 ] ; then
		exit $EXIT_ON_ERROR
	fi
fi

# process the rest of files
while read ENTRY ; do
	if [ -e "$ENTRY" ] ; then
		I=0
		RDATA=`<"$ENTRY" tail -n +$HEADER_LENGTH | cut -d "$COL_SEP" -f 2`
		for E in $RDATA ; do
			((++I))
			# schlemiel the painter, anyone?
			DATA[$I]="${DATA[$I]}${COL_OSEP}$E"
		done
	else
		echo "ERROR: Couldn't open file '$ENTRY'" >&2
		if [ $EXIT_ON_ERROR -gt 0 ] ; then
			exit $EXIT_ON_ERROR
		fi
	fi
done

# print everything
I=0
while [ $I -lt ${#DATA[*]} ] ; do
	((++I))
	echo "${DATA[$I]}"
done

Thanks iki,

That worked! Thanks! The fields in my data files are delimited by 5 spaces, so I couldn't get the 'cut' part to work correctly. When I changed COL_SEP to 5 spaces it complained that the delimiter must be a single character, and I don't know why it doesn't want to work that way. So I used 'awk' in place of 'cut'. I still want to know how to use cut when the delimiter is more than one space.

Thanks again

It's not possible with cut; it's designed to work with a single character as the field delimiter, which is why awk is probably the better choice here. Another way, if one really wants to use cut, or perhaps just to sanitize the data, would be to squeeze every run of multiple spaces down to a single space (or tab). That of course assumes that no field is empty (or all spaces, to be exact), or the result would be skewed.

For future reference: there are at least two easy ways of squeezing multiple instances of a character into a single one (below). sed is a powerful companion to awk, or so I'm told :), but even on its own I find it a very useful tool:

sed -e 's/ \{1,\}/ /g' | cut... # I usually use the gnu version, since I have easier time remembering what chars I should escape and what not
tr -s " " | cut...