UNIX - 2 tab-delimited files, conditional column extraction

Please know that I am very new to Unix and trying to learn 'on the job'. My work mostly involves manipulating large tab-delimited files (millions of rows), but I'm stuck and don't know how to proceed with the following. Hoping for some friendly advice :slight_smile:

I have 2 tab-delimited files - with differing column & row numbers in each.

Please note: some of the 'names' in column 1 (in each file) are repeated a varying number of times, and the real files are millions of rows each.

File 1:

Sc1 10  20
Sc1 20  30
Sc1 30  40
Sc2 40  50
Sc2 50  60
Sc2 60  70
Sc3 70  80
Sc3 80  90
Sc3 90  100

File 2:

Sc1 22  23  100
Sc1 44  45  50
Sc2 60  61  25
Sc3 79  80  40
Sc3 100 101 40
Sc4 60  61  25

I would like to test the following:

For each row of File2: IF its column 1 value appears anywhere in column 1 of File1, AND IF its column 2 value is >= the column 2 value but < the column 3 value on such a matching row of File1, THEN write that File2 row's column 4 value to File3; ELSE write NA to File3.

After testing every row of File2, File3 should look like the following:

100
NA
25
40
NA
NA

To help illustrate, I can achieve this within Excel by doing the following:

Paste File 1 from cells A1:C9

Paste File 2 from cells D1:G6

In H1 write the following formula:

=IF(COUNTIFS($A$1:$A$9,D1, $B$1:$B$9,"<="&E1,$C$1:$C$9,">"&E1)>0,G1,"NA")

Copy formula down to H6.

But the real-world files exceed the size Excel can handle, so transferring this to Unix is my problem.

I have tried combining the above File1 and File2 data into one file and then using the following command:

awk '{if ($4==$1 && $5>=$2 && $5<$3) {print $7}}' file3.txt > file4.txt

But that only tests on a row level.

I feel like I need to use awk with NR==FNR, but I'm unsure where to start?

I am hoping I have made that clear, and that someone can kindly provide some assistance :slight_smile:

Many thanks

Ted (Completely new to unix and learning everyday)

Hi, see if this works:

awk -F'\t' '
  NR==FNR {                    # first file: build one min-max range per name
    if (!($1 in L))
      L[$1]=$2                 # L[name] = first start seen (the minimum)
    R[$1]=$3                   # R[name] = last end seen (the maximum)
    next
  }
  {                            # second file: test column 2 against that range
    print ($2>=L[$1] && $2<R[$1]) ? $4 : "NA"
  }
' file1 file2
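For anyone wanting to experiment, the harness below (my own wrapper, not part of the post above) recreates the tab-delimited sample files from the first post and runs the script; on that data set it prints 100, NA, 25, 40, NA, NA.

```shell
# Rebuild the sample files from the first post
# (tr converts the single spaces below into the tabs awk -F'\t' expects).
cat <<'EOF' | tr ' ' '\t' > file1
Sc1 10 20
Sc1 20 30
Sc1 30 40
Sc2 40 50
Sc2 50 60
Sc2 60 70
Sc3 70 80
Sc3 80 90
Sc3 90 100
EOF
cat <<'EOF' | tr ' ' '\t' > file2
Sc1 22 23 100
Sc1 44 45 50
Sc2 60 61 25
Sc3 79 80 40
Sc3 100 101 40
Sc4 60 61 25
EOF

# L[] keeps the first (lowest) start per name, R[] the last (highest) end;
# each file2 row is then tested against that single min-max range.
result=$(awk -F'\t' '
  NR==FNR { if (!($1 in L)) L[$1]=$2; R[$1]=$3; next }
  { print ($2>=L[$1] && $2<R[$1]) ? $4 : "NA" }
' file1 file2)

printf '%s\n' "$result"
```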

Hey, thanks for the help. I'm away from a Unix box to test at the moment... I'll try tomorrow and get back to you.

You're a legend! If this works hopefully you can help me understand the code lines :slight_smile:

May I ask why file1 defines multiple but contiguous intervals, when a single line defining min and max would do? E.g. 10 - 20, 20 - 30, and 30 - 40 could be condensed to 10 - 40 for Sc1 (which is what Scrutinizer's code does internally, assuming as well that intervals come in ascending order).
Would it be correct to assume that multiple interval definitions are there to allow gaps between intervals? If yes, try this (with, e.g., the Sc1 20 30 line missing from the sample file):

awk -F"\t" '
NR == FNR       {INT[$1] = INT[$1] $2 "-" $3 FS
                 next
                }
                {split (INT[$1], T)
                 OUT = "NA"
                 for (t in T)   {split (T[t], LM, "-")
                                 if ($2 >= LM[1] && $2 < LM[2]) OUT = $4
                                }
                 print OUT
                } 

' file1 file2
NA
NA
25
40
NA
NA
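To reproduce that run, the harness below (my own, not from the post) rebuilds the first post's files with the Sc1 20 30 line removed, converts them to tabs, and applies the interval-list script; it prints NA, NA, 25, 40, NA, NA as shown above.

```shell
# file1 as in the first post, minus the "Sc1 20 30" line;
# tr turns the single spaces into the tabs the script expects.
cat <<'EOF' | tr ' ' '\t' > file1
Sc1 10 20
Sc1 30 40
Sc2 40 50
Sc2 50 60
Sc2 60 70
Sc3 70 80
Sc3 80 90
Sc3 90 100
EOF
cat <<'EOF' | tr ' ' '\t' > file2
Sc1 22 23 100
Sc1 44 45 50
Sc2 60 61 25
Sc3 79 80 40
Sc3 100 101 40
Sc4 60 61 25
EOF

# Per name, collect a list of "L-R" intervals, then test each file2 row
# against every interval stored for its name.
result=$(awk -F'\t' '
NR == FNR {INT[$1] = INT[$1] $2 "-" $3 FS; next}
          {split(INT[$1], T)
           OUT = "NA"
           for (t in T) {split(T[t], LM, "-")
                         if ($2 >= LM[1] && $2 < LM[2]) OUT = $4
                        }
           print OUT
          }
' file1 file2)

printf '%s\n' "$result"
```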

With file1 from post #1, it's:

100
NA
25
40
NA
NA

In reality the files I have, which are millions of rows in size, are not contiguous or even nearly as uniform as File1 (many gaps exist). The data sets (and results) would not make sense if File1 were collapsed to min-max for each Scxxx name. I produced this sample just so the boundaries were easy for everyone to see.

The output (file3) definitely needs to be the below from the test files supplied:

100
NA
25
40
NA
NA

Yes. How about posting a decent, representative sample of your data, including gaps, so the proposals given can be tested against samples that reflect reality?


Apologies if there was any confusion. Please find below some data which matches the reality of the files.

Note:
Column 1 in Files 1 & 2 is not a complete list of Scxxxx names; some are missing.

The ranges within File 1 columns 2 & 3 are no longer contiguous, although they do not overlap for any given Scxxxx name.

File 1		
Sc1	1	11
Sc1	18	36
Sc1	45	48
Sc2	23	50
Sc2	67	112
Sc2	235	245
Sc3	2	23
Sc3	29	40
Sc3	90	100
Sc5	44	99
Sc8	12	16
Sc8	18	22
Sc8	66	88
Sc8	111	119
Sc8	555	578
Sc10	20	27
Sc10	40	54
Sc10	80	88
Sc10	120	156
Sc10	170	222
File 2			
Sc1	22	23	100
Sc1	44	45	50
Sc2	238	239	25
Sc3	38	39	40
Sc3	100	101	40
Sc4	90	91	10
Sc5	46	47	22
Sc6	44	45	99
Sc7	1	2	100
Sc8	12	13	21
Sc8	84	85	12
Sc8	561	562	35
Sc10	42	43	90
Sc10	124	125	80
Sc11	999	1000	100

Desired output from the above files should be:

100
NA
25
40
NA
NA
22
NA
NA
21
12
35
90
80
NA

File 2 has 15 lines to test in this example, so I would expect 15 lines in the output.

Your help is very much appreciated.
Cheers

OK; and what are your results when applying either of the above proposals?

With the second set of test data:

Scrutinizer's script incorrectly identifies the second line of file 2 as a match:

100
50 - This one should be NA
25
40
NA
NA
22
NA
NA
21
12
35
90
80
NA

Your script correctly identifies all records :slight_smile:

100
NA
25
40
NA
NA
22
NA
NA
21
12
35
90
80
NA

If you have the time could you please help me understand the code you've very kindly provided? Hopefully then I can write my own for similar tasks in the future.
Cheers


Hi - With an expanded dataset this script incorrectly matched the second line in file 2.

THAT's the right spirit we're after in these forums!

Here you go; further questions welcome (after having read the man page); have fun:

awk -F"\t" '                                                                    # start awk and define the field separator
NR == FNR       {INT[$1] = INT[$1] $2 "-" $3 FS                                 # for the first file, identified by total record No.
                                                                                # being equal to the file's record No. (FNR), save intervals to an
                                                                                # array indexed by $1 as a list of L-R L-R L-R etc.
                 next                                                           # stop processing this line, start over with  next
                }
                                                                                # this is processed for second file only
                {split (INT[$1], T)                                             # split the interval list into individual L-R into
                                                                                # temp array T
                 OUT = "NA"                                                     # predefine OUT should no match be found
                 for (t in T)   {split (T[t], LM, "-")                          # loop across all individual L-R entries, split each 
                                                                                # one into limits array, with LM[1] holding L(eft)    
                                                                                # and LM[2] the R(ight) border
                                 if ($2 >= LM[1] && $2 < LM[2]) OUT = $4        # if $2 fits between limits, set OUT to $4
                                }
                 print OUT                                                      # and print it
                }
' file1 file2                                                                   # specify input files
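For completeness, here is an end-to-end run of that script against the second, gapped data set from earlier in the thread (the harness around it is my own; tr converts the spaces to tabs). It writes the 15 expected lines to file3.

```shell
# Second sample data set: gaps between intervals, and some names
# present in only one of the two files.
cat <<'EOF' | tr ' ' '\t' > file1
Sc1 1 11
Sc1 18 36
Sc1 45 48
Sc2 23 50
Sc2 67 112
Sc2 235 245
Sc3 2 23
Sc3 29 40
Sc3 90 100
Sc5 44 99
Sc8 12 16
Sc8 18 22
Sc8 66 88
Sc8 111 119
Sc8 555 578
Sc10 20 27
Sc10 40 54
Sc10 80 88
Sc10 120 156
Sc10 170 222
EOF
cat <<'EOF' | tr ' ' '\t' > file2
Sc1 22 23 100
Sc1 44 45 50
Sc2 238 239 25
Sc3 38 39 40
Sc3 100 101 40
Sc4 90 91 10
Sc5 46 47 22
Sc6 44 45 99
Sc7 1 2 100
Sc8 12 13 21
Sc8 84 85 12
Sc8 561 562 35
Sc10 42 43 90
Sc10 124 125 80
Sc11 999 1000 100
EOF

# Collect each name's "L-R" interval list from file1, then test every
# file2 row against all intervals for its name; print $4 on a hit, else NA.
awk -F'\t' '
NR == FNR {INT[$1] = INT[$1] $2 "-" $3 FS; next}
          {split(INT[$1], T)
           OUT = "NA"
           for (t in T) {split(T[t], LM, "-")
                         if ($2 >= LM[1] && $2 < LM[2]) OUT = $4
                        }
           print OUT
          }
' file1 file2 > file3

cat file3
```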

Hugely appreciate the time you've taken to help me out. I'll now take some time to break this down, read around, and hopefully digest :wink:

It runs in about 3 hours on the 'real' dataset.
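On files of millions of rows, much of the time likely goes into rebuilding the interval array from the INT string for every single file2 line. One possible speed-up, which is my own sketch and not something tested against the real data, is to store each name's intervals in parallel arrays once while reading file1, and to stop scanning at the first matching interval:

```shell
# Rebuild the small sample files from the first post to demonstrate
# (tr converts the spaces to tabs).
cat <<'EOF' | tr ' ' '\t' > file1
Sc1 10 20
Sc1 20 30
Sc1 30 40
Sc2 40 50
Sc2 50 60
Sc2 60 70
Sc3 70 80
Sc3 80 90
Sc3 90 100
EOF
cat <<'EOF' | tr ' ' '\t' > file2
Sc1 22 23 100
Sc1 44 45 50
Sc2 60 61 25
Sc3 79 80 40
Sc3 100 101 40
Sc4 60 61 25
EOF

# Same logic as before, but intervals are kept in arrays L[name,i]/R[name,i]
# built once from file1, so no string splitting happens per file2 line,
# and the loop breaks as soon as a matching interval is found.
result=$(awk -F'\t' '
NR == FNR { i = ++n[$1]; L[$1,i] = $2; R[$1,i] = $3; next }
          { out = "NA"
            for (i = 1; i <= n[$1]; i++)
                if ($2 >= L[$1,i] && $2 < R[$1,i]) { out = $4; break }
            print out
          }
' file1 file2)

printf '%s\n' "$result"
```

On this sample it prints the same 100, NA, 25, 40, NA, NA as the original; whether it meaningfully beats 3 hours on the real data would need measuring.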

You're a legend :slight_smile: