Reducing input file size after pattern search

I have a very large file with millions of entries identified by @M. I am using the following script to "extract" entries based on specific strings/patterns:

#!/bin/bash
if [[ -f $1 ]]
then
	file=$1
else
	echo "Input_file passed as an argument $1 is NOT found."
	exit 1
fi
MID=(NULL "string-1" "string-2" "string-3" "string-4")	# NULL placeholder so the strings are indexed 1..4
tot=$(grep -c "^@" < "$file" )
echo "Total " "$tot" > log.txt

for y in {1..4}
do
	awk -v search="${MID[$y]}" '$2 ~ search { print $0 }' "$file" > "MID-$y.txt"
	awk -v Id="MID-$y" -v pct="$tot" '/^@M/ {count++} END { print Id "\t" (count*100)/pct }' "MID-$y.txt" >> log.txt
done

I believe it would be more "cost-effective" to reduce the size of the input file by eliminating the entries that have already been "extracted" during the earlier loops. Thus, by the time the last strings are being searched, the processing time would have been significantly reduced. What would be the most efficient way to accomplish such a task, considering that I am dealing with a sizable infile?
Thanks in advance!

One single awk script, no grep, nothing else.

Read the input file once, keeping a running total in the variable tot. For the array count[],
use a variable MID to decode which element to increment; index count[] by the element of MID[].

Print the final totals in an END{} clause.

Since I do not get why you use "^@" and "^@M" as search patterns on the same records you've already searched, I'm not comfortable writing an example.

Please become accustomed to providing decent context info for your problem.
It is always helpful to support a request with system info like OS and shell, related environment (variables, options), preferred tools, and adequate (representative) sample input and desired output data, along with the logic connecting the two, to avoid ambiguities and keep people from guessing.

Totally seconding jim mcnamara, here are some hints on condensing your script into one single awk script. This is just an attempt to translate your code; no reasonable testing was possible:

awk -v SARR="string-1 string-2 string-3 string-4" '
BEGIN   {for (n = split(SARR, TMP); n > 0; n--) SRCH[TMP[n]]    # collect the search strings as array indices
        }
/^@/    {tot++
        }
        {for (s in SRCH) if (($2 ~ s) && /^@M/) count[s]++
        }
END     {print "Total:", tot
         for (s in SRCH) print "MID-" s "\t" count[s]/tot*100
        }
' "$1"

OS: biolinux 8
preferred tools: AWK

Mini input file:

@M03333 AGCTGTGAstring-1GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTGTGAstring-2GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTGTGAstring-3GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTGTGAstring-4GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTGTGAstring-1GATCAGTGCATGG
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTGTGAstring-1GATCAGCCCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTGTGAstring-2CCATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 AGCTAAGAstring-2GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.

Output files:
log.txt file:

Total  8
MID-1	37.5
MID-2	37.5
MID-3	12.5
MID-4	12.5

MID-1.txt file:

@M03333 AGCTGTGAstring-1GATCAGTGCATGA
@M03333 AGCTGTGAstring-1GATCAGTGCATGG
@M03333 AGCTGTGAstring-1GATCAGCCCATGA

MID-2.txt file:

@M03333 AGCTGTGAstring-2GATCAGTGCATGA
@M03333 AGCTGTGAstring-2CCATCAGTGCATGA
@M03333 AGCTAAGAstring-2GATCAGTGCATGA

MID-3.txt file:

@M03333 AGCTGTGAstring-3GATCAGTGCATGA

MID-4.txt file:

@M03333 AGCTGTGAstring-4GATCAGTGCATGA

What I tried to explain, but obviously failed to convey, is that my bash script already outputs all the desired files (the log plus the MID files).
Now, what I would like to change is this part:

awk -v search="${MID[$y]}" '$2 ~ search { print $0 }' "$file" > "MID-$y.txt"

In my script, on each and every loop, the entire input file is scanned while searching for the strings.
Ideally, the input file should be reduced accordingly after each loop. Thus, on the second loop, the entries "extracted" during the first loop would not be "read", thereby reducing the processing time. On the third loop, all entries extracted in loops 1 and 2 would not be read either, and so on. As a result, the processing time for the last loops would be significantly smaller, since the file gets smaller with each loop.
I thought about including the following pieces in my loop:

	awk -v search="${MID[$y]}" '$2 !~ search { print $0 }' "$file" > "New-$file"
	mv "New-$file" "$file"

However, considering that the original input file is pretty large, the process of rewriting the input file on each loop, besides looking horrible in the script, might not save that much time. In a nutshell, I am trying to shrink the input file after each loop to save time during the last loops.
I hope this clarifies what I am trying to accomplish.
Thanks!

Did you even consider what Jim McNamara said and what I tried to cast into some sample code? Reading AND WRITING a large file multiple times - even if slightly reduced in size each pass - is an unnecessary task and load on the system. Adapting (and even simplifying) the cited sample code to your sample input and output:

awk -v SARR="string-1 string-2 string-3 string-4" '
BEGIN   {for (n = split(SARR, TMP); n > 0; n--) SRCH[TMP[n]] = n    # map each search string to its index
        }
/^@M/   {tot++
         for (s in SRCH) if ($2 ~ s)    {count[s]++
                                         print > ("MID-" SRCH[s] ".txt")
                                        }
        }
END     {print "Total:", tot
         for (s in SRCH) print "MID-" SRCH[s] "\t" count[s]/tot*100
        }
' file
Total: 8
MID-1    37.5
MID-2    37.5
MID-3    12.5
MID-4    12.5

cf M*
MID-1.txt:
@M03333 AGCTGTGAstring-1GATCAGTGCATGA
@M03333 AGCTGTGAstring-1GATCAGTGCATGG
@M03333 AGCTGTGAstring-1GATCAGCCCATGA
MID-2.txt:
@M03333 AGCTGTGAstring-2GATCAGTGCATGA
@M03333 AGCTGTGAstring-2CCATCAGTGCATGA
@M03333 AGCTAAGAstring-2GATCAGTGCATGA
MID-3.txt:
@M03333 AGCTGTGAstring-3GATCAGTGCATGA
MID-4.txt:
@M03333 AGCTGTGAstring-4GATCAGTGCATGA

seems to give exactly what you're after in ONE SINGLE read of the input file - however large it may be.

Hi Xterra,
I think you don't understand what is being suggested. If you have a file containing a million records, each of those records has a 1st line that is one of four values, and you want to create four output files where each of those output files contains all records that have the same 1st line; then you do not want to read that input file 4 times. You want to read it once and create all of your 4 output files in one pass. Doing this you read a million records, write a million records, and you're done.

What you are asking to do instead is read a million records, write ~250,000 records to one file, and write ~750,000 records to another file; then read ~750,000 records, write ~250,000 to one file and ~500,000 to another; then read ~500,000 records, write ~250,000 to one file and ~250,000 to another; and finally read ~250,000 records, write ~250,000 to one file and 0 to another. Why would you want to read ~2.5 million records and write ~2.5 million records instead of reading 1 million records and writing 1 million records?

The code that you currently have is reading 4 million records and writing 1 million records (i.e., 5 million I/O operations). What you are asking to do would read 2.5 million records and write 2.5 million records (i.e., 5 million I/O operations). Even if we skip the last read and write and just rename one of the last two output files, your plan still has 4.5 million I/O operations instead of the 2 million I/O operations being proposed by RudiC and jim mcnamara.
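
To lay the same arithmetic out side by side (all numbers from the paragraphs above, assuming one million records split into four roughly equal groups of ~250,000):

approach                        records read    records written    total I/O
one pass (RudiC/jim mcnamara)      1,000,000          1,000,000    2,000,000
your current 4-pass loop           4,000,000          1,000,000    5,000,000
shrinking the input each pass      2,500,000          2,500,000    5,000,000
shrinking + final rename           2,250,000          2,250,000    4,500,000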

Is there something else that you haven't told us about your data that would affect what I assume you are trying to do?

Jim, Rudi and Don
I deeply apologize! Indeed, I did not read/understand the code and Jim's suggestion well when they were first posted. I see the advantages over what I wrote and I am trying to dissect it. Quick question, for a different application: if my infile has the actual sequence in the second line of the record, something like this:

@M03333 
AGCTGTGAstring-1GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-2GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-3GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-4GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-1GATCAGTGCATGG
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-1GATCAGCCCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-2CCATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTAAGAstring-2GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.

And I would like to output the entire record using Rudi's code, e.g. for output file MID-1.txt:

@M03333 
AGCTGTGAstring-1GATCAGTGCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-1GATCAGTGCATGG
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.
@M03333 
AGCTGTGAstring-1GATCAGCCCATGA
+
CCCCCCCCCCCCCCGGGGGGGGG;;;;.,..,.

I would need to change the RS to \n, correct? How could I modify Rudi's code so I can append the two other lines?

With the code RudiC suggested, RS is already set to the default <newline> character.

Are there any @ characters in your input file other than the 1st character of each (multi-line) record?

Could the strings that you are searching for appear on any line other than the 2nd line in a record?

Yes, that's why I originally decided to use ^@M, since there is only one ^@M per record - always at the beginning.

Could the strings that you are searching for appear on any line other than the 2nd line in a record?

No. The DNA sequence is always the second line of each record.
PS: I meant to say FS, not RS.

The following was written and tested using a Korn shell, but will work with any POSIX-conforming shell. It does, however, depend on the version of awk that you are using allowing multi-character record separators. (The standard allows a multi-character RS value, but only requires that awk use the 1st character of RS. The GNU awk available on most Linux systems uses the full value, so I assume it will work on your biolinux 8 system.)
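
If you want to verify that assumption on your own system first, here is a throwaway one-liner (hypothetical, not part of the script):

# prints 2 if this awk honors the full "@M" separator,
# 3 if it only uses its 1st character ("@")
printf 'a@b@Mc' | awk 'BEGIN { RS = "@M" } END { print NR }'

Here is the script: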

#!/bin/ksh
IAm=${0##*/}
if [ $# -ne 1 ]
then	printf 'Usage: %s input_file\n' "$IAm" >&2
	exit 1
fi
file=$1

awk -v strings="string-1 string-2 string-3 string-4" '
BEGIN {	RS = "@M"	# each record starts at an "@M" marker
	FS = "\n"	# fields are the individual lines of a record
	ns = split(strings, s, / /)
}
FNR > 1 {	# FNR == 1 is the (empty) text before the 1st "@M"
	cnt++
	for(i = ns; i > 0; i--)
		if(index($2, s[i]))	# sequence is the 2nd line of a record
			break
	c[i]++	# i is 0 here if no string matched
	printf("%s%s", RS, $0) > ("MID-" i ".txt")
}
END {	printf("Total\t%d\n", cnt)
	for(i = 1; i <= ns; i++) {
		close("MID-" i ".txt")
		printf("MID-%d\t%.1f\n", i, cnt ? 100 * c[i] / cnt : 0)
	}
	if(c[0])
		printf("\n%d unmatched record%s written to MID-0.txt\n", c[0],
		    (c[0] > 1) ? "s" : "")
}' "$file"

Don
Thank you very much! That worked like a charm.