Severe performance issue while grep'ing a large volume of data

Background

The Unix flavor can be any of Solaris, AIX, HP-UX or Linux. I have the two flat files below.

File-1

Contains 50,000 rows, each with 2 pipe-separated fields.
The row structure is Object_Id|Object_Name, as follows:

111|XXX
222|YYY
333|ZZZ

File-2

Contains 5,000 rows with a single field in each row.
Each row is a filename with its full path, as below:

/app00/applmgr/aprod/appl/au/11.5.0/resource/XXAIMG_CUSTOM_11I.pld
/app00/applmgr/aprod/appl/xbol/11.5.0/forms/US/XXARTLONG.fmt
/app00/applmgr/aprod/appl/au/11.5.0/resource/XXINVIVCSU.pld

Task

I need to search for the occurrences of each Object_Name (from each row of File-1) in all 5,000 distinct files (whose names are stored in File-2) and store the search results in a third file with the row structure below. A brute-force approach therefore needs 50,000 x 5,000 = 250,000,000 search operations.

File_Name|Object_Id|Occurrence_Count
e.g.,
/app00/applmgr/aprod/appl/au/11.5.0/resource/XXINVIVCSU.pld|222|13
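
To give an idea of the scale, a naive nested-loop approach would be something like the sketch below; it needs one grep call per name per file, and grep -c counts matching lines rather than true occurrence counts, so even this is only approximate.

# Naive sketch: one grep per Object_Name per file = 250,000,000 grep calls.
# (grep -c counts matching lines, not individual occurrences.)
while IFS='|' read -r oid oname
do
        while read -r fname
        do
                count=`grep -c "$oname" "$fname"`
                echo "$fname|$oid|$count"
        done < File-2
done < File-1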

Request

Please suggest a shell-scripting method that does this job in the fastest possible time.

Thanks,
Souvik.

Did you mention the format and number of records of the files listed in File-2?
What is the most powerful computer you have of the ones you mention?
You mention a performance issue. What have you tried so far?

I think the amount of data is small enough to fit in one awk, like this:

# Make a temp file to hold the second column of file-1.
# We can feed the entire pattern list into grep -f, reducing 50,000 grep calls (one per pattern) to 1.
TMP=`mktemp`
awk '{ print $2}' < file-1 > "$TMP"

# feed the list of filenames into xargs, which calls grep.  Force grep to
# print filenames with -H, force it to print only the matching bit
# with -o, tell it to use the patterns as fixed strings with -F, and tell it
# to use TMP as the fixed strings with -f.
# 
# It will print a bunch of lines like filename1:oid1.
# 
# Then we tell awk to count each unique line (turning : into |) and print totals.
xargs grep -H -o -F -f "$TMP" < file-2 |
         awk -v OFS="|" -v FS=":"        \
                '{ C[$1 "|" $2]++; } END { for(k in C) print k,C[k]; }'

# clean up the temp file.
rm -f "${TMP}"

If the number of filenames is small enough, it'll run awk only twice and grep only once; otherwise it will call grep as many times as necessary to open the 5,000 files and feed all of its output through the one awk. If you're concerned about awk consuming too much memory, you can run grep | awk on individual files read from file-2, like this:

while read FILENAME
do
        grep -H -o -F -f "$TMP" "$FILENAME" | awk ...
done < file-2

This will be less efficient, running awk and grep 5,000 times instead of roughly 5,000/ARG_MAX times, but depending on the size of the files the difference may not be significant.
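
Another way to keep the counting step's memory down, if that one big awk array worries you, is to let sort and uniq do the counting on disk instead. A rough sketch, assuming no spaces or colons in the filenames or object names:

# Same grep as above, but count duplicate filename:objectname lines with
# sort | uniq -c, which spills to temp files instead of holding one big
# array in memory, then reshape the output into filename|name|count.
xargs grep -H -o -F -f "$TMP" < file-2 |
        sort | uniq -c |
        awk '{ sub(":", "|", $2); print $2 "|" $1 }'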

Results when run on my own test data:

$ ./extract.sh | sort
a/4|obj0|2
a/4|obj2|2
a/4|obj6|1
b/0|obj1|1
b/0|obj5|1
b/0|obj6|1
b/0|obj7|1
b/0|obj8|1
c/6|obj2|2
c/6|obj5|1
...
x/6|obj6|1
x/6|obj8|1
y/5|obj1|2
y/5|obj2|1
y/5|obj5|1
y/5|obj8|1
z/7|obj3|1
z/7|obj4|2
z/7|obj6|1
z/7|obj7|1

...where I'd created directories [a-z] with files [0-9] in each and put random object names from file-1 in them, one per line. If you don't care what order the results come in, you can skip the sort.
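
If anyone wants to recreate similar test data, something like this would do it (a rough sketch, not the exact commands I used; sort -R is a GNU extension):

# Build directories a-z with files 0-9 in each, every file holding a few
# random object names (column 2 of file-1), one per line, and record the
# resulting paths in file-2.
awk -F'|' '{ print $2 }' file-1 > names.tmp
for d in a b c d e f g h i j k l m n o p q r s t u v w x y z
do
        mkdir -p "$d"
        for f in 0 1 2 3 4 5 6 7 8 9
        do
                sort -R names.tmp | head -5 > "$d/$f"
                echo "$d/$f"
        done
done > file-2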

Dear friend Corona688,

I just saw your reply. I am at home now and will try your suggestion tomorrow. I will post feedback as soon as I have tested the results.

Anyway, many thanks for such detailed advice.

You seem to be solid enough in shell scripting.

Best Regards,
Souvik.

This especially is germane. If the files listed in file-2 consist of records with delimited fields, and if the only matching to be done is field-wide (no partial matches), then this problem could be solved in a manner that's simpler and more efficient than Corona's approach (which handles generic text formats).
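
To sketch the idea (purely hypothetical, since we don't yet know the format of those files: assume they are pipe-delimited and a name only ever matches an entire field):

# lookup.awk (hypothetical name): hash the 50,000 names once, then do
# exact field lookups instead of substring-searching every pattern.
NR == FNR { id[$2] = $1; next }         # the first file read is file-1
{
        for (f = 1; f <= NF; f++)
                if ($f in id)
                        count[FILENAME "|" id[$f]]++
}
END { for (k in count) print k "|" count[k] }

It could be driven the same way as Corona's script, e.g.:

xargs awk -F'|' -f lookup.awk file-1 < file-2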

Regards,
Alister


Hey, Corona688:

A few observations:

An FS=\| is necessary to properly split that file.
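
That is, the temp-file step would have to be something along the lines of:

awk -F'|' '{ print $2 }' < file-1 > "$TMP"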

Testing with the only version of grep that I have which supports -o (GNU grep 2.5.1, on a disused laptop that seldom sees any action): if multiple patterns match a single line (not knowing anything about the content of the files, that's a possibility), it does not print the filename before every match, only before the first, even with -H. If that output format has changed since, apologies for the false alarm.

Actually, assuming the splitting on file-1 is done correctly and $2 is sent to the temp file, the grepping is being done on the object names, not the oids. The result of the grep and awk count will be filename|object name|count, but the desired output is filename|oid|count.
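
One way around that (still subject to the -o output caveat above) would be to have the final awk read file-1 as well and translate the names back to ids. An untested sketch, assuming no colons or pipes in the paths or names:

# FS matches either ":" (grep output) or "|" (file-1).
xargs grep -H -o -F -f "$TMP" < file-2 |
        awk -F'[|:]' -v OFS='|' '
                NR == FNR { id[$2] = $1; next }   # file-1: Object_Name -> Object_Id
                { C[$1 OFS id[$2]]++ }            # grep lines: filename:object_name
                END { for (k in C) print k, C[k] }
        ' file-1 -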

Regards,
Alister

Good point.

Nuts, I thought that was a portable option. Throw that whole script out, then.

I did note it expected one pattern per line.

Argh. My script doesn't work, then.

It's looking less and less like there's really going to be an efficient solution if you have to parse that thing the hard way, line by individual line, on systems and shells with no features.


OK, here's a brute-force version in awk:

BEGIN   {       FS="|"  ;       OFS="|"
                # Read in the list of IDs: map Object_Name -> Object_Id.
                while((getline < "file-1") > 0) n[$2]=$1;
        }

{
        # Count every object name that occurs somewhere in this line.
        for(i in n)     if(index($0, i) > 0)
                o[FILENAME "|" n[i]]++;
}

END     {       for(k in o)     print k, o[k];  }

Don't think it'll be quite as efficient as grep, but it ought to do. It ran in two seconds on 5,000 files cached, maybe 5-10 seconds uncached. I don't believe I used any GNU-specific features either.

Put that in extract.awk and run with

xargs awk -f extract.awk < file-2

You can change o[FILENAME "|" n[i]]++; to o[FILENAME "|" i]++; if I somehow got name vs ID backwards again.


I estimate that to run in about 60 minutes on data similar to yours on a 'fast' machine. Use 'xargs -n 10' to keep it from eating all your memory. Maybe not so good. If there were some sort of pattern to the OIDs/names, that'd help a lot for finding them without having to brute-force check all 50,000 individually... Or, if you knew for a fact that you had or could get GNU grep, and didn't have more than one object per line, grep's hardwired performance is going to be way better than awk's scripted performance...
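
With the xargs -n 10 batching, the invocation from above just becomes:

xargs -n 10 awk -f extract.awk < file-2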

Hi Corona688,

Thank you very much. I used your tips (xargs grep) and managed to get a significant improvement in performance.

Regards,
Souvik