Search multiple patterns in multiple files

Hi,
I have to write one script that has to search a list of numbers in certain zipped files.
For example, one file, file1.txt, contains the numbers. file1.txt contains 500,000 numbers, and I have to search for each number in certain zipped files (there are around 1000 zipped files, each about 5 MB).
I have to search for each number in the zipped files, and if a number is not present in any zipped file then I have to write it to an output file.

file1.txt
--------

7234834
2342346
65745654634
345423534
.
.
.
.
783458934
345345

Search all these numbers in zipped files.

abc.txt.gz.processed
xyz.txt.gz.processed
ere.txt.gz.processed
gfdf.txt.gz.processed
dfg.txt.gz.processed
dgg.txt.gz.processed
.
.
.
kjh.txt.gz.processed

outputfile.txt

number 35345, not found.
number 345345, not found.
number 87979, not found.
number 234234234, not found.
.
.
.
number 234234234, not found.
number 234234234, not found.

Sample zipped file format (I am providing 2 records of the zipped file):

KKKKK 1454545345 842011011920025500000001287009909427909 031378055730681 KKKKKK AAA MMMMMMM034535345345345345
.
.
.
.
 
KKKKK 1454545345 842011011920025500000001287009909427909 03156456456546 KKKKKK AAA MMMMMMM034535345345345345

The highlighted item (the long numeric field, e.g. 031378055730681 in the first record above) is the number to search for.

I wrote a script, but it is taking too much time: around 2 minutes to search for 1 number, so searching all the numbers would take 500,000 * 2 minutes, which is not a feasible solution because I have to run this script daily. If I run the command in the background, then UNIX throws an error saying it can't fork any more processes.

The script that I wrote is:

#!/usr/bin/ksh
for num in `cat file1.txt`
do
find . -name "*processed" -print | xargs gunzip -c | grep -q $num || echo "$num not found" >> outputfile.txt &
done

Please help me fine-tune this script so that I can get the output in less time.
Thanks

Do you have room to unzip all the files somewhere?

Unzip all files and append to one big file (your havelist), then use awk to check each line of your havelist against file1.txt:

( find . -name "*processed" -print | xargs gunzip -c ) > /scratch/havelist
awk ' NR == FNR { F[i++]=$0; next}
    { for(i in F) if(index($0, F[i])) delete F[i]; }
    END { for(i in F) print "number "F[i]" not found." } ' file1.txt /scratch/havelist > outputfile.txt
rm /scratch/havelist

awk may chew a big chunk of memory as it loads the 500,000 numbers into its array, and you won't get any output till it's done, but it will be much quicker than your original attempt.
Some more efficiency can be gained if you can say that each processed-file line only matches 1 number from file1.txt (i.e. a line in a processed file doesn't contain 2 or more of the numbers you are looking for), as sketched below.
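For example, a rough (untested) sketch of that variant, which stops checking keys as soon as a line has matched one:

awk ' NR == FNR { F[i++]=$0; next}
    { for(i in F) if(index($0, F[i])) { delete F[i]; next } }
    END { for(i in F) print "number "F[i]" not found." } ' file1.txt /scratch/havelist > outputfile.txt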

But, with record counts this large you should really be considering using a database rather than flat/zipped files.

---------- Post updated at 08:47 AM ---------- Previous update was at 08:15 AM ----------

If you just don't have room to extract to a scratch file, you can do the search through a pipe:

( find . -name "*processed" -print | xargs gunzip -c ) | awk ' NR == FNR { F[i++]=$0; next}
    { for(i in F) if(index($0, F[i])) delete F[i]; }
    END { for(i in F) print "number "F[i]" not found." } ' file1.txt - > outputfile.txt

Thanks Chubler for the reply...
Can you please explain what is going on inside the command?

The output file made is 11.4 GB

-rw-r--r-- 1 user group 11388572164 Jan 21 11:32 outputfile.txt

Is the error occurring because of the large size of the file? It's giving the error below after running this command:

awk ' NR == FNR { F[i++]=$0; next}
{ for(i in F) if(index($0, F[i])) delete F[i]; }
END { for(i in F) print "number "F[i]" not found." } ' file1.txt havelist > outputfile.txt

-----------------------------------------------------------------------

Syntax Error The source line is 1.
The error context is
NR == >>> <<<
awk: 0602-500 Quitting The source line is 1.

-----------------------------------------------------------------------

Please suggest

---------- Post updated at 02:13 AM ---------- Previous update was at 01:10 AM ----------

After executing it, I am getting the below error.

awk: 0602-561 There is not enough memory available now.
 The input line number is 3.47414e+06. The file is Bigfile.
 The source line number is 2.

---------- Post updated at 04:02 AM ---------- Previous update was at 02:13 AM ----------

Hi Chubler_XL,
Thanks for the quick reply.:b:
It's working, but it is still taking too much time. Is it possible to search in a faster way?

With some tweaking it may still be possible to get this brute-force solution to work fast enough, but it's not looking good. I suspect you are running out of physical memory and the system is swapping. How big is file1.txt, and how much physical memory do you have on your system?
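One tweak worth trying (an untested sketch; the 100000-line chunk size and the /tmp/keys. prefix are just placeholders) is to split file1.txt so awk holds fewer keys at a time, at the cost of scanning the havelist once per chunk:

split -l 100000 file1.txt /tmp/keys.
for chunk in /tmp/keys.*
do
    awk ' NR == FNR { F[i++]=$0; next}
        { for(i in F) if(index($0, F[i])) delete F[i]; }
        END { for(i in F) print "number "F[i]" not found." } ' $chunk /scratch/havelist
done > outputfile.txt
rm /tmp/keys.*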

You could consider retaining some of the work from previous scans. This really depends on your dataset and leads to the following questions about your data.

How static is it?
I'd assume the zip file contents don't change much, but perhaps you remove old zips and add new ones?

How about the contents of file1.txt? Is it completely different each night? Are any of the items searched for again at later dates? (For example, if we know that XYZ wasn't in the zips last night and it's searched for again, all we need to scan are the files added since last night's scan; see the rough sketch below.)
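Something along these lines could pick up just the new zips each night (only a rough sketch; the marker-file names are made up and it assumes the file timestamps can be trusted):

touch /scratch/this_scan
find . -name "*processed" -newer /scratch/last_scan -print | xargs gunzip -c > /scratch/new_records
mv /scratch/this_scan /scratch/last_scan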

What Operating System and version are you running? This is very important. I haven't seen the "fork" error in many years.
We gather that you have ksh.

The script posted makes no sense because it does not search for zipped files.

If "file1.txt" contains 5,000,000 numbers this level of blunt processing is absurd in unix Shell when searching 5Gb of data.
You appear to want to search a specific field at a specific position within a record but have provided sample data in "file1.txt" which does not match the exact length of the highlighted field.

Do you have use of a professional Systems Analyst?
Do you have a database engine (e.g. Oracle) and use of professional Database Programmers?

IMHO you are way out of your depth. Hire a professional.

The background "&" within a 5,000,000 iteration loop is why you are getting "fork" errors. It would take a seriously special kernel build to create a unix which could cope with 5,000,000 concurrent processes (hmm. temptied to try it). I am surprised that you did not crash the computer with this irresponsible, uninformed and ignorant code.

PS: Given a decent commercial database engine and some top-class Database Programmers this problem is solvable.

Dear Chubler_XL,
Thanks for your involvement in solving the issue that I am facing.
The file file1.txt will be different each day. It contains fixed-length data like:
87654321089
09987625347
78346347655
23489237489
.
.
.
.
73246782364
23423423444

And I have to search all this data in zipped files.
Sample Zipped file record:
--------------------------
KKKKK 1454545345 842011011920025500000001287009909427909 031378055730681 KKKKKK AAA MMMMMMM034535345345345345
.
.
.
.

KKKKK 1454545345 842011011920025500000001287009909427909 03156456456546 KKKKKK AAA MMMMMMM034535345345345345
The numbers in file1.txt appear in the zipped files at a particular position, say from position 60 to position 97.

A number to be searched for may or may not be repeated the very next day.

File1.txt size:
---------------
3 MB

Physical memory
--------
               size      inuse       free        pin    virtual
memory      8388608    8372636      15972    2270610    5978749
pg space   20971520     319929

               work       pers       clnt      other
pin         1925282          0          0     345328
in use      5910340          0    2462296

PageSize  PoolSize      inuse       pgsp        pin    virtual
s   4 KB         -    7994892     319929    1989842    5601005
m  64 KB         -      23609          0      17548      23609

Operating System:
-----------------
UNIX-AIX

AIX version:
------------
AIX legzone1 3 5 00C15CD44C00 (i.e. AIX 5.3)

---------------------------------------------------------------------------------

Dear Methyl,
Actually the file name is abc.txt.gz.processed... that's why I am searching for "*processed". We are first trying to search with the help of shell scripting, and if that is not possible, then we will use a database to search.

@vsachan
Having re-read my post, I see I was a bit blunt yesterday. On my computer I can't unzip a file with the wrong file extension.

I have written one-off searches of large numbers of reasonably large compressed text files in Shell and realise the practical limits. On your scale this is not a job for Shell programming.

This data came from somewhere and I would be very surprised if it only ever existed as a flat file. I suppose that you might be trying to find data rejections in context?

I am very concerned that this now appears to be a fuzzy search. The length of the search string and the position and size of the searchable data vary between your posts. This makes the software design so much more difficult and I withdraw any implication that this task is feasible.

On further reflection, I strongly advise that you get a Systems Analyst and a Database Designer on this job with a view to possibly using a database approach. To my mind it is too early to engage a Database Programmer.

I created a file1.txt with 500,000 keys and did some timing tests on my laptop (2.6 GHz i3 M 350): awk takes about 1.08 secs to check a zip record against the 500K keys. I loaded the keys into a linked list in C and did a substring match, and this took 0.842 secs per zip record (so with a custom C program you might expect a 15%-20% improvement in speed).

The real issue here is the substring match of each key; if you could determine the key of a zip datafile record and do a database or B-tree lookup, it would only take a few milliseconds per zip datafile record. For example, if you could say that the key for the data record

KKKKK 1454545345 842011011920025500000001287009909427909 03156456456546 KKKKKK AAA MMMMMMM034535345345345345

Was "56456456546" and indexed search could be done against your 500K keys and the matching key ticked off as found very quickly. Without this ability you are could really only expect to check in the order of 10 ziprecords per second.

Can you please provide the code that you used so that I can check it on the UNIX system?

Yesterday I tried loading the keys into a C++ B-tree and then pulling a key from your zip string (11 chars from position 60) and doing a lookup. This managed to check 20,000 zip records in approx 0.7 sec. This highlights the power of indexed lookups; more work should probably be done on parsing your data records and determining the key from each record.
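In the meantime, the same indexed-lookup idea can be sketched in awk, since an awk array gives you a hash lookup instead of a substring scan (a rough sketch only; it assumes the key really is the 11 characters starting at column 60, which you need to verify against your data):

awk ' NR == FNR { F[$0]; next}
    { k = substr($0, 60, 11); if (k in F) delete F[k] }
    END { for(k in F) print "number "k" not found." } ' file1.txt /scratch/havelist > outputfile.txt

In my timing tests it is this style of lookup that makes the difference, much more than the language it is written in.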

From the example zip record in my previous post we could look up:

"56456456546"
"6456456546 "
"456456546  K"
"56456546 KK"
"6456546 KKK"
...
"MMMM0345353"
"MMM03453534"

Depending on the region where the data appears and what limitations you have on key characters (e.g. are spaces or letters allowed in the keys?), up to 27 lookups would be required for each record; this should still be able to process close to 1,000 records per second.
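In awk that sliding-window check might look something like this (again only a sketch; the 60-87 start columns are a guess based on your position 60 to 97 description):

awk ' NR == FNR { F[$0]; next}
    { for(p = 60; p <= 87; p++) {
          k = substr($0, p, 11)
          if (k in F) { delete F[k]; break }
      } }
    END { for(k in F) print "number "k" not found." } ' file1.txt /scratch/havelist > outputfile.txt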

As you can see, it's important to find out as much about your data as possible. Ideally the processing program should look at a record, identify its type, extract the proper key exactly, and report errors on badly formatted records. This avoids false positives, ensures you are looking up what is intended, and lets you react to format changes within the data records.

As methyl pointed out earlier, getting a Systems Analyst involved is your best bet, as we can only advise on the information you supply here and there may be even better ways to get the result you require.

@vsachan
What Database Engine and version do you have?
Did the flat file data come from an extract from your main database(s)?
Have you now developed code to match your requirements?