Matching 10 million records in one file against 10 million in another file

Dear All,

I have two files, each containing 10 million records in a delimited (CSV-style) format.
One file is input.txt, the other is status.txt.

input.txt -> contains several fields, including one unique id field (a primary key, we could say)
status.txt -> contains two fields only: 1. the unique id and 2. a status

Problem: match the id from input.txt against the id from status.txt and update/log the status accordingly in an output file.

Requirement: an efficient algorithm that gets the job done in minimal time. I tried Perl, but the system hangs during processing. Please suggest a workable way to do this. Is it doable in Perl, or in C/C++/Java?

Thanks.

What Operating System and version are you running?

How big are the files?
Are either or both of the files in sorted order?
Do the records in each file match one-for-one?
Does the output order matter?

Can you post sample input and matching output?

Is this an extract from a database where it might be easier to work on the data while it is still in the database?

Additional questions:

Do you need to run this match frequently, or is it a one-off job?
How frequently are the data files updated?
Are new records just appended to the files, or are they completely re-written?

The OS is Linux, and it's a one-time job (run occasionally). These are offline files and are not being updated. I need to build a process for future requirements.

It's not in a DB; these are actually application log files.
The files are about 1.5 GB each. Right now I'm just thinking about the best way/approach to complete the task...
I had tried using Perl hashes (didn't work); I guess keeping that much data in memory is not possible, hence the algorithm has to be really efficient here. :slight_smile:

Sample files:
Input.txt

20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059192;processed;Location;contact;status;email_id;2
20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059168;processed;Location;contact;status;email_id;1
20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:11496059220;processed;Location;contact;status;email_id;2

Status.txt

APPNAME@HOSTNAME06:11496059192;SUCCESS
APPNAME@HOSTNAME06:11496059224;SUCCESS
APPNAME@HOSTNAME06:11496059168;FAILURE
APPNAME@HOSTNAME06:11496059220;FAILURE
APPNAME@HOSTNAME06:11496059193;SUCCESS

I need to update the status field in input.txt with the status (SUCCESS/FAILURE) from status.txt.

Any comment about the order of the data in the files, and whether there is a one-for-one match between the two files (in which case the paste command might be suitable)?

Edit: Posts crossed. I can see that neither file is in any particular order and that your sample does not show a one-for-one match.

It's going to be necessary to sort both files. Does the order of the final output data matter?
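
If the output order doesn't matter, the standard sort/join utilities can do the whole match on disk, so nothing like 10 million records has to be held in memory at once. A rough, untested sketch (it assumes the id is the third ';'-separated field of input.txt and the first field of status.txt, as in your samples; note that join puts the id column first and appends the status at the end, so a final pass would be needed to restore the original column layout):

# GNU sort does an external merge sort, so it copes with files larger than RAM;
# LC_ALL=C keeps comparisons cheap and consistent between sort and join
export LC_ALL=C
sort -t';' -k3,3 input.txt  > input.sorted
sort -t';' -k1,1 status.txt > status.sorted

# join on the id; -a 1 also keeps input records whose id has no status line
join -t';' -1 3 -2 1 -a 1 input.sorted status.sorted > joined.txt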

There is at most a one-for-one match: it's not guaranteed that an id from input.txt is found in status.txt, but if it is found, there is only one matching line.
No, the order doesn't matter here.

Are these :11496059224; numbers unique identifiers, or can there be two or more lines with the same number? If they're unique identifiers, I think you could try something like this.

First of all, if status.txt is too big, let's split it into many "tiny" files:

split -l 100000 status.txt tinyfile

Then, here we go:

IFS=";:"
declare -a status
for file in tinyfile*; do
  # load one chunk of status.txt into memory: status[id] = SUCCESS/FAILURE
  while read -r x y z; do
     status[$y]=$z
  done < "$file"
  # scan input.txt and print only the records whose id matched this chunk,
  # with the status placeholder (h) replaced
  while read -r a b c d e f g h i l; do
     h=${status[$d]}
     [[ $h = "" ]] || printf '%s;%s;%s:%s;%s;%s;%s;%s;%s;%s\n' "$a" "$b" "$c" "$d" "$e" "$f" "$g" "$h" "$i" "$l"
  done < input.txt >> output.txt
  unset status
done

I haven't tried it, so I don't know how fast or slow it can be.

Here is a little bash script to generate tens of millions of test records (for those wanting to test the performance of their solutions):

make_test.bash

#!/bin/bash
# input-style records go to stderr, status-style records to stdout
for((i=1;i<30000000;i++)) {
   printf -v id "%s%04d%04d%03d" $i $((RANDOM%10000)) $((RANDOM%10000)) $((RANDOM%1000))
   [ $RANDOM -lt 14000 ] && echo "20.04.2012 11.08.44;RECV;APPNAME@HOSTNAME06:$id;processed;Location;contact;status;email_id;2" >&2
   [ $RANDOM -lt 16384 ] && echo "APPNAME@HOSTNAME06:$id;SUCCESS" || echo "APPNAME@HOSTNAME06:$id;FAILURE"
}

Call it like this ./make_test.bash > Status.txt 2> Input.txt

Here is my solution:

awk -F'[:;]' '
# first file (Status.txt): remember 1 for SUCCESS, 0 for FAILURE, keyed by id
NR==FNR{S[$2]=($3=="SUCCESS"); next}
# second file (Input.txt): with FS="[:;]", the id is $4 and the literal
# "status" placeholder is $8; replace it when the id was seen
{ if ($4 in S)
    print $1";"$2";"$3":"$4";"$5";"$6";"$7";"(S[$4]?"SUCCESS":"FAILURE")";"$9";"$10
  else
    print $0
}' Status.txt Input.txt > Result.txt

Will update this post when I know the runtime.
Edit: Stack dump at record 20,127,745 (of 27,713,184) while reading Status.txt, after 1m36s of runtime.

Seems like it's just too much data for my PC. The good news is that it didn't seem to take very long to read in the data, so there is hope for larger (64-bit) servers.

Well, I've tried my solution using just one big status file. Splitting it into many files is actually a very bad idea that would lead to extremely long run times (days at least), because every chunk of the status file forces a complete re-read of input.txt. So I used:

#!/bin/bash
IFS=";:"
for file in status.txt; do
   declare -a status
   # load the whole status file into memory: status[id] = SUCCESS/FAILURE
   while read -r x y z; do
      status[$y]=$z
   done < "$file"
   # single pass over input.txt; replace the status placeholder (h) when the id is known
   while read -r a b c d e f g h i l; do
      [[ ${status[$d]} = "" ]] || h=${status[$d]}
      printf '%s;%s;%s:%s;%s;%s;%s;%s;%s;%s\n' "$a" "$b" "$c" "$d" "$e" "$f" "$g" "$h" "$i" "$l" >> output.txt
   done < input.txt
   unset status
done
exit 0

It took about 2.7 GB of RAM to load the status array (so about twice the file size); the files were on a slow "green" disk and the CPU was clocked at 1.6 GHz.

The script apparently works, but it's slow: 45 minutes.