Hi guys,
I have been searching the threads but could not find an answer to my question.
I have two files: the first is a pattern file, and the second is the file I want to search. The output should be the matching lines from file2.
File1:
P2797f12af 44751228
P2b1204d0f 33470964
P2b1205f76 35815429
P2797f0250 8219027
File2:
P2797ea6c0 1942611 SAN SAN
P2797f12af 44751228 SAN SAN
P2b1204d0f 33470964 SAN SAN
P2b1205f76 35815429 SAN SAN
P2797f0250 8219027 SAN SAN
Output:
P2797f12af 44751228 SAN SAN
P2b1204d0f 33470964 SAN SAN
P2b1205f76 35815429 SAN SAN
P2797f0250 8219027 SAN SAN
I am able to do this with the command below:
fgrep -f file1 file2
But it is giving me an out of memory error, as file1 has more than 1 million lines.
I also tried splitting it:
split -l 10000 file1 file1.split.
for CHUNK in file1.split.* ; do
    fgrep -f "$CHUNK" file2
done
rm file1.split.*
This is also taking a lot of time. The first iteration finishes really quickly, but there is a long delay before the next one starts (presumably because every chunk has to re-scan file2 from the beginning). :wall:
Can you please let me know if I am doing something wrong here, or suggest an awk command that does the same thing?
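In case it helps, this is the kind of awk approach I have in mind (untested sketch; it assumes the first two whitespace-separated fields of each line form the lookup key, and that file1 fits in memory as an awk array):

```shell
# While reading file1 (NR==FNR), store "field1 field2" as keys in an
# array. While reading file2, print any line whose first two fields
# appear in that array. file2 is scanned only once.
awk 'NR==FNR { seen[$1" "$2]; next }
     ($1" "$2) in seen' file1 file2
```

Since awk hashes the keys, each file2 line is a constant-time lookup, so the run time should not blow up with the number of patterns the way repeated fgrep passes do.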
You guys are great... looking forward to your reply.