Extract Lines Containing a Keyword

Hi,

I have two files, say KEY_FILE and MAIN_FILE. I am trying to read KEY_FILE, which has only one column, and look up each of its values in MAIN_FILE to extract all the rows that contain that key.

I have written a script to do so, but somehow it is not returning all the rows (it gives the data for the last row in KEY_FILE only). I feel there is something wrong with the loop. Here is the script.

Please see what I am missing here, and do advise a better way if possible. Thanks.

Please post sample input and expected output.

if "-f" option is not in grep ..

$ xargs < key.txt | sed 's, ,|,g;s,^,egrep -i ",g;s,$,\" main.txt,g'| sh
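If your grep does support "-f", the whole thing collapses to a single command; a sketch, assuming the same key.txt and main.txt file names used above:

```shell
# Read every pattern from key.txt and print the lines of main.txt
# that match any of them. -F treats the keys as fixed strings, which
# is safer for numeric keys than interpreting them as regexes.
grep -F -f key.txt main.txt
```

This avoids building a command string and piping it to sh at all.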

@rangarasan: Here are the sample files.

The keyword is numeric. Lines in Main File are alphanumeric and contain spaces as well.

I guess I am getting the result correctly, but the problem could be that the output is being overwritten by the grep line every time. Is there a way to append every line produced by the grep command?

Hi,

Please use >> redirection instead of >.

> - truncates the file (removes existing data)
>> - appends to the existing data
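A quick demonstration of the difference, using a hypothetical out.txt:

```shell
echo "first"  > out.txt    # out.txt now holds: first
echo "second" > out.txt    # > truncates the file first; out.txt now holds: second
echo "third" >> out.txt    # >> appends; out.txt now holds: second and third
```

This is exactly why a grep inside a loop with > keeps only the matches for the last key.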

#!/bin/ksh
KEY_FILE=KeyFile.txt
MAIN_FILE=MainFile.txt
OUTPUT_FILE=Output.txt

# Start fresh so repeated runs do not keep appending to old output
> "$OUTPUT_FILE"

while read inputline
do
    # The key file has a single column, so take its first field as the key
    column=`echo "$inputline" | awk '{print $1}'`
    # >> appends each batch of matches instead of overwriting the file
    grep "$column" "$MAIN_FILE" >> "$OUTPUT_FILE"
done < "$KEY_FILE"
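For what it is worth, the loop (and the whole overwrite-vs-append problem) can be avoided by letting grep read all the keys at once; a sketch, assuming the same file names and a grep that supports -f:

```shell
#!/bin/ksh
KEY_FILE=KeyFile.txt
MAIN_FILE=MainFile.txt
OUTPUT_FILE=Output.txt

# One pass over MAIN_FILE: -f reads every key from KEY_FILE,
# -F matches them as fixed strings. A single > is fine here
# because grep writes all matching lines in one invocation.
grep -F -f "$KEY_FILE" "$MAIN_FILE" > "$OUTPUT_FILE"
```

This is also much faster on large files, since MAIN_FILE is scanned once instead of once per key.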

cheers,
Ranga:)