Searching a word in multiple files

Hi All,

I have an issue pulling some heavy records. My input file has 10,000 records which I need to compare against log files that are appended daily (from Sep 1st 2009 to date). I have tried grep, fgrep and even sed, but time is a factor for me and I cannot wait 5 days for the full output. Is there a more effective way to compare those words against the log files and print the matches?

Eg

xyz (data in file1) needs to be compared with the log files

1.txt
2.txt
.
.
.
5600.txt (5600 files in total)

Please help.

How about this?

grep xyz *.txt

tyler_durden

I have in fact tried grep -w xyz *.filename, but the problem is that with the huge amount of data to process it takes days to get my output. Is there any faster means of searching in Unix than grep and sed? Something like the pointers we use in C programs?
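If file1 holds one record per line, grep can take all of them as patterns at once with -f and treat them as fixed strings with -F, so each log file is scanned only once instead of once per record. A rough sketch (assuming one record per line in file1 and whole-word matches, as with the grep -w above):

# Scan every log once against all 10,000 records:
#   -F  treat the patterns as fixed strings, not regular expressions
#   -w  match whole words only
#   -f  read the list of patterns from file1
grep -Fwf file1 *.txt > matches.out

# If *.txt expands to too many arguments for one command line,
# feed the file names through xargs instead:
ls *.txt | xargs grep -Fwf file1 > matches.out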

1) Please give more clues (what is the content of file1? can you show an example of an <n>.txt file?)
2) What output do you expect?
Maybe use the "comm" command?

# comm needs sorted input, so sort file1 once and each log as it is processed
sort file1 > file1.sorted
ls *.txt | while read logfile
do
    echo "Records that appear in both $logfile and file1 :"
    sort "$logfile" > "$logfile.sorted"
    comm -12 "$logfile.sorted" file1.sorted
done
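If the records in file1 are whole lines of the logs, another option is a single awk pass that loads file1 into memory and then scans every log once, with no per-file sort. Only a sketch, assuming exact line-for-line matches and that file1 fits in memory:

# First pass (NR==FNR): remember every record of file1.
# Later files: print any log line that matches one of those records,
# prefixed with the name of the log file it came from.
awk 'NR==FNR { want[$0]; next }
     $0 in want { print FILENAME ": " $0 }' file1 *.txt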