Read two files, search for patterns, and store the output in a third file

Hello,
I have two files:
temp.txt
and temp_unique.txt

The second file contains the unique fields from the temp.txt file.

The strings stored are in the following form:

4,4
17,12
15,65
4,4
14,41
15,65
65,89
1254,1298

I'm able to get the total count of a particular string in the file with the command:

grep -cw '4,4' temp.txt

But the problem is that I want to execute this on temp.txt for each value in temp_unique.txt and store the output in a third file in the form:
String \t count

How can I deal with this? :confused:

Thanks in advance :slight_smile:

Try this,

awk 'NR==FNR{a[$0]++;next} a[$0]{b[$0]++} END {for(j in b) {print "count of ",j,":",b[j]}}' temp_unique.txt temp.txt
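That prints the counts to the screen; to store exactly the String <TAB> count layout in a third file, a small variation of the same idea should work (output.txt is just an example name):

awk 'NR==FNR{a[$0];next} $0 in a{b[$0]++} END {for(j in b) print j "\t" b[j]}' temp_unique.txt temp.txt > output.txt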

Or try this:

#!/bin/ksh
# for each value in temp_unique.txt, write "value<TAB>count" to outfile.txt
while read line
do
	count=$(grep -cw "$line" temp.txt 2>>error.file)
	printf "%s\t%s\n" "$line" "$count" >> outfile.txt
done < temp_unique.txt
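
Assuming the script above is saved as, say, count.ksh (the name is arbitrary), it can be run like this and the result checked in outfile.txt:

chmod +x count.ksh
./count.ksh
cat outfile.txt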

Great, man!
Thanks a lot pravin and michaelrozar17.
It's pretty fast and smart.