awk to filter out lines containing unique values in a specified column

Hi,

I have multiple files that each contain four columns of strings:

File1:

Code:

123	abc	gfh	273
456	ddff	jfh	837
789	ghi	u4u	395

File2:

Code:

123	abc	dd	fu
456	def	457	nd
891	384	djh	783

I want to compare the strings in column 1 of File1 with column 1 of each other file, and print those lines whose column 1 value appears in both files.

Desired Result:

Code:

123	abc	gfh	273
456	ddff	jfh	837
123	abc	dd	fu
456	def	457	nd

Can anyone help me?
Thanks!

Try this:

awk 'NR==FNR{A[$1]=$0;next};{if(A[$1]) print A[$1] RS $0}' File[12]
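
In case it helps to see what's going on: NR==FNR is only true while awk reads the first file, so the first block stores each File1 line in A, keyed on column 1. The second block then runs on File2 and prints the remembered File1 line followed by the current line whenever column 1 was already seen. The same one-liner spread out with comments (same behavior, just easier to read):

awk '
NR==FNR {               # only true while reading the first file
    A[$1] = $0          # remember the whole File1 line, keyed on column 1
    next
}
A[$1] {                 # later file: was this column 1 value in File1?
    print A[$1] RS $0   # print the stored File1 line, then this line
}' File[12]

Note the matches come out as pairs (File1 line, then the matching File2 line) rather than grouped by file as in your sample output.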

I was able to do it this way .. not as "slick" as the 1-line awk pilnet posted .. but it still works :)
might be easier to read, as well? for us rookies, anyway

for i in $(cut -f1 file1)
do
   # anchor to the start of the line so we only match column 1,
   # and -w so e.g. "12" can't match "123"
   if grep -qw "^$i" file2
   then
     grep -hw "^$i" file1 file2
   fi
done
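
For what it's worth, with the two sample files above (tab-separated, saved as file1 and file2) the loop should print the matches grouped by value:

Code:

123	abc	gfh	273
123	abc	dd	fu
456	ddff	jfh	837
456	def	457	nd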

The following works with more than two files, and allows repeated (non-unique) values in column 1:

awk '
# drop every key that was not seen in the file just finished
function cleanA() {for (i in A) if (!(i in B)) delete A[i]}
NR==FNR {A[$1]=$0; next}                    # 1st file: store lines by column 1
(FNR==1 && ++fn>1) {cleanA(); split("",B)}  # from the 3rd file on: prune A, reset B
($1 in A) {A[$1]=A[$1] RS $0; B[$1]}        # match: append line; referencing B[$1] marks the key as seen
END {cleanA(); for (i in A) print A[i]}     # final prune, print the surviving groups
' File1 File2 ...
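
A quick note on how this one works: A starts out holding the File1 lines keyed on column 1, B records which keys turned up in the file currently being read, and cleanA() runs at each file boundary (and once more at END) to delete every key the file just finished didn't contain. A key therefore survives only if its column 1 value appeared in every file, and its collected lines are printed as one group. Just be aware that "for (i in A)" makes no promise about output order; with the two sample files above it prints the same four lines as the loop solution, grouped by key.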