imas
October 12, 2008, 1:06pm
1
Hi,
I have a file, 1.txt, with the following entries:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
**
**
**
In file 2.txt I have the following entries:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
0789456|332211|10.20.30.40|078945633
1234567|225522|10.20.30.50|123456733
0321654|999999|10.20.30.40|032165433
0456123|777899|10.20.30.40|045612333
***
***
***
I want to take the IP address column from 1.txt, search for it in 2.txt, and comment out the duplicate entries from the second match onwards.
i.e., I need to have the following output in 2.txt:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
#0789456|332211|10.20.30.40|078945633
#1234567|225522|10.20.30.50|123456733
#0321654|999999|10.20.30.40|032165433
#0456123|777899|10.20.30.40|045612333
***
***
***
Below is the script I wrote, but it comments out all the duplicate entries:
for i in `cut -d"|" -f3 1.txt`
do
sed "s/^\(.*|$i|.*\)/#\1/" 2.txt > tmp.txt
mv tmp.txt 2.txt
done
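The reason the loop comments every match is that each sed pass is stateless: it has no memory of how many matching lines it has already seen, so it cannot skip the first one. Skipping the first match needs a per-value counter, which awk keeps naturally; a minimal sketch with hypothetical inline data (not the poster's files):

```shell
# Comment out every occurrence of a value after its first appearance.
# seen[$0] counts how many times this exact line has been seen so far.
printf '10.20.30.40\n10.20.30.50\n10.20.30.40\n10.20.30.40\n' |
awk '{ seen[$0]++; if (seen[$0] > 1) print "#" $0; else print }'
```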
Can someone guide me on how to leave the first duplicate entry untouched and comment out or delete from the second duplicate entry onwards?
Please do not close this thread if you do not know the answer.
Thanks
-Imas
Is this a homework question?
Assuming that it's not, what is the real-world problem behind your question?
Regards
imas
October 13, 2008, 12:00am
3
Yes, your assumption is correct: this is a real-world problem.
What I am trying to do is comment out the duplicate IPs found in the text file. The first occurrence is treated as the valid IP for a valid MAC; from the second occurrence onwards, the duplicate IPs should be commented out at the start of the line.
File 1.txt has the following entries:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
**
**
**
File 2.txt has the following entries:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
0789456|332211|10.20.30.40|078945633
1234567|225522|10.20.30.50|123456733
0321654|999999|10.20.30.40|032165433
0456123|777899|10.20.30.40|045612333
***
***
***
I need to have the following output in 2.txt:
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
#0789456|332211|10.20.30.40|078945633
#1234567|225522|10.20.30.50|123456733
#0321654|999999|10.20.30.40|032165433
#0456123|777899|10.20.30.40|045612333
***
***
***
The script I wrote is:
for i in `cut -d"|" -f3 1.txt`
do
sed "s/^\(.*|$i|.*\)/#\1/" 2.txt > tmp.txt
mv tmp.txt 2.txt
done
But the above script comments out every occurrence.
Your help is appreciated.
Thanks
-imas
awk:
nawk 'BEGIN{FS="|"}
{
  if (NR==FNR)        # still reading 1.txt: remember each IP
    a[$3]=0
  else
  {
    a[$3]++           # reading 2.txt: count this IP
    if (a[$3]>=2)     # second occurrence onwards
      print "#"$0     # comment the line out
    else
      print $0        # first occurrence: keep as-is
  }
}' 1.txt 2.txt
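For anyone reading along, the key idiom is NR==FNR: the total record number equals the per-file record number only while awk is still reading the first file, so that branch loads the IPs from 1.txt before 2.txt is scanned. A self-contained re-run of the same logic, as a sketch (it assumes plain POSIX awk behaves like nawk here, and uses a scratch directory rather than real files):

```shell
# Recreate a cut-down version of the sample files and apply the
# two-file awk logic: remember IPs from 1.txt, then comment out the
# second and later occurrences of each IP while printing 2.txt.
cd "$(mktemp -d)"                 # work in a throwaway directory

cat > 1.txt <<'EOF'
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
EOF

cat > 2.txt <<'EOF'
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
0789456|332211|10.20.30.40|078945633
1234567|225522|10.20.30.50|123456733
EOF

awk 'BEGIN{FS="|"}
NR==FNR { a[$3]=0; next }         # pass 1 (1.txt): note each IP
{
  a[$3]++                         # pass 2 (2.txt): count this IP
  if (a[$3] >= 2) print "#" $0    # 2nd occurrence onwards: comment
  else            print $0        # 1st occurrence: keep
}' 1.txt 2.txt
```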
imas
October 13, 2008, 2:37am
5
Hi Summer_Cherry,
A million thanks for the piece of code, which comments out matched lines from the second occurrence onwards at the start of the line.
intelsol2>cat 1.txt
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
intelsol2>cat 2.txt
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
0789456|332211|10.20.30.40|078945633
1234567|225522|10.20.30.50|123456733
0321654|999999|10.20.30.40|032165433
0456123|777899|10.20.30.40|045612333
intelsol2>
nawk 'BEGIN{FS="|"}
{
if (NR==FNR)
a[$3]=0
else
{
a[$3]++
if (a[$3]>=2)
print "#"$0
else
print $0
}
}' 1.txt 2.txt >3.txt
intelsol2>cat 3.txt
0152364|134444|10.20.30.40|015236433
0233654|122555|10.20.30.50|023365433
#0789456|332211|10.20.30.40|078945633
#1234567|225522|10.20.30.50|123456733
#0321654|999999|10.20.30.40|032165433
#0456123|777899|10.20.30.40|045612333
intelsol2>
Also, thanks a lot to the unix forum guys: "you ppl rock!!!"
FYI, I had been searching for this code all weekend and went through the sed manual "Sed - An Introduction and Tutorial", sub-topic "(/1, /2, etc. Specifying which occurrence)", but was unlucky with all my trial-and-error attempts.
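As a side note on why that tutorial section was a dead end: the numeric flag on sed's s command (e.g. s/x/Y/2) selects the Nth match within a single line, not the Nth matching line of the file, so it cannot express "skip the first matching line". A quick demo:

```shell
# The /2 flag counts matches on each line separately: only the second
# "x" on the first line is replaced; the lone "x" on line two, being
# that line's first match, is left untouched.
printf 'x x x\nx\n' | sed 's/x/Y/2'
```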
Also, Summer, could you please explain the code you have written, so that I can understand it better?
Again thanks a lot and you can close this thread.
Thanks
-imas