Hi Experts,
I have an issue with a huge file.
My requirement is to check the characters at positions 155-156 of each record; if they match 31 or 39, that record should be routed to a separate file for each type.
The main file has around 1,459,328 lines and is 2 GB in size. I tried the code below, which takes around 2 hours to run.
while IFS= read -r line
do
    # One subshell (echo | cut) is forked per line here.
    record_type=$(printf '%s\n' "$line" | cut -c 155-156)
    if [ "$record_type" = "31" ]; then
        printf '%s\n' "$line" >> ./31.txt
    elif [ "$record_type" = "39" ]; then
        printf '%s\n' "$line" >> ./39.txt
    fi
done < LOAD.txt
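Most of the two hours goes to forking that per-line subshell (about 1.4 million of them). A variant using shell substring expansion (`${line:154:2}`, available in bash and ksh93; the offset is 0-based) avoids the forks entirely, though a single awk pass is faster still. The two sample records here are hypothetical padding, just to put the type code at columns 155-156:

```shell
# Hypothetical sample: pad to 154 chars so the type code lands at columns 155-156.
printf '%-154s%s\n' 'rec-a' '31X' 'rec-b' '39X' > LOAD.txt

# No external commands inside the loop: ${line:154:2} extracts
# columns 155-156 (0-based offset 154) without forking cut.
while IFS= read -r line; do
    record_type=${line:154:2}
    if [ "$record_type" = "31" ]; then
        printf '%s\n' "$line" >> 31.txt
    elif [ "$record_type" = "39" ]; then
        printf '%s\n' "$line" >> 39.txt
    fi
done < LOAD.txt
```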
Whereas I modified this to use awk, which still takes more than 30 minutes, and the results vary.
awk '/839I/ {print $0}' LOAD.txt > record_39.txt &
awk '/831I/ {print $0}' LOAD.txt > record_31.txt &
To verify the counts:
cut -c 155-156 LOAD.txt > smp.log
grep -c '31' smp.log
1182483
wc -l record_31.txt
1182495 record_31.txt
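The 12-line gap is most likely because `/831I/` matches that string anywhere on the line, not only at columns 155-156. A small hypothetical demonstration of such a false match:

```shell
# A record that contains "831I" early on, but whose columns 155-156 are "39":
printf '%-10s831I%-140s39ZZ\n' 'x' 'y' > demo.txt

cut -c 155-156 demo.txt          # -> 39
awk '/831I/' demo.txt | wc -l    # -> 1: the regex still matches
```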
I even tried this:
awk '$5 ~ /39/ {print}' LOAD.txt
but $5 does not reliably fall at positions 155-156, since the field number depends on the whitespace in each record.
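One approach that avoids both the per-line forks and the field-position problem is to test the fixed columns directly with awk's substr(), writing both output files in a single pass. This is a sketch under the assumption that the type code always occupies exactly columns 155-156 (the two-record sample is hypothetical; output file names follow the ones above):

```shell
# Hypothetical two-record sample with the type code at columns 155-156.
printf '%-154s%s\n' 'aaa' '31I201' 'bbb' '39I201' > LOAD.txt

# Single pass over the file: substr($0, 155, 2) reads exactly columns
# 155-156, so a stray "31" or "39" elsewhere on the line cannot cause
# a false match.
awk '{
    rt = substr($0, 155, 2)
    if (rt == "31")      print > "record_31.txt"
    else if (rt == "39") print > "record_39.txt"
}' LOAD.txt
```

Since the file is read only once and no subprocess is spawned per line, this should run in minutes rather than hours on a 2 GB input.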
Sample records:
14115726 0000000000 00000000000000000000000000000000000000000000000000000000 000 00I201
06485726 0000000000 00000000000000000000000000000000000000000000000000000000 000 805I201
18005726ABCUS0000005726 01002080000000000000000000000000000000000000000000000000370291010381009 20090218 000 I 839I201
18005726ABCUS0000005726 08009100000000000000000000000000000000000000000000000000370290173421008 20101203 000I 839I201
18005726ABCUS0000005726 00000020000000000000000000000000000000000000000000000000370282295281006 20060706 000C 831I201
18005726ABCUS0000005726 01002080000000000000000000000000000000000000000000000000370282010171003 20090216 000 I 831I201
Is there another way to get the correct results?
Thanks
Senthil.