blank spaces getting deleted

I have to filter data from a file based on the value of the first three characters of each record. I have used the following logic:

FIN=$LOC/TEST2.TXT
FEEDFILE=$LOC/TEST1.TXT

while read FDROW
do
   FEEDROW=$FDROW;
   DTYPE=`echo $FEEDROW | cut -c 1-3`
   if [ $DTYPE -eq 300 ] ; then
      echo $FEEDROW >> $FIN
   fi
done < $FEEDFILE

However, there is one problem occurring in this case: the records are shrinking. While reading the file, when a record is stored in the $FDROW variable, the blank spaces within the record are being truncated. This creates problems because the output file I am producing is a flat file, so the field positions get disturbed in TEST2.TXT. Is there a way to prevent this, or is there another way this can be done?

Try modifying the Internal Field Separator (IFS):

FIN=$LOC/TEST2.TXT
FEEDFILE=$LOC/TEST1.TXT

while IFS= read FDROW
do
   FEEDROW=$FDROW;
   DTYPE=`echo $FEEDROW |cut -c 1-3`
   if [ $DTYPE -eq 300 ] ; then
      echo $FEEDROW >> $FIN
   fi
done < $FEEDFILE
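A quick illustration of the difference, using a made-up line with leading blanks:

```shell
# With the default IFS, 'read' strips leading/trailing blanks;
# with IFS= the line is stored in the variable verbatim.
printf '   300 padded\n' | { read LINE; echo "[$LINE]"; }        # prints: [300 padded]
printf '   300 padded\n' | { IFS= read LINE; echo "[$LINE]"; }   # prints: [   300 padded]
```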

Jean-Pierre.

awk '/^300/' "$FEEDFILE" > "$FIN"
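To see that this preserves spacing, here is a small test with a hypothetical sample feed file (the file name and contents are made up for illustration):

```shell
# awk prints matching lines verbatim, so internal spacing survives.
SAMPLE=$(mktemp)
printf '300  A   B\n400  C   D\n' > "$SAMPLE"
awk '/^300/' "$SAMPLE"    # prints: 300  A   B
rm -f "$SAMPLE"
```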

I tried using IFS, but it's still not working.

Add quotes around $FEEDROW in the echo statement:

FIN=$LOC/TEST2.TXT
FEEDFILE=$LOC/TEST1.TXT

while IFS= read FDROW
do
   FEEDROW=$FDROW;
   DTYPE=`echo $FEEDROW |cut -c 1-3`
   if [ $DTYPE -eq 300 ] ; then
      echo "$FEEDROW" >> $FIN
   fi
done < $FEEDFILE
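The quotes matter because, without them, the shell word-splits the variable and echo rejoins the words with single spaces. A minimal demonstration, with a made-up record value:

```shell
# Unquoted expansion collapses runs of blanks; quoting preserves them.
FEEDROW='300  X    Y'
echo $FEEDROW      # prints: 300 X Y
echo "$FEEDROW"    # prints: 300  X    Y
```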

That said, I think that aubu23's solution is the best way to do the work.

Jean-Pierre.

Thanks, the use of quotes has done the trick. Thanks also to ABU; that approach is working as well.