Removing leading zeros for a decimal column

I need to remove leading zeros from a decimal column in a file which has string & decimal values:

,,,,,6630140,XXXXXXXXXXXXXXX, 0020.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 0010.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz), 1300.00,USA

My file contains 9 columns. Out of the 9 columns, the 8th column contains the decimal/integer values.

For my requirement, I just need to remove the leading zeros from this numeric column. For example:

0020.00 should be converted into 20.00
0010.00 should be converted into 10.00
1300.00 should remain the same: 1300.00

I tried the command below in Unix, but it didn't work for my requirement.

sed 's/^[0].*//' sample.txt

It works well for 0020 or 0010: the command converts 0020 to 20 and 0010 to 10. But when there is a fraction in the number, it doesn't work.

Any suggestions?

Thanks,
N
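
For context, a minimal sketch of one way to do this with sed alone, assuming the file is comma-separated with the value always in the 8th field, as in the sample above. The pattern skips the first seven fields and removes only the zeros in front of the first significant digit, so the digits after the decimal point (and a lone 0, as in 0.50) are left untouched:

sed 's/^\(\([^,]*,\)\{7\} *\)0*\([0-9]\)/\1\3/' sample.txt

Anchoring on the first seven commas also means leading zeros in the earlier columns are not touched.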

nawk -F, '$8=sprintf("%.2f", $8)' OFS=, myFile

Using sed:

sed -e 's/, *0*\./,0./' -e 's/, *0.*\([1-9]\)/,\1/' inp_file

If I use this code:

sed -e 's/, *0*\./,0./' -e 's/, *0.*\([1-9]\)/,\1/' sample.txt
00125
0245

If I use this code:

nawk -F, '$8=sprintf("%.2f", $8)' OFS=, sample.txt
00125,,,,,,,0.00
0245,,,,,,,0.00
0000546.0215,,,,,,,0.00

It is not working as expected.
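
Worth noting: in this test data each line holds just the bare number, so there is no 8th comma-separated field for the awk command to reformat (hence the 0.00 and the run of commas), and sprintf("%.2f") would in any case round away longer fractions such as .0215. A rough sketch of a sed that only strips the leading zeros from the start of each line, assuming one number per line, and leaves the fractional digits as they are:

sed 's/^0*\([0-9]\)/\1/' sample.txt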

bash-2.05$ cat mar.txt
,,,,,6630140,XXXXXXXXXXXXXXX, 0020.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 0010.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz), 1300.00,USA
bash-2.05$ nawk -F, '$8=sprintf("%.2f", $8)' OFS=, mar.txt
,,,,,6630140,XXXXXXXXXXXXXXX,20.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz,10.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz),1300.00,USA

Treat your data like this:

 echo "0020.00
0000546.0215" |awk '{print $1*1 FS $2}' FS=.
20.00
546.0215

When I try it, it is not working; the error it gives is:

$ nawk -F, '$8=sprintf("%.2f", $8)' OFS=, teat.txt
nawk: can't open file teat.txt
 source line number 1
$

Does this approach help you?

echo ",,,,,6630140,XXXXXXXXXXXXXXX, 0020.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 0010.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz), 1300.00,USA" |awk -v FS=. '{x=substr($1,1,match($1,", ")+1);y=substr($1,match($1,", ")+2)*1;print x y FS $2}'
,,,,,6630140,XXXXXXXXXXXXXXX, 20.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 10.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz), 1300.00,USA

marpadga18,

Maybe try this:

$ echo "0035.21
1000097.88000100
0002648.000012
0000546.0215" | sed 's/\(0*\)\(.*\)/\2/'
35.21
1000097.88000100
2648.000012
546.0215

and for your sample data

$ echo ",,,,,6630140,XXXXXXXXXXXXXXX, 0020.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 0010.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz), 1300.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 0002648.000012" | sed 's/\(.*, \)\(0*\)\(.*\)/\1\3/'

,,,,,6630140,XXXXXXXXXXXXXXX, 20.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 10.00,USA
,,,,,6630150,XXXXXXXXXXXXXXX(xyz), 1300.00,USA
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz, 2648.000012

Hope this is what you need.

 $ ruby -F"," -lane '$F[7]=format("%.2f",$F[7]); print $F.join(",")' file 
,,,,,6630140,XXXXXXXXXXXXXXX,20.00,USA 
,,,,,6630150,XXXXXXXXXXXXXXXL (xyz,10.00,USA 
,,,,,6630150,XXXXXXXXXXXXXXX(xyz),1300.00,USA