Need help converting df output to megabytes

I need some help converting the disk space values from kilobytes to megabytes. I can't use df -h because the report has to be in megabytes for some disk space tracking software the customer is using.

I have been trying to assign the values to shell variables outside of awk so I can do the calculations there, but I'm not having much luck.

Can someone help me with this code?

#!/bin/sh

dat1=`date '+%m/%d/%Y'`

hst=`hostname`

df -k | sed -e '/^Filesystem/d' | awk -v OFS=',' -v dat="$dat1,$hst" ' {print dat,$1,$6,$3,$4 } '

The output right now looks something like this.

11/15/2012,hostname,/dev/sda8,/,846516,4931376
11/15/2012,hostname,/dev/sda9,/home,212488,1713428
11/15/2012,hostname,/dev/sda7,/usr,3855572,5774300
11/15/2012,hostname,/dev/sda6,/tmp,761336,8868536
11/15/2012,hostname,/dev/sda5,/var,2063256,7566616
11/15/2012,hostname,/dev/sda3,/opt,6649916,12617288
11/15/2012,hostname,/dev/sda1,/boot,19031,76836

Is there an easy way to just divide by 1024 in the awk portion of the script? It looks like I need to do something to force the value to be numeric, because the result is always 0.

How about this:

df -Pk | awk -v OFS=, -v dat="`date +%m/%d/%Y`,`hostname -s`" '!/^Filesystem/ { print dat,$1,$6,$3/1024,$4/1024}'
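For what it's worth, awk already does its division in floating point, so no cast is needed; a result of 0 usually means the arithmetic was done in the shell (where `$((846516 / 1048576))` truncates to 0) rather than in awk. If you ever do need gigabytes, you can divide by 1048576 (1024 * 1024) and use printf to control the rounding. A minimal sketch, using a made-up sample line in `df -Pk` format (device, 1K-blocks, used, available, capacity, mount point), since real numbers vary per system:

```shell
# awk divides in floating point; printf "%.2f" rounds to two decimals.
# The input line is a hypothetical sample mimicking `df -Pk` output.
printf '/dev/sda8 10485760 846516 4931376 15%% /\n' |
  awk '{ printf "%s,%s,%.2f,%.2f\n", $1, $6, $3/1048576, $4/1048576 }'
# prints: /dev/sda8,/,0.81,4.70
```

The same `printf` trick drops into the one-liner above in place of `print` if the tracking software wants a fixed number of decimal places.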