Why does the # of blocks change for a file on a ZFS filesystem?

I created a zpool and two ZFS filesystems in OpenSolaris, and shared both over NFS:

> zpool history

History for 'raidpool':
2009-01-15.17:12:48 zpool create -f raidpool raidz1 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
2009-01-15.17:15:54 zfs create -o mountpoint=/vol01 -o sharenfs=on -o canmount=on raidpool/vol01
2009-01-15.17:20:13 zfs create -o mountpoint=/vol02 -o sharenfs=on -o canmount=on -o compression=lzjb raidpool/vol02

I did not create vol01 and vol02 as ZFS volumes (zvols). I know you can set a fixed block size when you create a volume, but volumes cannot be shared as NFS exports.

I am assuming that vol01 and vol02 use variable block sizes because I did not explicitly specify one. My assumption is that ZFS would pick the smallest power-of-two block size that fits the data, with 512 bytes being the minimum.
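
For reference, I did not set the recordsize property on either filesystem (the zpool history above shows the exact create commands), so something like this should show whatever defaults ZFS is using:

# zfs get recordsize raidpool/vol01 raidpool/vol02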

I use the stat command to check the filesize, the blocksize, and the # of blocks.

I created a file that is exactly 512 bytes in size on /vol01 (the filesystem without LZJB compression).
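
Roughly how I created it (from memory; the exact command may have differed):

# dd if=/dev/zero of=/vol01/file.512 bs=512 count=1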

I run the following stat command:
stat --printf "%n %b %B %s %o\n" *

Here %n is the file name, %b is the number of blocks allocated, %B is the size in bytes of each block reported by %b, %s is the total file size in bytes, and %o is the I/O block size.

The number of blocks changes a few minutes after the file is created:

# stat --printf "%n %b %B %s %o\n" *
file.0 3 512 3 4096
file.512 1 512 512 4096
# stat --printf "%n %b %B %s %o\n" *
file.0 3 512 3 4096
file.512 1 512 512 4096
# stat --printf "%n %b %B %s %o\n" *
file.0 3 512 3 4096
file.512 3 512 512 4096
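
(I caught this simply by re-running the command by hand; running the same stat in a loop, something like the line below, shows the same behaviour.)

# while true; do stat --printf "%n %b %B %s %o\n" file.512; sleep 60; done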

Why does the # of blocks change a few minutes after the file is created? And why does a 512-byte file end up using 3 blocks when only 1 block should be needed?