Raidz displays incorrect size

So I just tried what somebody here suggested and set up my backup server with a raidz2 over 16 disks (4x4 groups)

zpool create tank raidz2 c7t0d0 c7t1d0 and so on.

zpool list showed the correct size of 5.4T (I did not use all the disks), but when I did a df -h, it showed me only half that size? Then I created a file with mkfile 10G test and ls -lh showed it as 10G. But looking at df -h again, it showed merely 1.0G used? Odd...
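For what it's worth, the two tools measure different things: zpool list reports the raw pool size with parity included, while df -h (and zfs list) report usable space, i.e. after raidz2's two parity disks per vdev are deducted. A back-of-envelope sketch, using purely illustrative figures of 12 x 500 GB disks in one raidz2 vdev (real numbers come out slightly lower still, because of metadata and reserved space):

```sh
# Rough raidz2 capacity arithmetic (a sketch, not exact ZFS accounting).
# Two disks' worth per raidz2 vdev goes to parity.
disks=12        # hypothetical: disks in the vdev
disk_gb=500     # hypothetical: size of each disk in GB
raw_gb=$(( disks * disk_gb ))            # roughly what `zpool list` shows
usable_gb=$(( (disks - 2) * disk_gb ))   # roughly what `df -h` / `zfs list` show
echo "raw=${raw_gb}G usable=${usable_gb}G"
```

So a raw-versus-usable gap on that order is expected for raidz2; it is not the pool misreporting.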

My gosh, I just added the remaining four drives to the pool with zpool add tank raidz2 c7t12d0 c7t13d0 c7t14d0 c7t15d0
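As an aside for anyone following along: since vdev additions cannot be undone, it can be worth doing a dry run first. zpool add accepts -n, which prints the resulting pool layout without changing anything (same device names as above):

```sh
# Dry run: show what the pool would look like, without modifying it
zpool add -n tank raidz2 c7t12d0 c7t13d0 c7t14d0 c7t15d0
# If the printed configuration looks right, repeat without -n
```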

It worked, but suddenly the machine died under my fingers, so to speak! My gosh, could somebody please tell me it did not crash because of a single broken disk or something like that? At the moment it responds to ping, but there is no ssh access.

My god, can somebody please tell me this is a happy little accident, and not zfs crashing from time to time? I have spent the last two months talking my colleagues and boss into Solaris and zfs, bought hardware worth 2k+, and if it's all screwed up now, I'll have to look for a new job, it seems...

F***!

Well, I tried an installation on a USB stick; maybe that's the problem. It shows some strange behaviour, like crashing when I boot off the USB stick as the first boot device. But when I set the CD-ROM as the first boot device and the stick as second, it boots just fine.

What hardware are you using?

Also, did you deploy your root disks with zfs or ufs? I'm guessing you have a second pool that you have configured your raidz on?

Can you show us the output of:
df -h
zpool list
zfs list -t filesystem

What Solaris release are you using (cat /etc/release)?
Were these crashes kernel panics?
Are crash dumps enabled (dumpadm)? If so, is there a crash dump to analyse (/var/crash/<hostname>/unix.X & vmcore.X)?
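Once the box is reachable again, the checks above boil down to something like the following (a sketch; this requires root on the Solaris machine itself; note dumpadm handles kernel crash dumps, while coreadm covers per-process core files):

```sh
# Is savecore enabled, and where do dumps land?
dumpadm
# Any unix.N / vmcore.N pairs from previous panics?
ls /var/crash/`hostname`
# If a dump exists, a first look with the kernel debugger, e.g.:
#   mdb -k unix.0 vmcore.0
```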

While supported, booting off a USB stick is AFAIK not common practice. I would recommend instead creating a mirror from two of your disks and installing Solaris on it.
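For the record, you can also convert a single-disk root pool to a mirror after installation by attaching a second disk. A sketch (the device names are placeholders; boot disks on this vintage of Solaris need an SMI label with the pool on slice 0):

```sh
# Attach a second disk to the existing root pool, turning it into a mirror
zpool attach rpool c7t0d0s0 c7t1d0s0
zpool status rpool     # wait for the resilver to finish before rebooting
# On x86, also install the boot loader on the new disk:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0
```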

Well, that would mean I lose two HDD bays, aka 1 terabyte, for an installation of 3G? Not a good deal...

And the machine is 40 miles away; it came back online (ping), but ssh doesn't work. I'll have to wait for our trainee to wake up and reboot it...

The system will likely use much more than 3 GB after a while, and I think that number does not take into account the swap space / dump area size.
A large rpool is also useful, as you won't be restricted in keeping snapshots and previous boot environments online. Moreover, nothing prevents you from creating other filesystems there for any purpose, so the 1 TB won't be lost.

In any case, a non-mirrored root pool is not recommended practice for a production server. If you insist on booting from USB sticks, use a couple of them and set up mirroring. I'm skeptical about the performance and lifespan of such a solution, though.

As you are concerned with the disk space available in the remaining pools, activating compression will increase their capacity and possibly their performance too. Deduplication is also coming and will improve these figures further.
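Enabling compression is a one-liner per dataset, and you can check what it is buying you afterwards (pool and dataset names here match the thread's tank pool; already-written data stays uncompressed, only new writes are affected):

```sh
# Turn on compression for the pool's top-level dataset;
# child datasets inherit the setting
zfs set compression=on tank
# Later, see the achieved ratio
zfs get compressratio tank
```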

Installed once more, this time on two disks, zfs mirror.

Now it silently crashes right after loading the Areca drivers...

I'm gonna set it up on a single disk now, and hopefully it will work for as long as I stay at this company. If it doesn't work again, I'll go back to hardware RAID.