Cannot use filesystem while sending a snapshot

I've got Solaris 11 Express installed on my machine. I have created a raidz2 zpool named shares and a simple single-disk zpool named backup, and I have written a script that sends a daily snapshot of shares to backup.

I use these commands

zfs snapshot shares@DDMMRRRRHHMM
zfs send -i shares@.... shares@DDMMRRRRHHMM | zfs receive -F backup
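For context, the daily script is structured roughly like this (the previous-snapshot name here is just a placeholder; this sketch only prints the commands so the logic can be checked without root or real pools):

```shell
#!/bin/bash
# Rough sketch of the daily script (pool names as above; the timestamp
# format is DDMMYYYYHHMM as in the post, and the previous-snapshot name
# is a placeholder). It only echoes the commands instead of running them.
POOL=shares
DEST=backup
PREV="${POOL}@010120111200"     # placeholder: last snapshot already sent
STAMP=$(date +%d%m%Y%H%M)       # DDMMYYYYHHMM
NEW="${POOL}@${STAMP}"

echo "zfs snapshot ${NEW}"
echo "zfs send -i ${PREV} ${NEW} | zfs receive -F ${DEST}"
```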

It works quite well, except for one important issue: if I make a change to the shares filesystem while the snapshot is being sent, the snapshot is not sent properly.

I currently unmount the filesystem before the transfer and mount it again after the transfer completes, but that is unfortunately not an acceptable solution.

Thanks in advance for all replies,
Dusan

What if you remount it read-only? It will still be available but shouldn't cause trouble for the snapshot.

Thanks for your reply. There is an SVN repository on the filesystem that I might need to work with, so I'd prefer to leave it mounted read/write if that's possible.

Please elaborate on what exactly goes wrong.

Thanks for your reply.

Theoretically, there should be no reason for the snapshot being sent to get corrupted. Even though the underlying filesystem has been modified, the snapshot still contains all the information about what the filesystem looked like before.

Moreover, being unable to modify the system while the snapshot is being sent (which can take quite a long time if there are many changes) is quite annoying. That's why I want to find out whether there is a solution to this problem.

It would help if you described what the problem is in the first place, i.e. what makes you conclude the snapshot you sent is corrupted.

I had 400 GB of data on the shares filesystem, made an initial snapshot and sent it to the backup disk. I watched the free space decreasing on the backup filesystem and thought to myself, "Alright, that seems OK; the snapshot being sent should take about 400 GB of the backup filesystem's space once the transfer completes."

But then I made a change to the SVN repository (which resides on shares), and after that (even though the complete snapshot could not have finished transferring by then) the "zfs list backup" command showed only a tiny amount of space consumed on backup.

Moreover, when I then tried to send another snapshot incrementally (i.e. with the -i option), I got an error telling me that the incremental source snapshot did not match the most recent snapshot on the backup filesystem.
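(For illustration: the source of an incremental send has to be the most recent snapshot on the receiving side. A check along these lines would have shown the mismatch before the send; the snapshot names are simulated stand-ins for the output of "zfs list -H -t snapshot -o name backup".)

```shell
#!/bin/bash
# Simulated destination snapshot list (hypothetical names), standing in
# for: zfs list -H -t snapshot -o name backup
dest_snaps="backup@010120111200"

# The snapshot the incremental send would start from. Because the earlier
# receive failed, it never made it to backup, so this check fails.
base="020120111200"
if printf '%s\n' "$dest_snaps" | grep -q "@${base}\$"; then
    echo "base snapshot present on backup"
else
    echo "base snapshot missing on backup - incremental send will fail"
fi
```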

Though limited by my English skills, I did my best to describe the situation. I hope this helps. Dusan

There is no evidence in your description that any snapshot is corrupted.
Can you post a detailed list of commands that would reproduce the issue?

Thank you for helping me.

What made me believe the snapshot did not transfer correctly to the backup disk was that far less space was used on the backup filesystem after the transfer than would be needed to hold the snapshot (i.e. about 400 GB).

I tried to reproduce the error (i.e. I created and sent a snapshot and then made a commit to the SVN repository), but this time the transfer is continuing and it looks like it's going to succeed.

What could have caused the error before, I wonder; but that's probably crystal-ball gazing.

Dusan

ZFS doesn't do things halfway: either a zfs send completes with no errors, or it fails and no snapshot is created on the receiving side. Perhaps there was a network incident that broke the transfer and you failed to notice the error message.
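One way to make such a failure impossible to miss is to check the pipeline's exit status in the script. A minimal sketch using bash's pipefail option ("false" simulates a zfs send dying mid-stream, "cat" stands in for zfs receive):

```shell
#!/bin/bash
# With pipefail, a failure anywhere in the send | receive pipeline makes
# the whole pipeline return non-zero, so a broken transfer is reported
# instead of passing silently.
set -o pipefail

# "false" stands in for a zfs send that is interrupted mid-transfer.
if false | cat; then
    status="transfer ok"
else
    status="transfer failed"
fi
echo "$status"
```

In a real script, the "transfer failed" branch would skip updating any record of the last successfully sent snapshot, so the next run retries from the same base.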

That's it, probably. Thanks for your help.