ZFS LDOM problem on Solaris 10

Apologies if this is the wrong forum.

I have some LDOMs running on a SPARC server. I copied the disk0 file from one chassis over to another, stopped the LDOM on the source system, and started it on the second one. All fine. I shut it down and flipped back. We then did a fair bit of work on the source system, and I wanted to copy the file over again, this time making the copy final and destroying the domain on the source chassis.

The method I used was to take a snapshot on systemA, cd into the .zfs/snapshot area, and then scp the file to the destination machine, overwriting the file that was there from that initial copy of the LDOM. It all seemed to work, but the destination machine didn't have any of the changes that had been made.
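
Roughly the following, with illustrative pool/dataset names (mine differ):

zfs snapshot bigpool/ldoms@copy1
cd /bigpool/ldoms/.zfs/snapshot/copy1
scp disk0 systemB:/bigpool/ldoms/disk0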

I started all over, and this time ran a checksum against the source and destination files after the scp phase. They matched. Again, when the destination machine was brought up, it didn't have any of the changes.
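
For the record, the comparison was just a plain sum of the image on each side (paths illustrative):

sum /bigpool/ldoms/disk0

Something like digest -a md5 /bigpool/ldoms/disk0 would be a stronger check.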

This time, I edited some files on the destination chassis, then copied it all across again, changing the snapshot name on the source side. Again the checksums matched, but this time, when the destination machine came up, although it didn't have the changes from the source, it did have the few changes I had made on the destination side. So it looks very much like the LDOM is somehow keeping hold of the old disk (file). Is this possible? I don't get any errors when I rm the disk0 file, or overwrite it, or anything. Do I need to destroy the whole LDOM to release the file that backs the disk?

Any help appreciated and I hope my post makes sense.

Have you tried stopping the LDOM on the source machine and using zfs send / receive to the destination machine?

Regards
Peasant.

Not yet. Can you send just a single file from a pool? Apologies, but I've not had any ZFS training yet.

It feels like the receiving side is at fault, because it remembered stuff that was never on the source side.

Worth a go though - thank you.

You can send a specified filesystem or volume; zfs send works at the dataset level, not on individual files. BTW, I'm curious: what is the disk performance like inside your LDOM?
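
A minimal sketch, assuming the disk image lives in a dataset called bigpool/ldoms (adjust names to suit):

zfs snapshot bigpool/ldoms@xfer
zfs send bigpool/ldoms@xfer | ssh systemB zfs receive -F bigpool/ldoms

The -F on the receive rolls the destination dataset back so the incoming stream can replace it.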

I stopped the source LDOM and only then took the snapshot, copied the file across, and the problem described persisted. I had become convinced the issue was on the destination side, but the suggestion brought me closer to the idea that the old file needed to be released in some way.

My problem is now solved. I had to unbind the LDOM and then bind it again. As soon as this was done, it forgot about the old disk image and flipped to the updated file. I guess the VDS was keeping hold of the old disk0 backend. I suppose it's like other bits of Solaris where something keeps an unlinked file's inode open, so the old data stays reachable until the handle is released.
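
For anyone hitting the same thing, the sequence on the destination chassis was essentially this (domain name illustrative):

ldm stop mydom
ldm unbind mydom
ldm bind mydom
ldm start mydom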

Thanks very much for the ideas - appreciated.

The peak I/O was about 45M per 5-second interval, as seen with

zpool iostat bigpool 5 720

(5-second samples, 720 of them, i.e. one hour). That was normal running plus the scp activity at the same time.