Solaris 11.4 Backup & Restore

Hello
I'm now testing Solaris 11.4 recovery and backup.

The hardware I'm using is an M10-1, and I'm currently testing with an LDOM configuration.

I think the backup part is done, but I don't know how to handle the recovery.

Below is my backup script.

#!/bin/ksh
export LANG=C

# Make sure the save pool is not in use, then destroy it so it can be rebuilt cleanly.
zpool export spool
zpool import spool
zpool destroy spool

# Remove any previous backup snapshot and take a fresh recursive one of the root pool.
zfs destroy -r rpool@backup
zfs snapshot -r rpool@backup
#zfs destroy rpool/dump@backup
#zfs destroy rpool/swap@backup

# Re-create the save pool on the second disk and copy the root pool into it.
zpool create -f spool c1d1
#zfs send -Rv rpool@backup | zfs receive -vF spool
zfs send -Rv rpool@backup | zfs receive -Fduv spool

# Set the boot environment and re-create dump/swap volumes instead of copying them.
#bootadm install-bootloader -P spool
zpool set bootfs=spool/ROOT/solaris-1 spool
zfs create -o volblocksize=1m -V 64g spool/dump
zfs create -o volblocksize=1m -V 64g spool/swap

# Clean up the snapshot and release the save pool.
zfs destroy -r rpool@backup
zpool export spool

exit 0
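
For reference, here is a rough sketch of how the received pool can be sanity-checked before trying to boot from it. It assumes the same pool name (spool) as the script above; the checks are illustrative, not exhaustive.

#!/bin/ksh
# Sanity checks on the save pool before attempting to boot from it.
# Assumes the same pool name (spool) as in the backup script above.
export LANG=C

# Import under an alternate root so nothing mounts over the running system.
zpool import -R /mnt spool

# Confirm the boot environment was received and bootfs is set.
zfs list -r spool/ROOT
zpool get bootfs spool

# Release the pool again.
zpool export spool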

After backing up to a different disk and installing the boot block, I try to boot from the backed-up device and get the error shown below.

NOTICE: Can not read the pool label from '/virtual-devices@100/channel-devices@200/disk@4:a'
NOTICE: spa_import_rootpool: error 5
Cannot mount root on /virtual-devices@100/channel-devices@200/disk@4:a fstype zfs

panic[cpu0]/thread=20012000: vfs_mountroot: cannot mount root

Warning - stack not written to the dumpbuf
000000002000f530 genunix:vfs_mountroot+494 (20887800, 200, 20887800, 208da000, 1000000000, 208d9c00)
  %l0-3: 0000000000000001 000000002087e800 00000000121b4978 0000000020129400
  %l4-7: 0000000020129400 00000000208a8c00 00000000208d9c00 0000000000000600
000000002000f9f0 genunix:main+228 (208a4800, 20675268, 208a4988, 1000, 206741d8, 70002000)
  %l0-3: 0000000020674000 0000000020010000 000000001007eb48 000000001007e800
  %l4-7: 000000002012d400 0000000010a33838 000000002012d688 0000000000000000

Deferred dump not available because:
	deferred dump not online (state 0)
dump subsystem not initialised
rebooting...
Resetting...

I think it's because the backup pool is not named rpool.

Please let me know what I need to do to boot from the backed-up device.

Thank you
Best Regards

I'm only vaguely familiar with Solaris 11, so I may be completely wrong. But I have some questions / comments about what you've written.

I don't see what LDOM has to do with this, other than possibly indicating that you're using SPARC virtualization for testing. It looks like your devices may be virtual disks served by a VDS rather than physical hardware. LDOMs would be a good way to test this.

It looks to me like spool is being used as a save pool to hold the backup: you're doing a zfs send from the root pool (rpool) into a zfs receive on the save pool (spool), except for the swap and dump zvols, which are being re-created in the save pool (spool).

I'm not familiar with how you install boot blocks for ZFS.

Please show the output of zpool status so that we can hopefully see the correlation between the root pool (rpool), the save pool (spool), and the virtual device listed in the panic (/virtual-devices@100/channel-devices@200/disk@4:a).

Assuming LDOM w/ VDS, I'd want to know more about how the LDOM is configured, particularly its boot configuration / device parameters and how the VDS relates to rpool and spool.
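
Something along these lines, run from the control domain, should show that. I'm assuming the control domain is named primary and the guest is named ldg1 -- substitute your own domain names.

#!/bin/ksh
# Gather LDOM/VDS details from the control domain.
# "primary" and "ldg1" are placeholder names for the control domain and
# the guest domain -- use your own.

# Virtual disk services and the backend devices they export.
ldm list-services primary

# The guest's virtual disks and which VDS volumes they map to.
ldm list -o disk ldg1

# Full binding details for the guest domain.
ldm list-bindings ldg1

# The guest's OBP boot-device setting.
ldm list-variable boot-device ldg1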

--
Grant. . . .

Thank you for your response.
I kept testing on my own and found a solution.

First, take a recursive snapshot of the existing rpool with zfs and send it to the backup pool.

Then, assuming the original disk has failed, boot from the backed-up disk.

When you boot from the backup disk, the system drops into maintenance mode because the pool is named bpool, not rpool.

Log in, get the replacement disk recognized, and create a fresh rpool on it.

Then send the current bpool back to the new rpool with zfs and install the boot block on the new disk.

After that, booting from the new disk brings rpool up without problems.
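
Roughly, the restore sequence looks like the sketch below. This is only an outline of what worked for me; it assumes the backup pool is named bpool, the replacement disk is c1d0, and the boot environment is solaris-1 -- adjust the names to your own setup.

#!/bin/ksh
# Restore sketch: run after booting from the backup disk (pool "bpool").
# Pool, disk, and BE names (bpool, c1d0, solaris-1) are examples only.
export LANG=C

# Create a fresh root pool on the replacement disk.
zpool create -f rpool c1d0

# Snapshot the running backup pool and copy it back to the new rpool.
zfs snapshot -r bpool@restore
zfs send -Rv bpool@restore | zfs receive -Fduv rpool

# Point the new pool at the boot environment and install the boot loader.
zpool set bootfs=rpool/ROOT/solaris-1 rpool
bootadm install-bootloader -P rpool

# Re-create dump and swap volumes rather than copying them.
zfs create -o volblocksize=1m -V 64g rpool/dump
zfs create -o volblocksize=1m -V 64g rpool/swap

# Clean up the snapshot.
zfs destroy -r bpool@restore

exit 0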
