Ok jlliagre,
I am so glad you haven't deserted me now that we have come to a crunch. Let me list as many of the commands as I can recall that were used to carry out the backup:
# zfs list -f rpool
# zfs snapshot -r rpool@001
# zpool create -f -R backups c0t0d0s2 (USB)
# zfs create -f backups/usbdrive
# zpool import -R backups
# zfs list -r backups
# zpool status -R -v backups
# zfs send -R -p rpool > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1
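For clarity, the sequence I believe I was aiming for looks roughly like this; the mountpoint and exact options are my best guess from memory, so please treat it as a sketch rather than an exact transcript:
# zfs snapshot -r rpool@001                                  # recursive snapshot of everything under rpool
# zpool create -f backups c0t0d0s2                           # pool on the USB disk
# zfs create -o mountpoint=/mnt/usbdrive backups/usbdrive    # filesystem to hold the dump (mountpoint assumed)
# zfs send -R rpool@001 > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1   # recursive stream of the @001 snapshots into one file
# zfs list -r backups                                        # check the datasets on the USB pool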
Below is the output from an alternative multi-boot environment (servername_srs_1) that I have ended up booting into on the same box; it is the only one I can boot into successfully, yet I still couldn't work out what its root password is:
-bash-3.2$ hostname
servername_sr02
-bash-3.2$ uname -a
SunOS servername_sr02 5.10 Generic_147441-09 i86pc i386 i86pc
-bash-3.2$ df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/servername_srs_1 228G 6.4G 204G 4% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 8.2G 1.0M 8.2G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
rpool/ROOT/servername_srs_1/var 228G 1.6G 204G 1% /var
swap 8.2G 1.9M 8.2G 1% /tmp
swap 8.2G 376K 8.2G 1% /var/run
rpool/export 228G 32K 204G 1% /export
rpool/export/home 228G 4.6M 204G 1% /export/home
rpool 228G 49K 204G 1% /rpool
/vol/dev/dsk/c0t0d0/sol_10_811_x86
-bash-3.2$ /sbin/zpool status -v rpool
pool: rpool
state: ONLINE
scan: resilvered 9.59G in 0h12m with 0 errors on Wed Mar 14 12:09:12 2012
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
-bash-3.2$ ./zfs list -r rpool
NAME USED AVAIL REFER MOUNTPOINT
rpool 24.4G 204G 49K /rpool
rpool@001 30K - 49K -
rpool/ROOT 10.7G 204G 31K legacy
rpool/ROOT@001 0 - 31K -
rpool/ROOT/10 14.1M 204G 4.37G /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/10@001 299K - 4.37G -
rpool/ROOT/10/var 6.50M 204G 1011M /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/10/var@001 1K - 1011M -
rpool/ROOT/base_srs_install 38.2M 204G 4.48G /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/base_srs_install@001 299K - 4.48G -
rpool/ROOT/base_srs_install/var 14.4M 204G 1.47G /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/base_srs_install/var@001 1K - 1.47G -
rpool/ROOT/servername_srs_1 189M 204G 6.37G /
rpool/ROOT/servername_srs_1@001 3.29M - 6.37G -
rpool/ROOT/servername_srs_1/var 112M 204G 1.56G /var
rpool/ROOT/servername_srs_1/var@001 10.5M - 1.48G -
rpool/ROOT/servername_srs_2 10.5G 204G 6.47G /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/servername_srs_2@base_srs_install 75.0M - 4.37G -
rpool/ROOT/servername_srs_2@servername_srs_1 71.3M - 4.47G -
rpool/ROOT/servername_srs_2@servername_srs_2 67.2M - 6.37G -
rpool/ROOT/servername_srs_2@001 13.2M - 6.38G -
rpool/ROOT/servername_srs_2/var 3.74G 204G 3.33G /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/servername_srs_2/var@base_srs_install 23.8M - 1011M -
rpool/ROOT/servername_srs_2/var@servername_srs_1 14.8M - 1.46G -
rpool/ROOT/servername_srs_2/var@servername_srs_2 34.7M - 1.48G -
rpool/ROOT/servername_srs_2/var@001 3.36M - 3.33G -
rpool/dump 1.00G 204G 1.00G -
rpool/dump@001 16K - 1.00G -
rpool/export 4.83M 204G 32K /export
rpool/export@001 0 - 32K -
rpool/export/home 4.80M 204G 4.61M /export/home
rpool/export/home@001 192K - 4.61M -
rpool/swap 12.7G 213G 4.16G -
rpool/swap@001 0 - 4.16G -
-bash-3.2$ ls -lt /export
total 3
drwxr-xr-x 3 root root 3 Nov 16 00:08 home
-bash-3.2$ cd /export
-bash-3.2$ ls
home
-bash-3.2$ ls -lt
total 3
drwxr-xr-x 3 root root 3 Nov 16 00:08 home
-bash-3.2$ cd home
-bash-3.2$ ls -lt
total 3
drwxr-xr-x 15 support sys 23 Nov 16 19:02 support
-bash-3.2$ cd support
-bash-3.2$ ls -lt
total 6
drwxr-xr-x 2 support sys 3 Mar 10 2012 Desktop
drwxr-xr-x 2 support sys 2 Mar 10 2012 Documents
I have taken the following action in an attempt to resolve the mount issue of '/export', by removing the 'home' directory left under the /export mount point, which is preventing '/export' from mounting for servername_sr02 (a rough sketch of the equivalent commands appears after the error output below):
- Boot up in single-user mode and log in as root
- Unmount /export/home
- Check the /export directory (mount point)
- Remove the /export/home directory
- Reboot, but then found another set of similar errors, as follows:
cannot mount '/export': directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Nov 15 10:50:17 svc.startd[10]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
svc.startd[10]: system/filesystem/local:default failed fatally: transitioned to maintenance
ERROR: svc:/system/filesystem/minimal:default failed to mount /var/run (see 'svcs -x' for details)
Nov 18 00:44:06 svc.startd[10]: svc:/system/filesystem/minimal:default: Method "/lib/svc/method/fs-minimal" failed with exit status 95.
Nov 18 00:44:06 svc.startd[10]: svc:/system/filesystem/minimal:default: failed fatally: transitioned to maintenance...
Requesting System Maintenance Mode
Console login service(s) cannot run
Root password for system maintenance (Control-d to bypass)
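For reference, this is roughly what I ran in single-user mode, plus the clean-up I believe should follow; the dataset names are taken from the zfs list output above, so again a sketch from memory rather than an exact transcript:
# zfs unmount rpool/export/home                 # get everything off /export first
# zfs unmount rpool/export
# ls -lA /export                                # any plain directory left behind here blocks the mount
# rmdir /export/home                            # rmdir only succeeds if the leftover directory is empty
# zfs mount -a                                  # remount; this is the step that was failing at boot
# svcadm clear svc:/system/filesystem/local     # then clear the maintenance state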
( 1 ) Why am I encountering yet another svc error when booting into servername_sr02? I am still not clear whether the same rpool is used by both of the following boot-up filesystems/partitions in GRUB (the checks I can run to confirm are listed after the GRUB entries):
title servername_srs_1
findroot (BE_servername_srs_1,0,a)
bootfs rpool/ROOT/servername_srs_1
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title servername_srs_1 failsafe
findroot (BE_servername_srs_1,0,a)
bootfs rpool/ROOT/servername_srs_1
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
title servername_srs_2
findroot (BE_servername_srs_2,0,a)
bootfs rpool/ROOT/servername_srs_2
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title servername_srs_2 failsafe
findroot (BE_servername_srs_2,0,a)
bootfs rpool/ROOT/servername_srs_2
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
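These are the checks I can run from servername_srs_1 if that helps; my own reading is that both bootfs entries point into the one rpool, and lustatus will only be there if Live Upgrade is installed:
-bash-3.2$ /sbin/zpool list                   # only one pool should be reported
-bash-3.2$ /sbin/zfs list -r rpool/ROOT       # both servername_srs_1 and servername_srs_2 sit under rpool/ROOT
-bash-3.2$ /usr/sbin/lustatus                 # Live Upgrade's view of the boot environments, if available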
( 2 ) Can you confirm whether I am still using the same rpool for both boot-up filesystems? They appear to have a different root '/' filesystem, which would explain why I am not able to su to root with the original password on the failed servername_srs_2.
( 3 ) I wouldn't mind staying in the current servername_srs_1 boot-up filesystem, provided that I can get back in as root; what I am thinking of trying is sketched below.
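What I am thinking of trying, unless you advise otherwise, is the usual failsafe recovery; the BE name comes from the GRUB entries above and this is only a sketch:
(boot the "servername_srs_1 failsafe" GRUB entry and accept its offer to mount the BE on /a)
# TERM=vt100; export TERM                      # so vi behaves in the miniroot
# vi /a/etc/shadow                             # blank out the encrypted password field in the root entry
# cd /; umount /a                              # unmount the BE before rebooting
# init 6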
( 4 ) Also, I can no longer boot from the Solaris 10 x86 installation DVD after having inadvertently reset the boot firmware settings. Now the system just prints ........................... and goes no further.
Hope this update hasn't confused you completely.
As always, thank you so much for sticking around,
George