How to backup ZFS filesystems to files on USB drive?

The -R option here is dubious. Its only effect is to have the backups pool mounted on /mnt instead of its default location /backups. That might lead to confusion, as /mnt is traditionally used for temporary mounts.
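
For illustration, and assuming the pool sits directly on the USB disk seen later as c2t0d0 (a guess on my part), the two forms compare as follows:

zpool create backups c2t0d0           # file systems mount under /backups
zpool create -R /mnt backups c2t0d0   # same pool, mounted under /mnt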

Granted. This is surprising, as one would expect the first drive on the first controller to be an internal disk, but why not.

It looks like you created a pool on a device that already contained a mounted file system. This usually leads to disaster.
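
Before creating a pool on a disk, it is worth checking that nothing is mounted from it and that it does not already belong to a pool, for instance (assuming the USB disk is c2t0d0, as in the format output further down):

df -k | grep c2t0d0
zpool status | grep c2t0d0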

I'm not sure what you mean. ttyb is for serial console access, not ssh. In single-user mode, the ssh service is disabled.

Back to the commands you entered:

You created:

  • a recursive snapshot named 001 of the root pool
  • a zpool on your USB disk, while that disk might already have been in use by something else.
  • a file system named usbdrive on this pool

Then, you imported this pool.
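
If I read your description correctly, the sequence was roughly the following (a reconstruction on my part; the backups pool name and the c2t0d0 device are guesses based on this thread):

zfs snapshot -r rpool@001             # recursive snapshot of the root pool
zpool create -R /mnt backups c2t0d0   # new pool on the USB disk
zfs create backups/usbdrive           # ends up mounted as /mnt/usbdrive
zpool import backups                  # the import in question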

Can you explain why, as it should already have been imported?

What did the "zfs list" and "zpool status" commands report?

The "zfs send" command syntax is incorrect: you cannot send a file system, only a snapshot, and the -r option is not supported here, so I'm assuming you actually used this command:

zfs send -R -p rpool@001 > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1
  • Did you check that the /mnt/usbdrive/servername_sr02_rpool@001.snapshot1 file was created? What was its size?

  • What did you do and what happened after that?
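
For the record, something like this would answer both questions (path as used above):

ls -l /mnt/usbdrive/servername_sr02_rpool@001.snapshot1
df -h /mnt/usbdrive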


Hi jlliagre,

The following backup command took days but still did not complete:

zfs send -R -p rpool@001 > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1

I had to reset the server; the output file was around 16GB at that point. It is being looked into by another person, so I will let you know the outcome.

Thanks,

George

---------- Post updated at 07:36 PM ---------- Previous update was at 06:15 PM ----------

Hi jlliagre,

I have successfully booted the same servername_sr02_1 from another disk selected in the BIOS, thanks to a stroke of luck. As a result, I am back to square one and would like to do a zpool backup by creating a single ZFS dump file, so can you please detail the instructions: syntax, order of switches, and so on. Below are the disk and file system listings in the meantime:

 
root@servernamesr02 # uname -a
SunOS servernamesr02 5.10 Generic_147441-09 i86pc i386 i86pc
root@servernamesr02 # df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/servername_srs_2
                       228G   6.5G   204G     4%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   8.6G   1.0M   8.6G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
                       210G   6.5G   204G     4%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
rpool/ROOT/servername_srs_2/var
                       228G   3.3G   204G     2%    /var
swap                   8.6G   1.6M   8.6G     1%    /tmp
swap                   8.6G   424K   8.6G     1%    /var/run
rpool/export           228G    32K   204G     1%    /export
rpool/export/home      228G   4.6M   204G     1%    /export/home
rpool                  228G    49K   204G     1%    /rpool
root@servernamesr02 # zpool status -v rpool
  pool: rpool
 state: ONLINE
 scan: resilvered 9.59G in 0h12m with 0 errors on Wed Mar 14 12:09:12 2012
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
errors: No known data errors
root@servernamesr02 # zfs list -r rpool
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool                                             24.6G   204G    49K  /rpool
rpool@001                                           30K      -    49K  -
rpool/ROOT                                        11.0G   204G    31K  legacy
rpool/ROOT@001                                        0      -    31K  -
rpool/ROOT/10                                     14.1M   204G  4.37G  /tmp/.liveupgrade.442.548/lu_zone_update.548
rpool/ROOT/10@001                                  299K      -  4.37G  -
rpool/ROOT/10/var                                 6.50M   204G  1011M  /tmp/.liveupgrade.442.548/lu_zone_update.548/var
rpool/ROOT/10/var@001                                1K      -  1011M  -
rpool/ROOT/base_srs_install                       38.2M   204G  4.48G  /tmp/.liveupgrade.442.548/lu_zone_update.548
rpool/ROOT/base_srs_install@001                    299K      -  4.48G  -
rpool/ROOT/base_srs_install/var                   14.4M   204G  1.47G  /tmp/.liveupgrade.442.548/lu_zone_update.548/var
rpool/ROOT/base_srs_install/var@001                  1K      -  1.47G  -
rpool/ROOT/servername_srs_1                        384M   204G  6.37G  /tmp/.liveupgrade.442.548/lu_zone_update.548
rpool/ROOT/servername_srs_1@001                   62.2M      -  6.37G  -
rpool/ROOT/servername_srs_1/var                    244M   204G  1.69G  /tmp/.liveupgrade.442.548/lu_zone_update.548/var
rpool/ROOT/servername_srs_1/var@001               10.5M      -  1.48G  -
rpool/ROOT/servername_srs_2                       10.5G   204G  6.47G  /
rpool/ROOT/servername_srs_2@base_srs_install      75.0M      -  4.37G  -
rpool/ROOT/servername_srs_2@servername_srs_1      71.3M      -  4.47G  -
rpool/ROOT/servername_srs_2@servername_srs_2      67.2M      -  6.37G  -
rpool/ROOT/servername_srs_2@001                   76.7M      -  6.38G  -
rpool/ROOT/servername_srs_2/var                   3.75G   204G  3.32G  /var
rpool/ROOT/servername_srs_2/var@base_srs_install  23.8M      -  1011M  -
rpool/ROOT/servername_srs_2/var@servername_srs_1  14.8M      -  1.46G  -
rpool/ROOT/servername_srs_2/var@servername_srs_2  34.7M      -  1.48G  -
rpool/ROOT/servername_srs_2/var@001               25.3M      -  3.33G  -
rpool/dump                                        1.00G   204G  1.00G  -
rpool/dump@001                                      16K      -  1.00G  -
rpool/export                                      4.84M   204G    32K  /export
rpool/export@001                                    18K      -    32K  -
rpool/export/home                                 4.80M   204G  4.61M  /export/home
rpool/export/home@001                              194K      -  4.61M  -
rpool/swap                                        12.7G   212G  4.16G  -
rpool/swap@001                                        0      -  4.16G  -
root@servernamesr02 # format -e
Searching for disks...done
 
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 30397 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,534b@5/disk@0,0
       1. c1t1d0 <DEFAULT cyl 30397 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,534b@5/disk@1,0
       2. c2t0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>  usb-ext
          /pci@0,0/pci108e,534b@2,1/storage@7/disk@0,0
Specify disk (enter its number):2
Specify disk (enter its number)[0]: 2
selecting c2t0d0: usb-ext
[disk formatted]
partition> pr
Volume:  usb-ext
Current partition table (original):
Total disk cylinders available: 60797 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)              0
  1 unassigned    wm       0                0         (0/0/0)              0
  2       root    wm       1 - 60796        1.82TB    (60796/0/0) 3906750960
  3 unassigned    wm       0                0         (0/0/0)              0
  4 unassigned    wm       0                0         (0/0/0)              0
  5 unassigned    wm       0                0         (0/0/0)              0
  6 unassigned    wm       0                0         (0/0/0)              0
  7 unassigned    wm       0                0         (0/0/0)              0
  8       boot    wu       0 -     0       31.38MB    (1/0/0)          64260

Many thanks again,
George

Taking days to write 16 GB looks like a USB 1.1 transfer rate. I would recommend using something faster. You might also destroy the swap snapshot (4.16 GB), as there is no point in backing it up.
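
Should you decide to remove it, that would be:

zfs destroy rpool/swap@001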

Finally, if you really want to keep working out a ZFS backup/restore procedure for your system despite the bad experience you had, please experiment on a test system running the same Solaris release, one you can rebuild from scratch, until you are familiar and comfortable enough with the process.
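
For the record, on such a test box the sequence discussed in this thread boils down to something like this (same snapshot name and target file as above, assuming the USB pool is already mounted under /mnt/usbdrive):

zfs snapshot -r rpool@001
zfs send -R -p rpool@001 > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1
ls -l /mnt/usbdrive/servername_sr02_rpool@001.snapshot1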

You might also have a look at this document:
ZFS Troubleshooting Guide - Siwiki