How to backup ZFS filesystems to files on USB drive?

Dear Solaris 10 Experts,
I need to carry out a manual backup of all zpool/zfs filesystems on a Solaris 10 x86 server in order to port it onto VMware hardware, but I don't know how
to do it. Past exercises have been smooth using ufsdump & ufsrestore for small standalone servers. Below are the ZFS filesystems that need to be backed up:

 
rpool/ROOT/ibm_srs_2 228G 6.6G 208G 4% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 7.1G 1.0M 7.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap2.so.1
215G 6.6G 208G 4% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
rpool/ROOT/ibm_srs_2/var
228G 3.4G 208G 2% /var
swap 7.1G 2.0M 7.1G 1% /tmp
swap 7.1G 828K 7.1G 1% /var/run
rpool/export 228G 32K 208G 1% /export
rpool/export/home 228G 4.6M 208G 1% /export/home
rpool 228G 49K 208G 1% /rpool

I am very new to ZFS and have been Googling, but could not find simple, straightforward commands to back up ZFS filesystems into files on an external USB
drive, to be transferred to the VMware server for hardware migration purposes.
Your advice / referral would be very much appreciated.
Many thanks,
George

Sending and Receiving ZFS Data - Oracle Solaris ZFS Administration Guide

1 Like

Thank you, Soulman, for offering your referral.

What is the zfs send syntax to back up the following filesystems, followed by compression:

rpool/ROOT/ibm_srs_2 228G 6.6G 208G 4% /
zfs send rpool/ROOT/ibm_srs_2 | gzip > rpool_ROOT_ibm_srs_2.gz?

/usr/lib/libc/libc_hwcap2.so.1 215G 6.6G 208G 4% /lib/libc.so.1
zfs send /usr/lib/libc/libc_hwcap2.so.1 | gzip > usr_lib_libc_libc_hwcap2.so.1.gz?

rpool/ROOT/ibm_srs_2/var     228G 3.4G 208G 2% /var
zfs send rpool/ROOT/ibm_srs_2/var | gzip > rpool_ROOT_ibm_srs_2_var.gz?

rpool/export 228G 32K 208G 1% /export
zfs send rpool/export | gzip > rpool_export.gz?
 
rpool/export/home 228G 4.6M 208G 1% /export/home
zfs send rpool/export/home | gzip > rpool_export_home.gz?

rpool 228G 49K 208G 1% /rpool
zfs send rpool | gzip > rpool.gz?

Thanks again,

George

Well, I don't understand the question?! As far as I can see, the provided document contains all the needed information?!

Sending and Receiving ZFS Data - Oracle Solaris ZFS Administration Guide

1 Like

Some comments:

No need to back up libc_hwcap2.so.1, which isn't a ZFS file system anyway.

You need to create a snapshot before sending datasets.

Instead of sending each file system separately, send recursively from the top dataset (and create a recursive snapshot to ensure consistency).

Instead of sending to a local file, send the stream to another machine where you run "zfs receive". This will guarantee the backup wasn't corrupted during the transport.

If you want to compress your data, use ZFS's own compression capabilities instead of relying on an external gzip command.

If you are short on network bandwidth, you might compress the datastream before sending it but you'll need to uncompress it on the other side before receiving it.

Use incremental send/receive for subsequent backups.
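
Put together, a minimal sketch of that approach could look like the following (a sketch only: "backuphost" and the receiving dataset "backuppool/srv" are placeholder names, and the receiving pool is assumed to already exist on the other machine):

# Create one recursive, consistent snapshot of the whole pool
zfs snapshot -r rpool@backup1

# On the receiving machine, create a compressed destination dataset so that
# ZFS compression replaces an external gzip in the pipe
# (run on backuphost): zfs create -o compression=on backuppool/srv

# Full recursive send, piped straight into zfs receive on the other side;
# -u (where the Solaris 10 release supports it) keeps the received
# file systems unmounted so their mount points don't collide
zfs send -R rpool@backup1 | ssh backuphost zfs receive -Fdu backuppool/srv

# Subsequent backups: take a new recursive snapshot and send only the delta
zfs snapshot -r rpool@backup2
zfs send -R -i @backup1 rpool@backup2 | ssh backuphost zfs receive -Fdu backuppool/srv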

1 Like

Hi jlliagre,

Thanks for your general advice but I would very much appreciate for more specific commands. Below are the things that I have tried but still a long way to getting a proper backup onto USB (UFS filesystem) currently, while in single user mode:

# Create recursive snapshot of local zpool called rpool
zfs snapshot -r rpool@001
# Want to create an archived zfs backup file with compression on USB
# drive mounted on mnt folder
zfs send -p rpool@001 > /mnt/hostname_rpool@001.snapshot1

This returns the prompt immediately, with only a tiny file generated. An alternative solution would be to use the following command to receive the snapshot onto the USB drive with zfs receive:

zfs send -p rpool@001 | zfs recv -d mnt

However, do I need to create a zpool on the USB drive first? How do I do that, given the drive is currently in UFS format? Don't tell me that I need to slice up the USB drive or re-create it as a ZFS filesystem.

ZFS backup is much more complex than ufsdump/ufsrestore was in the past. I simply want a physical backup file of everything under rpool that can be transferred to another generic Solaris 10 VM and restored there.

I very much appreciate your detailed comments & patience.

George

You are missing the zfs send -R option.

Have a look at the ZFS administration guide for a root pool backup and restore example.
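
The guide's root pool example boils down to roughly the following (a hedged outline only; the snapshot name and file path are placeholders, and in this case the file could live on the USB drive mounted at /mnt):

# Recursive snapshot of the whole root pool
zfs snapshot -r rpool@001

# Recursive send: -R includes all descendant datasets, their snapshots
# and their properties, written to a single file
zfs send -R rpool@001 > /mnt/rpool.001.zsend

# Restore outline only: boot from media, recreate the pool, then
# cat /mnt/rpool.001.zsend | zfs receive -Fdu rpool
# (the guide's full procedure also recreates swap/dump and installs boot blocks)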

2 Likes

Hi DukeNuke2,
I have finally been able to create a recursive snapshot of all the filesystems using the following recommended commands:

 
# zfs list -r rpool
# zfs snapshot -r rpool@001
# zfs send -R -p rpool@001 > /mnt/hostname_rpool@001.snapshot1

Where /mnt is the USB drive, which is in UFS filesystem format. hostname_rpool@001.snapshot1 has been copied to /export/home and I need the following guidance:

( i ) List the table of contents of this archive
      How do I list the TOC of hostname_rpool@001.snapshot1?
( ii ) Create zpool called backups on the same USB drive
       zpool create backups c0t0d0s2 (USB)
       zpool create backups/usbdrive
       cannot mount 'backups/usbdrive': failed to create mountpoint
       filesystems successfully created, but not mounted
 
( iii ) Extract the content from this archive onto zpool backups
        cat hostname_rpool@001.snapshot1 | zfs recv -d backups/usbdrive

I am stuck at step ( ii ) and need confirmation on whether the command in ( iii ) will work. I would also like to confirm whether compression has already been applied
when creating the snapshot.
It's a very slow process and I can only do it with your valuable help.
Thanks a lot,
George

You are not compressing anything in the described procedure.

Creating an intermediary file (hostname_rpool@001.snapshot1) is an unnecessary step; you could (and should) pipe the "zfs send" command directly to a "zfs receive" one.

You are destroying your USB file system by creating a ZFS pool on the very same device. Assuming you have no free partition on your USB disk, you need to create a file-based pool.

There are also specific options you need to use to import a root pool as otherwise, some of the properties, especially mount points, will collide with your current root pool.
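
As a hedged illustration of the file-based pool idea (the backing file name, its size and the pool name are made up; the USB UFS file system is assumed to be mounted on /mnt and to have enough free space):

# Create a large backing file on the UFS-formatted USB drive
mkfile 50g /mnt/zpool-backing-file

# Build a pool on that file; -R gives it an alternate root so that any
# mount points received later land under /a instead of colliding with /
zpool create -R /a backups /mnt/zpool-backing-file

# Pipe the recursive stream straight into the new pool, no intermediate file;
# -u, where available, additionally keeps the received file systems unmounted
zfs send -R rpool@001 | zfs receive -du backups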

1 Like

Hi jlliagre,

I have finally been able to create a recursive snapshot of all the filesystems using the following recommended commands:

# zfs list -r rpool
# zfs snapshot -r rpool@001
# zfs send -R -p rpool@001 > /mnt/hostname_rpool@001.snapshot1

Where /mnt is the USB drive, which is in UFS filesystem format. hostname_rpool@001.snapshot1 has been copied to /export/home and I need the following guidance:

( i ) List the table of contents of this archive
      How do I list the TOC of hostname_rpool@001.snapshot1?
( ii ) Create zpool called backups on the same USB drive
       zpool create backups c0t0d0s2 (USB)
       zpool create backups/usbdrive
       cannot mount 'backups/usbdrive': failed to create mountpoint
       filesystems successfully created, but not mounted
       
( iii ) Extract the content from this archive onto zpool backups
        cat hostname_rpool@001.snapshot1 | zfs recv -d backups/usbdrive

I am stuck at step ( ii ) and need confirmation on whether the command in ( iii ) will work. I would also like to confirm whether compression has already been applied
when creating the snapshot.
It's a very slow process and I can only do it with your valuable help.
Thanks a lot,
George

Why are you posting the very same questions after I answered them?

Hi jlliagre,

Sorry for posting the same questions twice; I panicked at the thought that my update was lost, without realizing that it was on the next page.

I have already destroyed the UFS filesystem on USB drive with the following result which is fine:

# zfs list -r backups
NAME                         USED  AVAIL            REFER            MOUNTPOINT
backups                      131K   1.78T             31K               /mnt
backups/usbdrive              31K   1.78T             31K              /mnt/usbdrive
 
# zfs send -R -p rpool@001 | zfs recv -d backups/usbdrive

This is still going after running for more than 4 hrs. My understanding is that this step will transfer a snapshot of all the data from the local rpool and its datasets to the backups zpool on the USB drive. Is this correct? However, my intention has always been to capture a recursive snapshot of the local rpool into a single file with the following command:

# zfs send -R -p rpool@001 | zfs recv -d > /export/home/hostname_rpool@001.zdump

Is this command correct, and how do I include compression in the process as well? Also, how do I list the TOC of /export/home/hostname_rpool@001.zdump after it is successfully created?

Thanks so much again,

George


 
# zfs send -R -p rpool@001 | zfs recv -d backups/usbdrive

This is dubious, as I already wrote, you need specific options to import a root pool, at least a different root mountpoint.

There are many factors that might affect the performance of such a command.

Yes, but I suspect there will be errors because you'll end up with multiple file systems with the same mount point.

No. If you want to create a file, just redirect zfs send to it. That is what the documentation I linked to does.

You might just pipe through gzip.

There is no way I'm aware of to list the content of such a datastream; moreover, as it usually contains volumes and snapshots, it won't be as straightforward as with a tar, cpio, or ufsdump archive.
The only way to make sure a stored datastream is valid and to list the files it contains is to extract (receive) it somewhere.
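
In concrete terms, a hedged sketch of the two points above (file names are placeholders):

# Write the recursive replication stream straight to a compressed file
zfs send -R rpool@001 | gzip > /mnt/rpool.001.zsend.gz

# No table of contents can be listed from the stream itself; the only real
# verification is to receive it somewhere, e.g. into a scratch pool that was
# created with an alternate root (-R) as described earlier
gunzip -c /mnt/rpool.001.zsend.gz | zfs receive -du backups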

1 Like

Hi jlliagre,

I tried to boot up the local Solaris 10 x86 operating system (instead of from the installation disk) but encountered the following error:

cannot mount '/export': directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Nov 15 10:50:17 svc.startd[10]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
svc.startd[10]: system/filesystem/local:default failed fatally: transitioned to maintenance

The boot process runs 'zfs mount -a', which generates the same error when run manually (cannot mount '/export': directory is not empty), but I don't know which table (/etc/.../*tab?) it is consulting to get '/export', and yet I can see this filesystem has already been mounted using 'df -h', as follows:

 
rpool/export/home        228G   4.6M    204G   1%       /export/home
rpool                     228G     49K   204G   1%       /rpool

So I don't understand where the issue is with mounting /export, which is preventing the system from booting into multi-user mode.

Not only have I not been able to complete a ZFS backup, but I appear to have disturbed the original mount points of the local rpool as well.

Any idea on what to do?

Once again, I can only fix this issue with your persistent urgent advice.

Also, can I continue to generate the ZFS backup with zfs send -R -p rpool@001 > /mnt/usbdrive/hostname_rpool@001 once the system is running in multi-user mode, i.e. with rpool mounted?

Thanks in advance,

George

Just read this thread again from the beginning.

Creating snapshots cannot have any adverse effects.

Importing a root pool without taking specific precautions will create trouble, which is what you are experiencing.

In post 9 I wrote:

There are also specific options you need to use to import a root pool as otherwise, some of the properties, especially mount points, will collide with your current root pool.

In post #13 I wrote:

This is dubious, as I already wrote, you need specific options to import a root pool, at least a different root mountpoint.

and

Yes, but I suspect there will be errors because you'll end up with multiple file systems with the same mount point.
1 Like

Hi jlliagre,

Can you offer any advice on what to troubleshoot to get out of this hole? Fortunately, I managed to boot from another partition which also has rpool, and it is working. However, it is a different system and I still couldn't get in as root. Nevertheless, I need to find out how this rpool setup differs from the previous one that I backed up, which ended up unable to boot to multi-user mode due to something in the /export folder.

Some people have advised me to remove the home folder under /export (copying it somewhere first) before rebooting, but I hesitate for fear of being locked out of the system altogether.

I am very much dependent on your guidance from here.

Many thanks again,

George

It is not easy without knowing precisely what you did. I would probably start by destroying backups/usbdrive, if this is where you imported the stream.
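
A hedged sketch of that clean-up (dataset names follow this thread; check with zfs list before destroying anything):

# See what was received and where it wants to mount
zfs list -r backups
zfs get -r mountpoint backups

# Remove the received copies (or simply export the whole backups pool)
zfs destroy -r backups/usbdrive
# zpool export backups

# "directory is not empty" means stray entries were left underneath the
# /export mount point; once nothing else mounts there, inspect it and
# retry the mounts
ls -la /export
zfs mount -a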

Ok jlliagre,
I am so glad you haven't deserted me now that we have come to a crunch. Let me list out as many of the commands as I can recall using to carry out the backup:

 
# zfs list -f rpool
# zfs snapshot -r rpool@001
# zpool create -f -R backups c0t0d0s2 (USB)
# zfs create -f  backups/usbdrive
# zpool import -R backups
# zfs list -r backups
# zpool status -R -v backups
# zfs send -R -p rpool  > /mnt/usbdrive/servername_sr02_rpool@001.snapshot1

Below is the output from the alternative multi-boot environment (servername_srs_1) that I have ended up booting into on the same box, the only one I can boot up successfully, yet I still haven't worked out what its root password is:

 
-bash-3.2$ hostname
servername_sr02
-bash-3.2$ uname -a
SunOS servername_sr02 5.10 Generic_147441-09 i86pc i386 i86pc
-bash-3.2$ df -h
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/servername_srs_1  228G   6.4G   204G     4%    /
/devices                0K     0K     0K     0%    /devices
ctfs                    0K     0K     0K     0%    /system/contract
proc                    0K     0K     0K     0%    /proc
mnttab                  0K     0K     0K     0%    /etc/mnttab
swap                   8.2G   1.0M   8.2G     1%    /etc/svc/volatile
objfs                   0K     0K     0K     0%    /system/object
sharefs                 0K     0K     0K     0%    /etc/dfs/sharetab
rpool/ROOT/servername_srs_1/var 228G   1.6G   204G     1%    /var
swap                   8.2G   1.9M   8.2G     1%    /tmp
swap                   8.2G   376K   8.2G     1%    /var/run
rpool/export           228G    32K   204G     1%    /export
rpool/export/home      228G   4.6M   204G     1%    /export/home
rpool                  228G    49K   204G     1%    /rpool
/vol/dev/dsk/c0t0d0/sol_10_811_x86
-bash-3.2$ /sbin/zpool status -v rpool
  pool: rpool
 state: ONLINE
 scan: resilvered 9.59G in 0h12m with 0 errors on Wed Mar 14 12:09:12 2012
config:
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
errors: No known data errors
-bash-3.2$ ./zfs list -r rpool
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
rpool                                          24.4G   204G    49K  /rpool
rpool@001                                        30K      -    49K  -
rpool/ROOT                                     10.7G   204G    31K  legacy
rpool/ROOT@001                                     0      -    31K  -
rpool/ROOT/10                                  14.1M   204G  4.37G  /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/10@001                               299K      -  4.37G  -
rpool/ROOT/10/var                              6.50M   204G  1011M  /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/10/var@001                             1K      -  1011M  -
rpool/ROOT/base_srs_install                    38.2M   204G  4.48G  /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/base_srs_install@001                 299K      -  4.48G  -
rpool/ROOT/base_srs_install/var                14.4M   204G  1.47G  /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/base_srs_install/var@001               1K      -  1.47G  -
rpool/ROOT/servername_srs_1                     189M   204G  6.37G  /
rpool/ROOT/servername_srs_1@001                3.29M      -  6.37G  -
rpool/ROOT/servername_srs_1/var                 112M   204G  1.56G  /var
rpool/ROOT/servername_srs_1/var@001            10.5M      -  1.48G  -
rpool/ROOT/servername_srs_2                    10.5G   204G  6.47G  /tmp/.liveupgrade.278.554/lu_zone_update.554
rpool/ROOT/servername_srs_2@base_srs_install   75.0M      -  4.37G  -
rpool/ROOT/servername_srs_2@servername_srs_1   71.3M      -  4.47G  -
rpool/ROOT/servername_srs_2@servername_srs_2   67.2M      -  6.37G  -
rpool/ROOT/servername_srs_2@001                13.2M      -  6.38G  -
rpool/ROOT/servername_srs_2/var                3.74G   204G  3.33G  /tmp/.liveupgrade.278.554/lu_zone_update.554/var
rpool/ROOT/servername_srs_2/var@base_srs_install  23.8M      -  1011M  -
rpool/ROOT/servername_srs_2/var@servername_srs_1  14.8M      -  1.46G  -
rpool/ROOT/servername_srs_2/var@servername_srs_2  34.7M      -  1.48G  -
rpool/ROOT/servername_srs_2/var@001               3.36M      -  3.33G  -
rpool/dump                                     1.00G   204G  1.00G  -
rpool/dump@001                                   16K      -  1.00G  -
rpool/export                                   4.83M   204G    32K  /export
rpool/export@001                                   0      -    32K  -
rpool/export/home                              4.80M   204G  4.61M  /export/home
rpool/export/home@001                           192K      -  4.61M  -
rpool/swap                                     12.7G   213G  4.16G  -
rpool/swap@001                                     0      -  4.16G  -
-bash-3.2$ ls -lt /export
total 3
drwxr-xr-x   3 root     root           3 Nov 16 00:08 home
-bash-3.2$ cd /export
-bash-3.2$ ls
home
-bash-3.2$ ls -lt
total 3
drwxr-xr-x   3 root     root           3 Nov 16 00:08 home
-bash-3.2$ cd home
-bash-3.2$ ls -lt
total 3
drwxr-xr-x  15 support  sys           23 Nov 16 19:02 support
-bash-3.2$ cd support
-bash-3.2$ ls -lt
total 6
drwxr-xr-x   2 support  sys            3 Mar 10  2012 Desktop
drwxr-xr-x   2 support  sys            2 Mar 10  2012 Documents

I have taken the following action in an attempt to resolve the mount issue of '/export', by removing the home directory under the /export mount point which is preventing '/export' from mounting for servername_sr02:

 
- Boot up in single user mode & login as root
- Unmount /export/home
- Check the /export directory (mount point) 
- Remove the /export/home directory
- Reboot, but found another set of similar errors as follows:
cannot mount '/export': directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Nov 15 10:50:17 svc.startd[10]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
svc.startd[10]: system/filesystem/local:default failed fatally: transitioned to maintenance
ERROR: svc:/system/filesystem/minimal:default failed to mount /var/run (see 'svcs -x' for details)
Nov 18 00:44:06 svc.startd[10]: svc:/system/filesystem/minimal:default: Method "/lib/svc/method/fs-minimal" failed with exit status 95.
Nov 18 00:44:06 svc.startd[10]: svc:/system/filesystem/minimal:default: failed fatally: transitioned to maintenance...
Requesting System Maintenance Mode
Console login service(s) cannot run
Root password for system maintenance (Control-d to bypass)

( 1 ) Why am I encountering yet another svc error when booting into servername_sr02? I am still not clear whether the same rpool is used by both of the following boot entries/partitions in GRUB:

 
title servername_srs_1
findroot (BE_servername_srs_1,0,a)
bootfs rpool/ROOT/servername_srs_1
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title servername_srs_1 failsafe
findroot (BE_servername_srs_1,0,a)
bootfs rpool/ROOT/servername_srs_1
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe
title servername_srs_2
findroot (BE_servername_srs_2,0,a)
bootfs rpool/ROOT/servername_srs_2
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title servername_srs_2 failsafe
findroot (BE_servername_srs_2,0,a)
bootfs rpool/ROOT/servername_srs_2
kernel /boot/multiboot -s
module /boot/amd64/x86.miniroot-safe

( 2 ) Can you confirm whether I am still using the same rpool for both boot environments? They appear to have different root '/' filesystems, which would explain why I am not able to su to root with the original password on the failed servername_srs_2.
( 3 ) I wouldn't mind staying in the current servername_srs_1 boot environment provided that I can hack in as root.
( 4 ) Also, I can no longer boot from the Solaris 10 x86 DVD installation disk after having inadvertently reset the boot firmware. Now the system just shows ........................... and goes no further.
Hope this update hasn't confused you completely.
As always, thank you so much for sticking around.
George

The third command is syntactically incorrect:

zpool create –f –R backups c0t0d0s2

Beyond the incorrect hyphens, the -R option is missing a parameter.
Did you really use the -R option? If yes, with what parameter, which is missing here?
Are you sure c0t0d0 was the USB drive?
Why did you use the -f option?

Hi Jlliagre,

I think the missing parameter was /mnt. The full syntax would have been:

zpool create -f -R /mnt backups c0t0d0s2

Where -R is used to create the pool with an alternate root on the same system, according to my understanding.

I am absolutely sure that disk c0t0d0 is the USB drive. I used format -e and found it to be 1.9 TB in size, as opposed to the other two at around 239 GB.

I used -f with the intention of forcing the zpool creation after it failed without it. Nevertheless, this is only my recollection and may not be the exact command used. It was unfortunate that I couldn't get into the text-mode console at the time to copy and paste every step.

Btw, what do I specify in GRUB to boot up in text mode (ttyb.....) so I can SSH in to a console screen? I normally use the following sequence of steps to get into single-user mode, for instance when booting from the Solaris 10 x86 installation DVD:

( i ) e (edit)
( ii ) e (edit)
( iii ) append -s after CDROM, followed by ENTER
( iv ) b (boot up)

Btw, this Sun Ray server has not had a backup done in the past so I am hesitant to tamper with it apart from hoping to crack its password, which would save a lot of headaches.

I suspect that it would be a similar sequence, but I am not sure of the syntax to boot from the local disk and into text/ttyb.... mode.
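
If I read the GRUB entries above correctly, I am guessing the local-disk equivalent would be along these lines (just my assumption; the console device might need to be ttya rather than ttyb depending on which serial port is cabled):

# In GRUB: highlight the servername_srs_1 entry, press "e", highlight the
# kernel$ line, press "e" again, and extend it roughly like this:
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS,console=ttyb -s
# then press ENTER and "b" to boot single-user with the console on the serial port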

Thanks again,

George