Restore of Netapp FC lun targets used as the disks for a zpool with exported zfs file systems

So,

We have a NetApp storage solution, with SPARC T4-4s running LDoms and client zones inside the LDoms, using FC for storage comms. Here's the basic setup:

FC LUNs are exported to the primary domain on the SPARC box. Using ldm, they are then exported to the LDom as vdisks. At the LDom level, zpools are created on the vdisks. ZFS filesystems are then created and provided to the zone as datasets, with the zfs mountpoint set to a local path in the zone.

Here's the rub/question: what is the procedure to restore the FC LUN on the NetApp from a snapshot without having to reboot the zone?

If I do it without rebooting the zone, the zpool in the LDom shows the LUN as degraded/corrupt.

Ideas?

Reverting a storage snapshot under a live filesystem is not possible.
Or any snapshot, for that matter.

You will need to export the zpool, then restore the storage snapshot and import the zpool.
No reboot should be required, unless it is a root zpool; in that case you will have to power off the LDom, revert, and power it back on.
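Spelled out, that export/revert/import sequence might look roughly like the following sketch. The pool name `tank` and zone name `myzone` are placeholders, and the actual snapshot restore happens on the NetApp filer between the two halves:

```shell
# In the LDom that owns the pool: stop whatever is using the
# filesystems first (here, the zone consuming the datasets).
zoneadm -z myzone halt        # 'myzone' is a placeholder zone name

# Export the pool so nothing holds the vdisk open:
zpool export tank             # 'tank' is a placeholder pool name

# ... now restore the LUN from the snapshot on the NetApp filer ...

# Re-import the pool and restart the zone:
zpool import tank
zoneadm -z myzone boot
```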

Is there a reason you are not using the built-in ZFS snapshotting? It is much more flexible, and a lot of things can be done on live systems (no need to export the zpool), although you will still need to stop the services which are using those filesystems.
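For comparison, a native ZFS snapshot and rollback is just a couple of commands (the pool/dataset names below are placeholders):

```shell
# Take a recursive snapshot of the dataset:
zfs snapshot -r tank/zonedata@before-change

# List the snapshots for that dataset:
zfs list -t snapshot -r tank/zonedata

# Roll back to it later; -r destroys any newer snapshots in the way:
zfs rollback -r tank/zonedata@before-change
```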

Hope that helps
Regards
Peasant.


The reasons are numerous and lengthy, but the TL;DR is infrastructure and DR.

With reference to your reboot comment: this is a zpool with datasets delegated to a non-global zone. You can't export the zpool while a dataset is in use (i.e. while the zone the dataset is assigned to is running), hence the requirement for the zone reboot.

I'm looking into replicating the functionality of time-slider with scripts and cron jobs.

I hope you're not using hardware snapshots to replicate live filesystems now.

You seem to be thinking of using zfs snapshots. That would be much better, and not hard to implement.

Just remember to disable the SSH escape character if you use something like:

zfs send ... | ssh -e none  ... zfs receive ... 
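Spelled out with placeholder hostnames and dataset names, such a replication pipeline might look like this; `-e none` stops ssh from interpreting `~` escape sequences that may occur in the binary send stream:

```shell
# Full send of a snapshot to a remote host.
# 'backuphost', 'tank/data', and 'backuppool/data' are placeholders.
zfs send tank/data@backup.0 | \
    ssh -e none backuphost zfs receive -F backuppool/data

# Incremental follow-up: send only the delta between the older
# snapshot (backup.1) and the newer one (backup.0).
zfs send -i tank/data@backup.1 tank/data@backup.0 | \
    ssh -e none backuphost zfs receive backuppool/data
```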

Yes, we are currently using NetApp-managed snapshots for backup and recovery.

However, after this conversation and other reading online, specifically this:

Automatic ZFS Snapshot Rotation on FreeBSD | Thinking Sysadmin

I've modified that code into the script below:

#!/usr/bin/bash
# Path to ZFS executable:
ZFS=/usr/sbin/zfs
# Parse arguments:
TARGET=$1
SNAP=$2
COUNT=$3
mount=`$ZFS get -H -o value mountpoint "$TARGET"`
# Function to display usage:
usage() {
    scriptname=`/usr/bin/basename $0`
    echo "$scriptname: Take and rotate snapshots on a ZFS file system"
    echo
    echo "  Usage:"
    echo "  $scriptname target snap_name count"
    echo
    echo "  target:    ZFS file system to act on"
    echo "  snap_name: Base name for snapshots, to be followed by a '.' and"
    echo "             an integer indicating relative age of the snapshot"
    echo "  count:     Number of snapshots in the snap_name.number format to"
    echo "             keep at one time.  Newest snapshot ends in '.0'."
    echo
    exit 1
}
# Basic argument checks (quoted so empty values don't break the test):
if [ -z "$COUNT" ] ; then
    usage
fi
if [ ! -z "$4" ] ; then
    usage
fi
# Snapshots are numbered starting at 0; $max_snap is the highest numbered
# snapshot that will be kept.
max_snap=$(($COUNT - 1))
# Clean up oldest snapshot ($mount is already an absolute path):
if [ -d "$mount/.zfs/snapshot/$SNAP.$max_snap" ] ; then
    $ZFS destroy -r "$TARGET@$SNAP.$max_snap"
fi
# Rename existing snapshots, oldest first:
dest=$max_snap
while [ $dest -gt 0 ] ; do
    src=$(($dest - 1))
    if [ -d "$mount/.zfs/snapshot/$SNAP.$src" ] ; then
        $ZFS rename -r "$TARGET@$SNAP.$src" "$TARGET@$SNAP.$dest"
    fi
    dest=$src
done
# Create new snapshot:
$ZFS snapshot -r "$TARGET@$SNAP.0"

It appears to be working quite nicely.
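For reference, wiring a script like this into cron for tiered rotations might look like the entries below. The script path `/usr/local/bin/zfs-rotate.sh` and the dataset `tank/zonedata` are placeholders:

```shell
# Hypothetical crontab entries: keep 24 hourly, 7 daily, and
# 4 weekly snapshots of tank/zonedata.
0 * * * *   /usr/local/bin/zfs-rotate.sh tank/zonedata hourly 24
0 1 * * *   /usr/local/bin/zfs-rotate.sh tank/zonedata daily  7
0 2 * * 0   /usr/local/bin/zfs-rotate.sh tank/zonedata weekly 4
```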