How to back up when there is no tape available?

Hi All!

Let's say there are a few Solaris servers connected to a NetApp storage device, but with no tape library or tape device attached to any of them. Assuming the O.S. is installed on the root disks, how can we recover if the O.S. fails on one of the systems? Please share your ideas.

:stuck_out_tongue: Buy a tape drive :stuck_out_tongue:

Okay, seriously, you can use something equivalent to NIM (AIX) or Kickstart (RHEL). I think for Solaris it's called Jumpstart, but I'm not sure how flexible it is and whether you can restore your OS with all your customisations, or if it's just an installation tool. At worst, perhaps you could store a ufsdump somewhere on another server, so after a basic install and network config, you could restore from that. The problem may be storage space on disk though. You would need a dump of each filesystem on the boot disks, plus information to create suitable partitions. It's been over 10 years since I had a Solaris server (it was Sol 2.6!) so I can't remember all the things we had to do for our DR, but that was using 8mm tape.

Something else I'm aware of, but have not had time to investigate yet, is a product called Storix, which seems to offer a network-based equivalent to the AIX mksysb. It says it allows you to restore to dissimilar hardware, or even virtualise (or return to physical, or move from one virtual host to another, e.g. VMware to Xen). Apparently it is available for most OSes, including Solaris 9, 10, 11 & 11/11.

Storix Solaris Support

Like I said, I haven't spent time on it, but the glossy marketing information seems good. If anyone has experience of it, I'd be grateful for any comments before I get into it myself.

Robin
Liverpool/Blackburn
UK

1 Like

With no tape available the obvious answers are USB connected storage or backing up to NFS storage on another node.

What version of Solaris is it? UFS or ZFS filesystem?

Compression of the output on a pipe (e.g. | bzip2 -9 > mount_path/file) helps with size and reduces the time spent on the network or other media. Generally, until you hit a media speed limit: compress is faster than gzip -1, which is faster than gzip -9, which is faster than bzip2 -1, which is faster than bzip2 -9 (with the compression ratio improving in roughly the opposite order). There are a lot of tools out there:
Lossless compression - Wikipedia, the free encyclopedia
Compression Tools Compared | Linux Journal
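As a sketch of the pipe idea above (the paths are made-up examples, and a tar stream stands in for the Solaris-specific ufsdump so the pipeline itself can be tried anywhere):

```shell
# Demonstrates compressing a backup stream on a pipe before it hits
# the "media" (a local file here; it could equally be an NFS mount path).
# NOTE: the paths, and tar standing in for ufsdump, are assumptions.
mkdir -p /tmp/pipe_demo/src /tmp/pipe_demo/restore
printf 'hello world %.0s' $(seq 1 5000) > /tmp/pipe_demo/src/data.txt

# Backup: stream the data through gzip on its way to the target.
tar cf - -C /tmp/pipe_demo src | gzip -9 > /tmp/pipe_demo/backup.tar.gz

# Restore: decompress the stream and unpack it.
gzip -dc /tmp/pipe_demo/backup.tar.gz | tar xf - -C /tmp/pipe_demo/restore

# Verify the round trip is byte-identical.
cmp /tmp/pipe_demo/src/data.txt /tmp/pipe_demo/restore/src/data.txt \
  && echo "round-trip OK"
```

The same shape works with any of the compressors above; only the filter in the middle of the pipe changes.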

Another way to get compression implicitly is to mount a zip archive as a file system. Everything written into the zip, wherever it lives, is stored compressed; no pipes needed. Google Code Archive - Long-term storage for Google Code Project Hosting.

1 Like

I have stopped using Solaris. But while I did, I used flar to do bare metal installs. I would install the OS, my favorite patch bundles, Freeware, and local utilities. Then I would do a flarcreate. The resulting flar gets stored on a NFS server (netapp in my case). Then when I need it, I would just do a Solaris install, and pick the options to use an NFS mounted flar image.
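For anyone following the same route, the flow looks roughly like this (the NFS server name, share and archive name are made-up examples; check flarcreate(1M) on your release for the exact options):

```
bash-3.00# mount -F nfs netapp:/vol/flars /mnt
bash-3.00# flarcreate -n sol10_golden -c /mnt/sol10_golden.flar
```

Then at rebuild time, boot the Solaris installer and choose a Flash archive install, pointing it at the NFS location of the .flar file.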

I remember reading that Oracle planned to drop flarcreate. Don't know if they did. Also don't know if a replacement utility is available.

flarcreate is still there for Solaris 11.1 - I use it.

My interpretation of the OP was how to quickly recover the O/S (slice or pool) if the system won't boot, not how to back up and recover the whole system. Hence my questions. Perhaps I misunderstood the question.

I just want to back up whatever file systems are on the root disks, just in case those fail. I use

Solaris 10

---------- Post updated at 01:50 PM ---------- Previous update was at 01:49 PM ----------

It's Solaris 10

ufs

Solaris 10, yes, and is the root filesystem ufs or zfs?

---------- Post updated at 12:37 PM ---------- Previous update was at 11:51 AM ----------

Right then, I get the question; there could be many answers and every professional could have a different opinion. You asked for ideas to be shared so here goes.

The scenario I've faced many times is that I have a very big system with lots of non-root filesystems and tons of storage. Minimum down time is critical but suddenly the system won't boot. I just want to get the system on its feet so that I can take a look around.

Backup:

  1. Create an NFS share on a remote system and mount it
  2. Pick a fairly quiescent time and 'fssnap' the root filesystem (to freeze a point-in-time image of it temporarily), sending the snapshot's backing store ('backing_store' option on fssnap) to one of your other local filesystems.
  3. Run 'ufsdump' to backup the whole filesystem to the NFS storage.
  4. Make a note of your IP interface name (e.g. e1000g0 or whatever)
  5. Make a note of all the VTOCs

Note that you need to gauge the frequency of the backup because, in the event of a recovery, any new users, groups, security changes and patches made since the last dump will be missing.
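Putting steps 1-3 together, a session might look something like this (the server name, share and file names are made-up examples; note that fssnap's backing store must live on a different filesystem from the one being snapped, hence /var/tmp here):

```
bash-3.00# mount -F nfs backuphost:/export/dumps /mnt
bash-3.00# fssnap -F ufs -o backing_store=/var/tmp/rootsnap /
/dev/fssnap/0
bash-3.00# ufsdump 0f /mnt/root_full.ufsdump /dev/rfssnap/0
bash-3.00# fssnap -d /
```

The last command deletes the snapshot once the dump is safely on the NFS storage.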

Recovery:
Suddenly the system won't boot so.....

  1. Boot from CD into single user:
boot cdrom -s
  2. Use 'format' to check disk visibility and check slicing of disks.
  3. After confirming that local recovery (e.g. fsck, etc.) will not fix the issue, 'newfs' the root disk slice. Root filesystem is now empty.
  4. Mount the new empty root filesystem under /a
  5. Use 'ifconfig' to manually 'plumb', 'address' and 'up' your network interface.
  6. Check that you can ping the NFS node holding your ufsdump(s)
  7. Mount the remote NFS storage under /mnt
  8. Change directory to the top of your (empty) hard disk root
  9. 'ufsrestore' the backup from the NFS storage to the root hard disk
  10. 'sync' and 'umount' the NFS storage and the root hard disk and do an orderly shutdown.
  11. System should now boot.

Note that if your /usr filesystem is separate from the root filesystem, you should consider backing that up for emergency recovery too, since the recovered system will probably drop into maintenance mode if it cannot mount /usr.
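The recovery steps above, as a rough console session (the device, interface, address and file names are made-up examples; the installboot line is only needed if the boot block was lost, e.g. on a replacement disk):

```
ok boot cdrom -s
# format
# newfs /dev/rdsk/c1t0d0s0
# mount /dev/dsk/c1t0d0s0 /a
# ifconfig e1000g0 plumb
# ifconfig e1000g0 192.168.1.50 netmask 255.255.255.0 up
# ping backuphost
backuphost is alive
# mount -F nfs backuphost:/export/dumps /mnt
# cd /a
# ufsrestore rf /mnt/root_full.ufsdump
# rm /a/restoresymtable
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0
# cd /; umount /a; umount /mnt; sync
# init 0
```

The restoresymtable file is a checkpoint file left behind by 'ufsrestore r' and is safe to remove once the restore is complete.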

As I say, all professionals have their own opinion and you may well get a torrent of alternatives posted to this thread. You may also have further questions about what I have written above. Feel free to ask.

You could, of course, use 'flarcreate' to create a flash of just your root disk filesystem and that is certainly a good option. The above is just the method that I have used on Solaris 10 with ufs. You can, of course, test your recovery procedure by using a dummy root slice elsewhere on the system (not the real one).

Hope that helps.

2 Likes

Thanks a lot for your input. I will investigate the

flarcreate

command.
As an example, below is what I want to back up, in order to recover if the root disk fails:

bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0      9.8G   493M   9.3G     5%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    15G   1.7M    15G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/dev/dsk/c1t0d0s3      9.8G   3.6G   6.1G    38%    /usr
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                       9.8G   493M   9.3G     5%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                       9.8G   493M   9.3G     5%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
/dev/dsk/c1t0d0s4      9.8G   108M   9.6G     2%    /var
/dev/dsk/c1t0d0s5       49G    80M    49G     1%    /tmp
swap                    15G    56K    15G     1%    /var/run
/dev/dsk/c1t0d0s6      9.8G    38M   9.7G     1%    /opt
bash-3.00#

ufsdump will take a complete backup of the slice, everything exactly as it is, facilitating an exact recovery to that state, so it includes the likes of /platform, /dev, et al.

Full recovery of the hard disk root filesystem slice will make the O/S bootable (except for the note I made about /usr). You can then look at all the non-root (non-O/S) filesystems to decide whether they are repairable or also need to be recovered.

Your post shows that your /usr is, indeed, separate, so you should ufsdump that too. In the event of an emergency recovery, ufsrestore the root slice first and then, if it goes into maintenance, ufsrestore the /usr slice too.

So is this a single disk server then? It would seem so.

It might be worth taking a partition map too, to aid your recovery if you have to slice a replacement disk. If you run format you will probably get a list of one disk, c1t0d0. Select it, then go for Partition, then Print. This detail can be useful for ensuring your restore will fit the space you might need to reallocate following a completely failed drive.

Keep it with your dump files on another server, and whilst you are at it, keep more than one copy and refresh them regularly.
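One way to capture that partition map in a directly restorable form (the device matches the df output above, since slice 2 conventionally covers the whole disk; the /mnt path is a made-up example) is prtvtoc, whose output can later be fed straight back with fmthard:

```
bash-3.00# prtvtoc /dev/rdsk/c1t0d0s2 > /mnt/c1t0d0.vtoc
bash-3.00# fmthard -s /mnt/c1t0d0.vtoc /dev/rdsk/c1t0d0s2
fmthard:  New volume table of contents now in place.
```

At recovery time you would run the fmthard line against the replacement disk before the newfs/ufsrestore steps.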

Some questions that remain:-

  • Do you have the space to do this?
  • Are you planning on storing server A on server B and Server B on server A?

If so, then the dump files of server A will get included in the dump of server B and written back to server A; the next backup of server A will then include the image of server B, which itself contains the images of server A, so these would grow exponentially.

Have you got a server or pair available just to hold the dumps of all the others? Although it was a little flippant, my suggestion is still to buy a tape drive. That way, you can get your copies off-site in case you lose the building to fire/flood/engineering/power cut etc.

Robin

@rbatte1.....I was assuming there was no tape on this one system, not that there was no tape on the site.

@fretagi.......what is the site backup strategy?

For this particular server, which I am still building, I will have all its file systems on a NetApp, and it will host an Oracle database. A full backup of the database will be required at the end of each month, but I am not sure about the end of each week.