Ufsrestore

Good Afternoon,

I'm going to attempt a ufsrestore of a Solaris 9 machine from a connected NAS containing the ufsdumps. The idea is to be able to take ufsdumps of a failed machine (machine 1) and use them to set up a backup machine (machine 2). (I'm testing for disaster recovery.)

Note that my backup machine (2) has already been imaged by doing a (RAID1) drive replacement on machine 1, then moving that drive to machine 2, then creating the RAID1 mirror drive on machine 2. In other words, the second machine is already a duplicate of the first, just a couple of months out of date.

Is this what I will use, or is there something simpler?

/usr/sbin/ufsrestore i|r|R|t|x [abcdfhlmostvyLT] [archive_file] [factor] [dumpfile] [n] [label] [timeout] [filename...]

Normally when a disaster recovery (DR) is performed, there is nothing already installed on the system, i.e. bare metal. So starting with a mirror image already installed is unorthodox. Also, if the image is old there may be files on there which have since been deleted from the system, etc.

To avoid me having to repeat myself, read this from one of my posts some time ago (couldn't get a link to work for whatever reason, so cut and paste used):

This was referring to Solaris 10 but equally applies to Solaris 9 with ufs.

If the hardware platform you are recovering to is not identical (processor type, disk controllers, network interfaces, etc.) then some adjustment may need to be done after the ufsrestore, like modifying /etc/vfstab, modifying /etc/system, creating new device nodes (/dev/dsk/xxxxx, for new filesystem locations), plumbing in new network interfaces, etc.
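
As a rough sketch of a couple of those adjustments (the device names here are hypothetical, and the paths assume the restored root is mounted on /a as described later in this thread):

# vi /a/etc/vfstab      (e.g. change /dev/dsk/c1t0d0s0 to /dev/dsk/c0t0d0s0 if the boot disk has moved controllers)
# touch /a/reconfigure  (forces a reconfiguration boot so device nodes are rebuilt for the new hardware on first boot)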

NOTE: If your /usr filesystem is separate from root then you will need to restore that too, otherwise the system will go into maintenance mode when it boots.
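
A minimal sketch of that extra step, assuming root has already been restored onto a filesystem mounted at /a, and assuming /usr lives on slice 6 (the slice number and dump file name are only placeholders):

# newfs /dev/rdsk/c0t0d0s6        (slice 6 is hypothetical; use whatever slice holds /usr)
# mount /dev/dsk/c0t0d0s6 /a/usr
# cd /a/usr
# ufsrestore rf /mnt/<usr dumpfile>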

There's no substitute whatsoever for actually doing it yourself and asking the questions as you go. We're here to help.

Also, with DR planning, using flarcreate is the more supported means for DR.

Hi,

If the target machine is, well, "up to date" or close to it, I would be tempted to run ufsrestore in interactive mode, something like:

# ufsrestore ivf {filename}

This will give you the option of selecting individual files or directory structures for restore.
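
A rough sketch of such an interactive session (the dump path and the file name etc/hosts are only placeholders):

# ufsrestore ivf /mnt/<dumpfile>
ufsrestore > ls
ufsrestore > add etc/hosts
ufsrestore > extract
Specify next volume #: 1
set owner/mode for '.'? [yn] n
ufsrestore > quit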

As Hicksd8 states, you'd be better off using a "FLAR" (Flash Archive) for the DR scenario or, if possible, SAN replication.

Regards

Gull04

Thanks..

There will be a point when I would like to be able to start from bare metal, but I'm thinking that I should first focus on this more likely (and presumably easier) scenario, since I literally have no idea how to do a ufs restore. I have verified that the clone I made can reach my NAS (via the switch), mount NAS directories, and I can remote into the clone from other machines.

My ufsdumps are of entire disk partitions.

I'm not sure I understand the syntax of ufsrestore. Is it assumed that I will move to the destination machine's appropriate working directory and then execute the command, and that the dump file given to ufsrestore {filename} contains the files for that directory? Is it expecting an empty destination directory? Should I wipe the existing files, or will it overwrite them?

Not sure that I understand your exact question but let me elaborate.

(Assume we are restoring the main root (bootable) partition.)

Boot into single user from DVD:

ok> boot cdrom -s

At the # prompt, run format; if necessary, label the disk and manipulate the VTOC (Virtual Table of Contents) to the required boundaries (that's partitions in MS talk). Write out the VTOC. Then use newfs to create a new ufs filesystem, and mount that new filesystem on the /a mountpoint (always available when booted from the DVD).
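
In rough outline that sequence looks like this (a sketch only; c0t0d0s0 is just an example target slice):

# format                          (label the disk if needed, adjust the slices, write out the VTOC)
# newfs /dev/rdsk/c0t0d0s0
# mount /dev/dsk/c0t0d0s0 /a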

Change directory to the top of your new filesystem and all you will see is lost+found. Other than that, the filesystem is empty.

Now mount the (remote) filesystem containing your dump file on /mnt.

Now take your previously created ufsdump file and tip the whole contents onto this new filesystem:

# ufsrestore rf /mnt/<dumpfile>

Hope that helps.

Also do note that you will still need to write out a new bootblk to the disk if it hasn't already got one, otherwise you cannot boot from that disk. The contents of the bootblk vary with machine architecture. Search this forum for how to do that. It's well documented here already.
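
On SPARC, for example, this is typically done with installboot while still booted from the DVD (a sketch; c0t0d0s0 is an example slice, and the x86 procedure differs):

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0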

OK, I'm at this stage. format and labelling seem to have worked fine (though I imagined it would ask me what to label it, but it didn't).

I'm unfamiliar with the VTOC. Using

prtvtoc /dev/rdsk/c0t0d0s2

(and the same on other slices) I can see the differences between my new machine and the machine that is the source of the ufsdumps.

How do I manipulate it? Everything I read seems to point to a pipe into fmthard, but I suspect that is skipping a step.

Not exactly sure (once again) of your question so I'll just write and hope it helps.

The VTOC is a table showing how the disk is sliced. It always pays to print out all VTOCs of production systems so you know how big each slice is. Of course, it doesn't matter if you configure the new disk with some or all slices bigger than on the box you are trying to recover; the important thing is that you don't restore a filesystem to a slice that is too small, run out of space during the restore, and have to start again.

fmthard is just a special recovery command that will read a previously output VTOC file (these are straight ASCII so you can cat them) and create an identical VTOC on the new disk. So the slices are identical, which is great, but not much good if you are deliberately trying to install bigger slices.
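
For example, to copy the slice layout exactly (a sketch; the device names and the file path are placeholders):

# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/source.vtoc      (run on the source machine; slice 2 is the whole disk)
# fmthard -s /var/tmp/source.vtoc /dev/rdsk/c0t0d0s2     (run on the recovery machine against its own disk)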

You can manipulate the VTOC within the format command (always taking great care to ensure no slice overlaps another) and then write out that VTOC to the disk.
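
A rough idea of what that looks like interactively (the disk number, slice, and size are placeholders; note that sizes can be entered in cylinders or blocks rather than gigabytes if you want to match the source exactly):

# format
Specify disk (enter its number): 0
format> partition
partition> print
partition> 0
Enter partition id tag[root]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]:
Enter partition size[0b, 0c, 0.00mb, 0.00gb]: 2048c
partition> label
Ready to label disk, continue? y
partition> quit
format> quit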

As far as disk labelling is concerned, usually for Solaris, disks have a SUN label, and perhaps the one you have there already has that label on it from previous use. If format can't see a label, or doesn't recognise the label, it will surely tell you.

Also, format has an expert mode:

# format -e

which will provide you with more options for things like labeling but also more options that you can screw up if you don't know what you are doing. Try it on your recovery machine first.

Hope that helps. If not, don't hesitate, post your specific questions. There's more than enough fire power on this forum to answer you.

Thanks again..

So I was able to make changes using format, then partition / print, on both machines, then made the new machine match the old. However, everything was listed in gigabytes (rounded to only two decimal places), and I think the partitions are more fine-grained than that. I had to massage the gigabytes at the 3rd decimal place a little to make the cylinders and blocks look the same as on the source machine.

Some of these are slightly smaller in gigabytes than the source. Will that be an issue? I'm thinking it won't unless the partitions are 100% full.

The prtvtoc outputs seem to match now.

Now what? Mount the NAS that has the ufsdumps?

Well, presumably you now have a disk slice on the recovery machine that will hold the root operating system.

Whilst still in single user (booted from the DVD) create a new ufs filesystem on that slice by:

# newfs /dev/rdsk/<device> (e.g. c0t0d0s0)

you can then mount this slice on /a

# mount /dev/dsk/<device> /a

you can now cd to it:

# cd /a

and do an ls -l and you should see just lost+found

Now, if your dump file is accessed via NFS, you need to use ifconfig to manually plumb, then configure, then up the network interface. Plumb lets the OS know the interface is there, configure means you set its IP address and subnet mask, and up activates the interface, after which you should be able to ping the NFS node.

Once you can do that you can mount the remote node filesystem handle under /mnt:

# mount -F nfs <ip address>:<nfs share handle> /mnt

After that you can ufsrestore the filesystem (ensuring you are still at the apex of your new empty filesystem).

Is that what you were asking?

Making progress.. Thanks again!

I'm up to plumb, configure, up, and I've gotten mostly through it, but on my cloned machine, when I run ifconfig -a I see

UP,BROADCAST,NOTRAILERS,MULTICAST,IPv4

whereas my existing machine shows

UP,BROADCAST,RUNNING,MULTICAST,IPv4

How do I get rid of NOTRAILERS and add RUNNING?

Also, is it "up"? I never used ifup, but did use plumb up.

Note: the machine is not currently hooked up to my switch.

You shouldn't need to worry about NOTRAILERS unless you need to route packets. I assume that your NFS node containing the ufsdump that you want to restore is on the same subnet?

So let's go through this briefly. You are booted from a distribution install media DVD and are in single user with a # (root) prompt. You now want to connect to your NFS node, so you need a working network interface. Let's assume that you need to reach 192.168.1.99 and you are going to set this node's IP address to 192.168.1.11. Of course, you are not compelled to use the actual production IP address of the system, only anything that will allow you to restore your dump file. Once the restore is done, upon reboot the system will come up on whatever IP address was configured on the system the dump was taken from. However, ENSURE that the IP address you use for the restore does NOT clash with another system. Assuming you are sure that the IP address you are configuring is unique, there's no danger in connecting it to your switch upfront; in fact it's beneficial because you can see immediate results on screen.

So here we go..... (I usually follow some commands with '&&' to display the updated config if successful.) Let's assume the interface in question is bge0......

# ifconfig bge0 plumb
# ifconfig bge0 192.168.1.11 netmask 255.255.255.0
# ifconfig bge0 up && ifconfig bge0
# ping 192.168.1.99

(Note: I know you are asking about Solaris 9. This will work on Solaris 8, 9 and 10. Solaris 11 has a different network configuration system.)

Once you can ping your NFS store system you can issue your mount command to get access to your dump file. See one of my previous posts on this thread.

When I need to do this myself whilst executing DR I simply type in the above 4 commands one after the other in quick succession and the interface is up. If that doesn't work for you then tell me what happens (error message please).

Hope that helps.

Awesome.. OK I can ping the NAS. Trying:

# mount -F nfs <ip address>:<nfs share handle> /mnt

but I do not know the handle. How can I find it? On the other machine I'm looking at fstab for it, but several things look like they might be it.

Also, I really would like to get rid of NOTRAILERS if possible..

Thanks!

Hmmmmm.......if you did the ufsdump from the production box then it's the same handle to recover to the new box. Somebody mounted the remote volume on the NAS to ufsdump to. What handle was used?

# showmount <NAS ip address>

should show you what the NAS is prepared to offer you but it might be more than one handle. Also, depending on what make of NAS, the NAS may require a password for the handle before it allows access.

Who manages the NAS?

Once mounted you should be able to change to that remote directory and list the files available which should include your dumpfile.

Remember to change back to the top of your new (empty apart from lost+found) filesystem before issuing the ufsrestore command.
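
Something like this, for instance (the dump file name is a placeholder; the ufsrestore tf step just lists the contents so you can confirm it's the right dump before committing to the full restore):

# cd /a
# ufsrestore tf /mnt/<dumpfile> | more
# ufsrestore rf /mnt/<dumpfile>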

Also, the NOTRAILERS won't be there once the system is fully restored and rebooted. It will look just like your production system (if it's on the same model hardware). I suggest that you put up with it for the relatively short time it takes to complete the restore.

I think I over-explained.. I don't know what a "handle" is. If I use mount on the existing machine, I see

<Directory1> on <NAS1:>

yada yada yada. Is NAS1 the handle? Or, in that context, what does a "handle" look like?

On my existing machine, I used showmount and it just showed other IP addresses.

Thanks!

An NFS "handle" is like a 'sharename' in Microsoft speak.

So I would say that NAS1 is not the handle, more likely it is Directory1 of your output.

After you can ping your NAS I would expect:

# showmount <NAS ip address>

e.g.

# showmount 192.168.1.99

to show you what NFS handles are available on that (NAS) ip address.

What does this say:

# mount <NAS ip address>:<Directory1> /mnt

Try it. You can't do any damage, it will just be refused if it's not right.

Actually it's

showmount -e <NAS ip address>

@MadeInGermany......Yes, of course, thanks. Glad somebody is awake!

Yeah, I'm getting

Permission denied

Tried looking at

/etc/dfs/dfstab
/etc/exports

but they don't exist.

Where would I put the password?

---------- Post updated at 10:44 AM ---------- Previous update was at 10:40 AM ----------

Also, thanks.. adding -e to showmount seems to have done the trick.

---------- Post updated at 10:48 AM ---------- Previous update was at 10:44 AM ----------

To clarify: by "done the trick" I mean I found the handle... I still have the permission issue..

Before we go looking at potential password issues, there are a few things to check out first.

Currently NFS comes in several versions (2, 3 and 4), and the version needs to be matched between communicating nodes. Also, we may be being denied write access, but we don't need write access to do a restore.

So lets try:

# mount -F nfs -o vers=3 -r <NAS ip address>:<handle> /mnt

and try vers=2 and vers=4 too.

The -r means we will accept a read-only connection.

Try all combinations and see what happens.

Yes, some NAS units can be configured to require a password, but that is unlikely in this case if you didn't need one to mount for the dump.