Moving file systems from one server to another

Hi

I have a server running Solaris 10 with UFS file systems residing on a NetApp storage system, and I need to move all of those file systems to another Solaris 10 server.
The normal procedure to create a file system is format, newfs, mkdir and mount. But in this case I cannot run newfs because that would destroy the data already on those LUNs.
Any hints on how to go about it?

Well, being professional, the first thing you need to do before you mess with anything is to back up the lot. UFS filesystems are dumped by using fssnap to take a snapshot (i.e. freeze) of a filesystem, which outputs a special device name. You then use that special device name to ufsdump the whole filesystem to backup media (tape, external drive, whatever). If you don't do that first and anything goes seriously wrong, you are stuffed. Your data is the most important thing!
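As a rough sketch of that backup step (the mount point /data1, the backing-store file and the tape device are placeholders, substitute your own):

```shell
# Freeze the filesystem; fssnap prints the snapshot device it creates,
# e.g. /dev/fssnap/0 (raw device /dev/rfssnap/0). The backing store must
# be on a DIFFERENT filesystem with room for changes made during the dump.
fssnap -F ufs -o bs=/var/tmp/snap_backing /data1

# Level 0 (full) dump of the frozen image to tape (or a file/external disk)
ufsdump 0uf /dev/rmt/0 /dev/rfssnap/0

# Delete the snapshot once the dump completes
fssnap -d /data1
```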

Now, if you are saying that all filesystems are on a SAN, then you should be able to get the storage boys to offer the LUNs to your new box. The main thing is whether that new box will boot from the root filesystem without error, since I'm assuming it's not identical hardware, so different drivers might need to be loaded. That might take a few tricks. Also, the actual device nodes (e.g. c0t0d0s0) might be different, but there are ways to get around that. You will need to manually update files like /etc/vfstab, /etc/system, etc., once the LUNs are swung across to the new box. This way you don't necessarily need to restore anything, if you can get away with it.

Alternatively, you get your storage team to allocate new LUNs of similar capacity for each and every filesystem, install Solaris from installation media, and then restore each filesystem from its ufsdump file. You will still need to tackle the issues surrounding different hardware and incorrect drivers being restored.
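The restore side of that route would look roughly like this (the new LUN slice c9t5d0s6 and the tape device are illustrative names only):

```shell
# Only on the NEW, empty LUN -- never on a LUN that still holds data!
newfs /dev/rdsk/c9t5d0s6

# Mount it somewhere temporary and restore the dump into it
mount /dev/dsk/c9t5d0s6 /mnt
cd /mnt
ufsrestore rf /dev/rmt/0

# ufsrestore leaves a work file behind after a full restore; remove it
rm restoresymtable
```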

HOWEVER, provided you have taken ufsdump backups of each and every filesystem, you can recover if anything goes bang. Just be totally professional and back everything up before you mess with it.

Hope that helps.

Hi
Thanks for the reply. Yes, it's identical hardware: the origin server is a SPARC T3-1B, and the destination server is also a SPARC T3-1B in the same Sun Blade 6000 chassis. They also have identical root disks and identical operating systems (Solaris 10) already installed. The NetApp admin told me he can unmap the LUNs from the origin server and map them to the new server. The difficulty is that once the LUNs are visible on the destination server via the format command, I will not be able to newfs them because they already contain data...

Is there something that I'm not understanding here?

If the hardware is identical and you shut down the old system in an orderly manner, then once the LUNs are remapped and seen by the new server, you should be able to just mount them. The data is already on there and coherent for each filesystem. Ensure /etc/vfstab is copied over correctly. You might need to edit the device node names (/dev/dsk, /dev/rdsk) should they change, and/or recreate the device nodes (using devfsadm), but that's all. A simple remap from one machine to another shouldn't cause any filesystem damage; the UFS filesystems should be remountable on the new system.
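To illustrate, a vfstab entry and the surrounding commands might look like this on the new box (the device c9t4d0s6 and mount point /data1 are examples, use whatever names the new server actually sees):

```shell
# Example /etc/vfstab line for one remapped LUN:
#
# device to mount    device to fsck      mount point  FS type  fsck pass  mount at boot  options
# /dev/dsk/c9t4d0s6  /dev/rdsk/c9t4d0s6  /data1       ufs      2          yes            -

# If the device nodes changed names, rebuild /dev and /devices first
devfsadm

# Check the filesystem, then mount it via the vfstab entry
fsck /dev/rdsk/c9t4d0s6
mount /data1
```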

Regardless though, backup first in case anything does go wrong. Be professional about it.

You did understand everything. What I did not tell you is that the origin server has crashed, which is the reason we need to move the LUNs from that server to another.

That should still be okay. Perhaps some or all of the filesystems will need fsck run on them before they will mount. Other than that, it should work. However, you cannot easily back up first, so you might have to rely on the last backup before the crash. I assume the system hasn't been up since then, so little work will be lost; in fact, if you can fsck and mount the remapped LUNs, no work will be lost.

the first LUN mapped on the other server (destination) can now be seen using the format command:

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t5000C5003A0028FFd0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /scsi_vhci/disk@g5000c5003a0028ff
       1. c0t5000C5003A034CFFd0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /scsi_vhci/disk@g5000c5003a034cff
       2. c9t4d0 <NETAPP-LUN-8020 cyl 5630 alt 2 hd 16 sec 256>
          /iscsi/disk@0000iqn.1992-08.com.netapp%3Asn.14224149003E8,0
Specify disk (enter its number): ^D
bash-3.00#

It's the NetApp LUN.
From here, should I go through the normal procedure of creating a file system, which is using partition, print, slice 6 and label, resulting in a /dev/dsk/c9t4d0s6?
Can I follow this procedure?

You should be able to select c9t4d0 in 'format' and print its partition (slices).

If you can do that then you can attempt to 'fsck' those filesystems,

for example:

# fsck /dev/rdsk/c9t4d0s0

If you re-slice, label or newfs you will wipe out the data!

Try to just 'fsck' and then 'mount' the LUN.

If within 'format' you:

Select the NetApp Lun c9t4d0

Enter option 'p'

Enter option 'p' (again)

What output do you get? Does it output a slice table?

DON'T use 'format' to change anything on the disk at this time. Just 'Quit' out.

this is what I get:

bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t5000C5003A0028FFd0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /scsi_vhci/disk@g5000c5003a0028ff
       1. c0t5000C5003A034CFFd0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /scsi_vhci/disk@g5000c5003a034cff
       2. c9t4d0 <NETAPP-LUN-8020 cyl 5630 alt 2 hd 16 sec 256>
          /iscsi/disk@0000iqn.1992-08.com.netapp%3Asn.14224149003E8,0
Specify disk (enter its number): 2
selecting c9t4d0
[disk formatted]


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> p


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition>

partition> p
Current partition table (original):
Total disk cylinders available: 5630 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -   63      128.00MB    (64/0/0)     262144
  1       swap    wu      64 -  127      128.00MB    (64/0/0)     262144
  2     backup    wu       0 - 5629       11.00GB    (5630/0/0) 23060480
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6        usr    wm     128 - 5629       10.75GB    (5502/0/0) 22536192
  7 unassigned    wm       0               0         (0/0/0)           0

partition>


I believe it should be /dev/dsk/c9t4d0s6 ....

Yes, well that proves that the LUN is intact and perfectly readable.

Only you will know what these slices are. You should have the relevant filesystem locations documented. However, yes, slice 6 is the largest and certainly looks relevant so I would attempt to 'fsck' that device:

# fsck -n /dev/rdsk/c9t4d0s6

As always, use the -n switch (make no changes) initially to explore what damage, if any, there is. If only a few problems are flagged, run again without -n and correct them when asked.


Slice 2 on Solaris always represents the whole disk and you should NEVER try to do anything with that.


It worked perfectly:

bash-3.00# fsck -n /dev/rdsk/c9t4d0s6
** /dev/rdsk/c9t4d0s6 (NO WRITE)
** Last Mounted on /fs1
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3a - Check Connectivity
** Phase 3b - Verify Shadows/ACLs
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cylinder Groups
277 files, 16163 used, 11069777 free (185 frags, 1383699 blocks, 0.0% fragmentation)
bash-3.00#
bash-3.00# mount /dev/dsk/c9t4d0s6 /data1
bash-3.00#
/dev/md/dsk/d50        9.8G   378M   9.4G     4%    /opt
/dev/md/dsk/d60        112G   6.1G   105G     6%    /internaldisk
/dev/dsk/c9t4d0s6       11G    27M    10G     1%    /data1
bash-3.00# ls -lrt /data1
total 88
drwx------   2 root     root        8192 Oct 24  2012 lost+found
-rwxr-xr-x   1 nagios   103         7306 Jul 11  2017 fonseca1.sh
drwxrwxrwx   2 nagios   103          512 Feb 20 09:06 AG
drwsrwsrwt   4 nagios   103        27648 May 22 16:11 danilo
bash-3.00#

thank you very much,
I'm now going to do the same for the rest of them.

Thank you again

You are welcome.

Be aware that when you try to mount c9t4d0s6 it might complain that the device (e.g. /dev/dsk/c9t4d0s6) does not exist on the root filesystem you are booted from. If so, we'll need to get the system to create the required device nodes, which we can do later.

You might also need to create mount points (i.e. top level directories) using mkdir if they are also not there already.

It all depends on whether or not your root disk is an exact copy from the old system. Let's see how it goes.

EDIT: Okay I see that you already successfully mounted c9t4d0s6. Looking good!

I missed that. Should I just create a directory in /dev called dsk?

No, DON'T create ANYTHING unless it complains that something isn't there.

Post here if you get any error messages that stuff is missing.


Hi Fretagi,

The LUN will have come over complete with all the information; there is no need to format or newfs the device. All that is required is to ensure that /etc/vfstab is correct. If the information is not available from the old server, then you could mount the slices one at a time and examine the contents.

Having read the full thread, I now see that you have managed to mount the disk. As to your point about the /dev/dsk structure, the devfsadm command will do that for you.

Regards

Gull04
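For reference, a minimal sketch of what that looks like (the device name c9t4d0s6 is taken from earlier in the thread; yours may differ):

```shell
# Rebuild any missing /dev and /devices entries for newly visible LUNs
devfsadm

# Optionally clean up dangling links left over from the old configuration
devfsadm -C

# Verify the expected node now exists
ls -l /dev/dsk/c9t4d0s6
```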

If you have enough space on your internal disks, create a ZFS pool backed by a file (or a slice, if you have one available):
# mkfile 100m /file1 (i.e. 100 MB)
# zpool create vlkpool /file1
Copy your files from UFS to vlkpool.
Use zfs send/receive to send vlkpool to the destination Solaris 10 server.
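Putting those steps together, a sketch might look like this (the pool name vlkpool, backing file, dataset names, source mount point /data1 and host destserver are all placeholders):

```shell
# File-backed pool on the internal disk (100 MB here; size to fit your data)
mkfile 100m /file1
zpool create vlkpool /file1

# Dataset to hold the copied files, then copy the UFS contents in
zfs create vlkpool/data
cp -rp /data1/* /vlkpool/data/

# Snapshot the dataset and stream it to the destination server over ssh
zfs snapshot vlkpool/data@move
zfs send vlkpool/data@move | ssh destserver zfs receive destpool/data
```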