How to identify if a disk is attached to SAN, and assist in migration?

I am working on a VM host and collecting data to identify the type of storage attached to the server, which will be migrated to VNX.

It has one LDOM created on it.

luxadm probe output ---

No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:5006048452a63ec9  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t60060480000290101499533031393942d0s2
  Node WWN:5006048452a63ec9  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t60060480000290101499533031303738d0s2
  Node WWN:5006048452a63ec9  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t60060480000290101499533031383245d0s2
  Node WWN:5006048452a63ec9  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t60060480000290101499533030373937d0s2
  Node WWN:5006048452a63ec9  Device Type:Disk device
    Logical Path:/dev/rdsk/c3t60060480000290101499533030463745d0s2

df -k output ------

df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0    15490539 5582761 9752873    37%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 33497072    1552 33495520     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
sharefs                    0       0       0     0%    /etc/dfs/sharetab
/platform/SUNW,SPARC-Enterprise-T2000/lib/libc_psr/libc_psr_hwcap1.so.1
                     15490539 5582761 9752873    37%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T2000/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                     15490539 5582761 9752873    37%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c1t0d0s3    10326908 4680729 5542910    46%    /var
swap                 33498728    3208 33495520     1%    /tmp
swap                 33495568      48 33495520     1%    /var/run
/dev/dsk/c1t0d0s4    5163446  199027 4912785     4%    /home
/dev/dsk/c1t0d0s5    6089342 1049125 4979324    18%    /var/crash

echo | format output ------------------------------

Searching for disks...done

c3t60060480000290101499533030383033d0: configured with capacity of 8.43GB
c3t60060480000290101499533031393942d0: configured with capacity of 42.14GB
c3t60060480000290101499533030373937d0: configured with capacity of 42.14GB


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 136>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
       1. c3t60060480000290101499533031333046d0 <EMC-SYMMETRIX-5772 cyl 9205 alt 2 hd 15 sec 128>
          /scsi_vhci/ssd@g60060480000290101499533031333046
       2. c3t60060480000290101499533031333430d0 <EMC-SYMMETRIX-5772 cyl 46033 alt 2 hd 15 sec 128>
          /scsi_vhci/ssd@g60060480000290101499533031333430
       3. c3t60060480000290101499533031333336d0 <EMC-SYMMETRIX-5772 cyl 46033 alt 2 hd 15 sec 128>
          /scsi_vhci/ssd@g60060480000290101499533031333336
       4. c3t60060480000290101499533031333342d0 <EMC-SYMMETRIX-5772 cyl 46033 alt 2 hd 15 sec 128>
          /scsi_vhci/ssd@g60060480000290101499533031333342
       5. c3t60060480000290101499533031303035d0 <EMC-SYMMETRIX-5772 cyl 46033 alt 2 hd 15 sec 128>
          /scsi_vhci/ssd@g60060480000290101499533031303035
       6. c3t60060480000290101499533031303041d0 <EMC-SYMMETRIX-5772 cyl 46033 alt 2 hd 15 sec 128>
          /scsi_vhci/ssd@g60060480000290101499533031303041

I need to understand if the above disks are being used by the OS. Please provide your inputs. I have never worked on Solaris before; I have a good understanding of HP-UX, but this looks entirely different from HP-UX.

Solaris to EMC Storage connectivity is a big subject, so much so that EMC issued a book on it. Whilst fully respecting EMC's Copyright to this material, I uploaded a copy to this forum over 5 years ago. You can find it attached to this thread:

If you are well experienced with Unix/HP-UX this might be all you need along with a good supply of coffee.


Thanks hicksd8. To be frank, I am not good with Solaris at all; I really don't understand basic outputs like the ones posted above. I am not able to tell whether the disks are used anywhere. Is there a simple way to know where a disk is used, i.e. for which filesystem?

Once this report is prepared there is the mammoth task of migrating to VNX, which I have no clue about!

Thanks again.

Is this Solaris 10 or 11?

Do you know whether the filesystems are UFS or ZFS?

Looking at file /etc/vfstab will tell you what non-ZFS filesystems the O/S is going to mount on which mount points at boot time.
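For example, two quick checks on the host (the c3t pattern below is just taken from your 'luxadm probe' output, adjust as needed):

# grep c3t /etc/vfstab            (shows whether any of the SAN LUNs are mounted directly at boot)
# zpool status                    (lists any ZFS pools and the disks they are built on)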


Run 'luxadm display' against each of the 'luxadm probe' outputs; e.g.

# luxadm display /dev/rdsk/c3t60060480000290101499533031393942d0s2

DEVICE PROPERTIES for disk: /dev/rdsk/c3t60060480000290101499533030374535d0s2
  Vendor:               EMC
  Product ID:           SYMMETRIX
  Revision:             5773
  Serial Num:           1014997E5000
  Unformatted capacity: 43157.812 MBytes
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c3t60060480000290101499533030374535d0s2
  /devices/scsi_vhci/ssd@g60060480000290101499533030374535:c,raw
   Controller           /devices/pci@780/pci@0/pci@8/SUNW,qlc@0/fp@0,0
    Device Address              5006048452a63ec9,60
    Host controller port WWN    2100001b3212ffda
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0
    Device Address              5006048452a63ec6,60
    Host controller port WWN    2100001b3212ccdb
    Class                       primary
    State                       ONLINE


The above command shows two paths from two different HBAs; it is similar for the other devices as well.

This is a VM host; the vfstab output is:

cat /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      logging
/dev/dsk/c1t0d0s3       /dev/rdsk/c1t0d0s3      /var    ufs     1       no      logging
/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /home   ufs     2       yes     logging
/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /var/crash      ufs     2       yes     logging
/devices        -       /devices        devfs   -       no      -
sharefs -       /etc/dfs/sharetab       sharefs -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -

I am planning to do an online storage migration following the steps below; the host has one LDOM.

  1. Remove 1 FC path
  2. Assign the LUNs from the new storage
  3. Scan the devices
  4. Mirror it (don't know the command; rough guess below)
  5. Break the mirror and remove the old storage LUN
  6. Attach the 2nd FC path
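For steps 4 and 5 my rough, untested guess is that, if the data sits in a ZFS pool on this host, something like the following would do the mirror and the split (pool and LUN names are placeholders):

# zpool attach <pool> <old_lun> <new_lun>     (attach the new VNX LUN as a mirror of the old one)
# zpool status <pool>                         (wait for the resilver to complete)
# zpool detach <pool> <old_lun>               (break the mirror and drop the old LUN)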

Will this work?

Your vfstab does not show mount points for these LUNs. They might carry another filesystem such as ZFS. Do you see any output when issuing "zpool status"?

This is a VM host and it has Veritas Cluster on it. I need to understand how I can migrate it to the new storage.

root@vmhost> ldm list-services |more
VCC
    NAME             LDOM             PORT-RANGE
    primary-vcc0     primary          5000-5031

VSW
    NAME             LDOM             MAC               NET-DEV   DEVICE     DEFAULT-VLAN-ID PVID VID                  MODE
    vlan090-vsw0     primary          00:14:4f:fa:97:6e e1000g3   switch@2   1               1
    vlan090-vsw1     primary          00:14:4f:fb:e3:e8 nxge3     switch@3   1               1
    private-vsw0     primary          00:14:4f:fa:b2:4d e1000g2   switch@4   1               1
    private-vsw1     primary          00:14:4f:fa:b8:8e nxge2     switch@5   1               1
    vlan050-vsw0     primary          00:14:4f:fa:23:5e e1000g0   switch@0   1               1
    vlan050-vsw1     primary          00:14:4f:f9:a8:ac nxge0     switch@1   1               1

VDS
    NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
    primary-vds0     primary          lb001-server1                                 /dev/rdsk/c3t60060480000290101499533031303738d0s2
                                      la050-jetstart                                 /dev/rdsk/c3t60060480000290101499533031303030d0s2
                                      la051-jetstart                                 /dev/rdsk/c3t60060480000290101499533031303035d0s2
                                      la052-jetstart                                 /dev/rdsk/c3t60060480000290101499533031303041d0s2
                                      la061-slavedns                                 /dev/rdsk/c3t60060480000290101499533031333135d0s2
                                      la056-masterdns                                 /dev/rdsk/c3t60060480000290101499533031333044d0s2
                                      la078-sunadmin                                 /dev/rdsk/c3t60060480000290101499533031333435d0s2
                                      la064-sunadmin                                 /dev/rdsk/c3t60060480000290101499533031333141d0s2

Identify the new LUNs attached to the hypervisor (echo | format).
Label the new LUNs and partition them per your choosing or requirements via the format command.
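For example, something like this (the LUN name below is a placeholder for one of the new VNX devices):

# echo | format                               (new, unlabelled LUNs show up at the top of the output)
# format -d c3t<new_VNX_LUN>d0                (then use the 'label' and, if needed, 'partition' menu options)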

First, you assign the new LUNs to the hypervisor's virtual disk service (vds).
In your case that is primary-vds0.

ldm add-vdsdev /dev/rdsk/.... <name>@primary-vds0
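For example (the device path and volume name here are placeholders, not your actual LUNs):

# ldm add-vdsdev /dev/rdsk/c3t<new_VNX_LUN>d0s2 lb001-new@primary-vds0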

Check the current configuration of the virtual machine:

ldm list -l <name of virtual machine, returned from ldm list, not primary>
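For example (ldom1 below is a placeholder for whatever guest name 'ldm list' returns):

# ldm list
# ldm list -l ldom1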

Then add the disk to the virtual machine (LDOM); the id is optional, and the next available number is chosen if it is not specified.

ldm add-vdisk [id=N] <name> <name>@primary-vds0 <name of virtual machine>
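For example, continuing with the placeholder names from above:

# ldm add-vdisk id=10 vdisk10 lb001-new@primary-vds0 ldom1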

After that, log in to that virtual machine via telnet from the hypervisor or over the network, check dmesg for the devices created at that time, and check the format output.
If you specified the ID of the new disk(s), those will (probably) be c1d<ID>s2 in the LDOM, or they will get the next available number otherwise.
The controller may be c1, c2, etc., depending on the devices present during the initial install.
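For example, inside the guest (device names will depend on your install):

# dmesg | tail                                (look for the new vdisk/vdc device messages)
# echo | format                               (the new disk should appear, e.g. as c0d10 or c1d10 if you used id=10)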

The work is now done inside the virtual machine: you must determine which volume manager is used (VxVM, ZFS, UFS?).
Continue with the documentation on that subject, taking into consideration that this is a clustered environment.
The disks will probably need to be added to the other members of the cluster.
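A quick way to see which volume manager owns the disks inside the guest (assuming the respective tools are installed):

# zpool status                                (ZFS pools and their member disks)
# vxdisk list                                 (VxVM/Veritas disks, if VxVM is installed)
# metastat -p                                 (SVM metadevices, if SVM is in use)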

This is not a trivial task, so test it first in a test environment if you are unsure of the outcome.

Hope that helps
Regards
Peasant.