Disks are not visible in Veritas Volume Manager

Hi

I have created a new VCS setup for some testing in VirtualBox. I created three Solaris 10 machines (147148-26) with VCS, and I am using one of those machines as iSCSI storage.

The VCS version is:

bash-3.2# /opt/VRTS/bin/haclus -value EngineVersion
6.0.10.0
bash-3.2#
bash-3.2# pkginfo -l VRTSvcs
   PKGINST:  VRTSvcs
      NAME:  Veritas Cluster Server by Symantec
  CATEGORY:  system
      ARCH:  i386
   VERSION:  6.0.100.000
   BASEDIR:  /
    VENDOR:  Symantec Corporation
      DESC:  Veritas Cluster Server by Symantec
    PSTAMP:  6.0.100.000-GA-2012-07-20-16.30.01
  INSTDATE:  Nov 06 2019 19:40
    STATUS:  completely installed
     FILES:      278 installed pathnames
                  26 shared pathnames
                  56 directories
                 116 executables
              466645 blocks used (approx)

bash-3.2#

Info for the first VCS node:

bash-3.2# echo |format
Searching for disks...
Inquiry failed for this logical diskdone


AVAILABLE DISK SELECTIONS:
       0. c0d0 <�-'x�-'�-'�-'�-'�-'�-'�-'�-'�-'@�-'�-'�-' cyl 5242 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
       2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
       3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
       4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
       5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
       6. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
       7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 2608 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
       8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
       9. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# uname -a
SunOS node1 5.10 Generic_147148-26 i86pc i386 i86pc
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2823A00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

Info for the second node:

bash-3.2# echo|format
Searching for disks...
Inquiry failed for this logical diskdone


AVAILABLE DISK SELECTIONS:
       0. c0d0 <SUN    -SOLARIS        -1   cyl 5242 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2t600144F05DC281F100080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281f100080027e84b7300
       2. c2t600144F05DC281FF00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc281ff00080027e84b7300
       3. c2t600144F05DC2822B00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2822b00080027e84b7300
       4. c2t600144F05DC2823A00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2823a00080027e84b7300
       5. c2t600144F05DC2825E00080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825e00080027e84b7300
       6. c2t600144F05DC2825400080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1303 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2825400080027e84b7300
       7. c2t600144F05DC2827000080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 2608 alt 2 hd 255 sec 63>
          /scsi_vhci/disk@g600144f05dc2827000080027e84b7300
       8. c2t600144F05DC2820900080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2820900080027e84b7300
       9. c2t600144F05DC2821500080027E84B7300d0 <SUN    -SOLARIS        -1   cyl 1021 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05dc2821500080027e84b7300
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281FF00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

Only the OS disk and a single LUN are showing in the vxdisk output, and the LUN IDs shown by vxdisk are different on the two nodes. I have run "vxdisk enable", "vxdisk scandisks" and devfsadm, and have even taken reconfiguration reboots multiple times, but I still don't get all the disks to show up in vxdisk list. How can I make all the disks visible in the vxdisk list output?
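
For reference, the rescan sequence I have been repeating on each node looks roughly like this (a rough sketch rather than a verbatim transcript; vxdctl enable is the usual full device-discovery rescan):

devfsadm -v -C -c disk   # rebuild the /dev/dsk and /dev/rdsk links
vxdctl enable            # restart VxVM device discovery (vxconfigd rescan)
vxdisk scandisks         # rescan the OS device tree for new or changed disks
vxdisk -e list           # re-check what VxVM can see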

My initial thoughts are:

  1. Check cabling and disk jumpers for addressing conflicts.
  2. Check that the disk labels are compatible with VCS.
  3. Check that the disk mode pages have not been left in an inconsistent state from previous use.

Point (1) speaks for itself. For (2), you could rewrite the disk labels to ensure they are compatible with VCS; they probably need to be Sun (SMI) labels, but check that out. For (3), disks are highly programmable devices, and their mode pages can lock a disk out from inquiry by any host other than the one the disk thinks it is locked to (as in a cluster failover). As you will know, only one node can read and write any volume at a time, otherwise corruption results. This can leave disks in a locked state, so select the option "set all mode pages to default" to clear all settings.

So I would look at 1, 2 and 3 first. Remember, to do 2 and/or 3 you need to run format in expert mode. By default, Solaris format doesn't offer those menu options. Add the -e switch:

# format -e

to run format in expert mode. You're telling Solaris that you're an expert so you'd better be one.
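
For the relabel step, the prompts in expert mode look roughly like this (menu text approximated from memory, so treat it as a sketch; select the disk you want from the selection list first):

# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y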

Hi Hicksd8

I think we can ignore the first point (cabling and disk jumper addressing conflicts), as I am running this in VirtualBox.

Further, I have labelled all the disks with SMI labels, used "set all mode pages to default", and taken another reconfiguration reboot, and the issue is still the same.

On Node1

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -       
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#
bash-3.2#

On Node2

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

--- Post updated at 01:37 PM ---

Note: this time the LUN ID is the same on both nodes when we run the

vxdisk -e list

command, and this LUN ID is also different from the two LUN IDs shown in my previous output.

--- Post updated at 01:50 PM ---

I have just noticed that after taking a reconfiguration reboot, the LUN IDs shown in the vxdisk output are different again.

On Node one

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825E00080027E84B7300d0s2 -       
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

On Node Two

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281FF00080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

--- Post updated at 02:11 PM ---

I took another reboot; now one node shows the same disk as before, and the other node shows a new disk.

On node one

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -       
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

On another node

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC281F100080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

Perhaps some /dev/ links are missing (that point to the /devices/ paths)?

man devfsadm

Usually one can do

devfsadm -v -C -c disk

to rebuild the /dev/ links.
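
If the links are there, it may also be worth confirming that the iSCSI initiator itself still reports every LUN and that the links for the iSCSI devices exist; roughly (Solaris 10 iscsiadm; the c2t600144F0... names are taken from your format output):

iscsiadm list target -S            # show discovered targets and the LUNs behind them
ls -l /dev/rdsk/c2t600144F0*d0s2   # confirm the /dev links for the iSCSI LUNs are present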

Hi

I have run the mentioned commands, but the issue still persists:

devfsadm -v -C -c disk
vxdisk enable
vxdisk scandisks

On my first node, this is the output after running the above commands:

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2821500080027E84B7300d0s2 -          
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#

On my second node, this is the output after running the above commands:

bash-3.2# vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR
aluadisk0_0  auto:none      -            -           online invalid       c2t600144F05DC2825400080027E84B7300d0s2 -
c0d0s2       auto:ZFS       -            -           ZFS                  c0d0s2           -
bash-3.2#
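
Since the VxVM device is always aluadisk0_0 on both nodes and only the OS_NATIVE_NAME changes, one thing I still need to check is which OS paths DMP has grouped under that single device. A rough sketch of the commands (standard vxdmpadm/vxdisk utilities; aluadisk0_0 is the device name from the outputs above):

vxdmpadm listenclosure all                      # how DMP has classified the iSCSI enclosure
vxdmpadm getsubpaths dmpnodename=aluadisk0_0    # OS paths grouped under the single DMP node
vxdisk list aluadisk0_0                         # detailed view of the device, including its udid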