VG and LV problem

Hi all,

I have a client with an rx4640 running HP-UX 11i v2; he says the server has problems.

I checked and I couldn't activate the VGs.

I am somewhat new to HP-UX and I don't want to do anything that would mess things up. I am not sure this is what is really happening, but from what I understood from the outputs below, the physical volumes are not associated with the VGs (excuse my HP-UX newbie grammar).

Below you may find the output of some of the commands.
Any help in identifying and solving the problem would be appreciated.
Have a good day everyone!

LUNHINGA:[/]#vgdisplay -v
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      9
Open LV                     9
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               4238
VGDA                        4
PE Size (Mbytes)            8
Total PE                    8456
Alloc PE                    8384
Free PE                     72
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

   --- Logical volumes ---
   LV Name                     /dev/vg00/lvol1
   LV Status                   available/syncd
   LV Size (Mbytes)            304
   Current LE                  38
   Allocated PE                76
   Used PV                     2

   LV Name                     /dev/vg00/lvol2
   LV Status                   available/syncd
   LV Size (Mbytes)            4096
   Current LE                  512
   Allocated PE                1024
   Used PV                     2

   LV Name                     /dev/vg00/lvol3
   LV Status                   available/syncd
   LV Size (Mbytes)            520
   Current LE                  65
   Allocated PE                130
   Used PV                     2

   LV Name                     /dev/vg00/lvol4
   LV Status                   available/syncd
   LV Size (Mbytes)            2048
   Current LE                  256
   Allocated PE                512
   Used PV                     2

   LV Name                     /dev/vg00/lvol5
   LV Status                   available/syncd
   LV Size (Mbytes)            96
   Current LE                  12
   Allocated PE                24
   Used PV                     2

   LV Name                     /dev/vg00/lvol6
   LV Status                   available/syncd
   LV Size (Mbytes)            4912
   Current LE                  614
   Allocated PE                1228
   Used PV                     2

   LV Name                     /dev/vg00/lvol7
   LV Status                   available/syncd
   LV Size (Mbytes)            6336
   Current LE                  792
   Allocated PE                1584
   Used PV                     2

   LV Name                     /dev/vg00/lvol8
   LV Status                   available/syncd
   LV Size (Mbytes)            4984
   Current LE                  623
   Allocated PE                1246
   Used PV                     2

   LV Name                     /dev/vg00/lvolswap
   LV Status                   available/syncd
   LV Size (Mbytes)            10240
   Current LE                  1280
   Allocated PE                2560
   Used PV                     2


   --- Physical volumes ---
   PV Name                     /dev/dsk/c2t1d0s2
   PV Status                   available
   Total PE                    4228
   Free PE                     36
   Autoswitch                  On

   PV Name                     /dev/dsk/c2t0d0s2
   PV Status                   available
   Total PE                    4228
   Free PE                     36
   Autoswitch                  On


vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgdvqatr".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vglunsap".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgr3dev0".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgr3qas0".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgr3trn0".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgr3trn1".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgr3dev1".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgr3qas1".
LUNHINGA:[/]#bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3     532480  307432  223320   58% /
/dev/vg00/lvol1     311296  181656  128656   59% /stand
/dev/vg00/lvol8    5103616 3449720 1641880   68% /var
/dev/vg00/lvol7    6488064 2562616 3894800   40% /usr
/dev/vg00/lvol4    2097152  852112 1236224   41% /tmp
/dev/vg00/lvol6    5029888 3606800 1411992   72% /opt
/dev/vg00/lvol5      98304   93880    4424   95% /home
LUNHINGA:[/]#vgchange -a y
Volume group "/dev/vg00" has been successfully changed.
vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t0d4":
Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t0d4":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgdvqatr":
Quorum not present, or some physical volume(s) are missing.

vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t0d5":
The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t0d5":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t0d5":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t0d5":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t0d5":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vglunsap":
Quorum not present, or some physical volume(s) are missing.

vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t0d1":
Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t0d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t0d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t0d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t0d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgr3dev0":
Quorum not present, or some physical volume(s) are missing.


vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t0d2":
Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t0d2":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t0d2":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t0d2":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t0d2":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgr3qas0":
Quorum not present, or some physical volume(s) are missing.


vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t0d3":
Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Cross-device link
vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t0d3":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t0d3":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t0d3":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t0d3":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgr3trn0":
Quorum not present, or some physical volume(s) are missing.

vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t1d0":
The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t1d0":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t1d0":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t1d0":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t1d0":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgr3trn1":
Quorum not present, or some physical volume(s) are missing.

vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t0d6":
The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t0d6":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t0d6":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t0d6":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t0d6":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgr3dev1":
Quorum not present, or some physical volume(s) are missing.


vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c8t1d1":
The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: The HP-UX kernel running on this system does not provide this feature.
Install the appropriate kernel patch to enable it.

vgchange: Warning: couldn't query physical volume "/dev/dsk/c8t1d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c9t1d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c10t1d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query physical volume "/dev/dsk/c11t1d1":
The specified path does not correspond to physical volume attached to
this volume group
vgchange: Warning: couldn't query all of the physical volumes.
vgchange: Couldn't activate volume group "/dev/vgr3qas1":
Quorum not present, or some physical volume(s) are missing.
LUNHINGA:[/]#ioscan -fnC disk
Class     I  H/W Path       Driver     S/W State   H/W Type     Description
============================================================================
disk      0  0/0/3/0.0.0.0  sdisk      CLAIMED     DEVICE       TEAC    DV-28E-N
                           /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0
disk      1  0/1/1/0.0.0    sdisk      CLAIMED     DEVICE       HP 36.4GST336754LC
                           /dev/dsk/c2t0d0     /dev/rdsk/c2t0d0
                           /dev/dsk/c2t0d0s1   /dev/rdsk/c2t0d0s1
                           /dev/dsk/c2t0d0s2   /dev/rdsk/c2t0d0s2
                           /dev/dsk/c2t0d0s3   /dev/rdsk/c2t0d0s3
disk      2  0/1/1/0.1.0    sdisk      CLAIMED     DEVICE       HP 36.4GST336754LC
                           /dev/dsk/c2t1d0     /dev/rdsk/c2t1d0
                           /dev/dsk/c2t1d0s1   /dev/rdsk/c2t1d0s1
                           /dev/dsk/c2t1d0s2   /dev/rdsk/c2t1d0s2
                           /dev/dsk/c2t1d0s3   /dev/rdsk/c2t1d0s3
disk      8  0/2/1/0.1.0.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d1   /dev/rdsk/c8t0d1
disk     11  0/2/1/0.1.0.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d2   /dev/rdsk/c8t0d2
disk     13  0/2/1/0.1.0.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d3   /dev/rdsk/c8t0d3
disk     14  0/2/1/0.1.0.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d4   /dev/rdsk/c8t0d4
disk     10  0/2/1/0.1.1.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c9t0d1   /dev/rdsk/c9t0d1
disk     12  0/2/1/0.1.1.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c9t0d2   /dev/rdsk/c9t0d2
disk     15  0/2/1/0.1.1.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c9t0d3   /dev/rdsk/c9t0d3
disk     26  0/2/1/0.1.1.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c9t0d4   /dev/rdsk/c9t0d4
disk      7  0/4/2/0.2.0.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d1   /dev/rdsk/c10t0d1
disk     16  0/4/2/0.2.0.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d2   /dev/rdsk/c10t0d2
disk     18  0/4/2/0.2.0.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d3   /dev/rdsk/c10t0d3
disk     19  0/4/2/0.2.0.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d4   /dev/rdsk/c10t0d4
disk      9  0/4/2/0.2.1.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c11t0d1   /dev/rdsk/c11t0d1
disk     25  0/4/2/0.2.1.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c11t0d2   /dev/rdsk/c11t0d2
disk     17  0/4/2/0.2.1.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c11t0d3   /dev/rdsk/c11t0d3
disk     20  0/4/2/0.2.1.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c11t0d4   /dev/rdsk/c11t0d4

You have error messages for 32 different non-system discs, but ioscan only shows 16 non-system discs.
My first impression is to get an engineer to check the hardware.

I agree with methyl... unless you tell us more about the server's configuration, we cannot say more...
e.g. Are you on a SAN? With 2 HBAs? Using alternate paths? (It doesn't seem to be the case...)

Did you try using STM?

If the system has been cold started recently, it might have been powered up in the wrong order or with a disc cabinet missing.
In any case, you have errors for all 16 non-system discs mentioned in ioscan, plus 16 more discs which are not mentioned in ioscan.

Is this system in an MC/ServiceGuard cluster?

You still have not explained the configuration/architecture of this server: are the disks on a SAN? If not, how are they connected?
From the little I can see, it is RAID 5: what is doing the RAID? An external subsystem or a controller?
If it's external: I have seen cases where there were disk failures, but since no one was on site, nobody noticed the alarms until the subsystem crashed, having already used all its spares when yet another disk failed...

You should have an /etc/fstab file that says which filesystems should be mounted and how.
Then look at /etc/lvmrc to see how AUTO_VG_ACTIVATE is set.
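To make that concrete, here is a minimal sketch of the lvmrc check (the `check_lvmrc` helper name is made up for illustration; it just greps the flag, and the path defaults to the standard /etc/lvmrc):

```shell
# Sketch: report the AUTO_VG_ACTIVATE flag from an lvmrc-style file
# (defaults to /etc/lvmrc). If it prints AUTO_VG_ACTIVATE=0, boot
# relies on custom_vg_activation() rather than automatic activation.
check_lvmrc() {
    grep '^AUTO_VG_ACTIVATE=' "${1:-/etc/lvmrc}"
}
```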

When physical discs are formatted for LVM they are given a unique identity. This error message usually means that the unix device path points to a physical disc which is not the one expected. You have this error for all the non-system discs.

Until you can see all 32 discs in ioscan with their correct device addresses there is nothing sensible you can do with unix LVM commands.
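One low-risk way to see exactly which device files LVM wants but ioscan cannot see is to diff the two lists offline. A portable sketch, assuming you first save the device paths from the vgchange warnings and from ioscan into two text files (the file arguments and helper name are illustrative):

```shell
# Sketch: $1 = file with the device files named in the vgchange
# warnings, one per line; $2 = file with the block device files that
# ioscan actually lists. Prints the devices LVM expects but the
# system cannot see.
missing_devices() {
    sort -u "$1" > /tmp/want.$$
    sort -u "$2" > /tmp/have.$$
    comm -23 /tmp/want.$$ /tmp/have.$$   # lines only in the "wanted" list
    rm -f /tmp/want.$$ /tmp/have.$$
}
```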
If you have two SANs, is one of them dead?

We are assuming throughout that this is a standalone system which was previously working and suddenly failed, and which has no shared disc paths to another server. (I think that vbe has this idea in post #6).

@vbe: they are on EVA storage, which does the RAID 5.
It's a single server acting as a test server for the company; they try applications here before going live on the development servers.

@methyl: the weird thing is that I only have 16 disks in the EVA, so it's strange that it reports 32. All 16 disks of these VGs (vgdvqatr, vgr3dev0, vgr3qas0 and vgr3trn0) appear in the ioscan output, and they map to the same mount points in the fstab table and in the (old, saved) bdf output.

There are 5 servers connected to this EVA, and they are functioning properly.

Below is the output of cat /etc/fstab:

LUNHINGA:[/]#cat /etc/fstab
/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand vxfs tranflush 0 1
/dev/vg00/lvol4 /tmp vxfs delaylog 0 2
/dev/vg00/lvol5 /home vxfs delaylog 0 2
/dev/vg00/lvol6 /opt vxfs delaylog 0 2
/dev/vg00/lvol7 /usr vxfs delaylog 0 2
/dev/vg00/lvol8 /var vxfs delaylog 0 2
/dev/vg00/lvolswap ... swap pri=1 0 0
/dev/vgr3dev0/lvsapdata1 /oracle/DEV/sapdata1 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3dev0/lvsapdata2 /oracle/DEV/sapdata2 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3dev0/lvsapdata3 /oracle/DEV/sapdata3 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3dev0/lvsapdata4 /oracle/DEV/sapdata4 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3qas0/lvsapdata1 /oracle/QAS/sapdata1 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3qas0/lvsapdata2 /oracle/QAS/sapdata2 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3qas0/lvsapdata3 /oracle/QAS/sapdata3 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3qas0/lvsapdata4 /oracle/QAS/sapdata4 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3trn0/lvsapdata1 /oracle/TRN/sapdata1 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3trn0/lvsapdata2 /oracle/TRN/sapdata2 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3trn0/lvsapdata3 /oracle/TRN/sapdata3 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3trn0/lvsapdata4 /oracle/TRN/sapdata4 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3trn1/lvr3trn1 /oracle/TRN/sapdata5 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgdvqatr/lvdvqatr /oracle vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vglunsap/lvlunsap /usr/sap vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3dev1/lvr3dev1 /oracle/DEV/sapdata5 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/vgr3qas1/lvr3qas1 /oracle/QAS/sapdata5 vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
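For cross-checking, the VGs this fstab expects can be extracted mechanically. A small sketch (the `fstab_vgs` name is made up; when a /dev/&lt;vg&gt;/&lt;lv&gt; entry is split on "/", field 3 is the VG name):

```shell
# Sketch: list the distinct volume groups an fstab expects to mount
# (file defaults to /etc/fstab; comment lines are skipped).
fstab_vgs() {
    awk -F/ '$0 !~ /^#/ && $2 == "dev" { print $3 }' "${1:-/etc/fstab}" |
        sort -u
}
```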

And the output of cat /etc/lvmrc:

LUNHINGA:[/]#cat /etc/lvmrc
# @(#)B11.23_LR
# /etc/lvmrc
#
# This file is sourced by /sbin/lvmrc. This file contains the flags
# AUTO_VG_ACTIVATE and RESYNC which are required by the script in /sbin/lvmrc.
# These flags must be set to valid values (see below).
#

#
# The activation of Volume Groups may be customized by setting the
# AUTO_VG_ACTIVATE flag to 0 and customizing the function
# custom_vg_activation()
#

#
#       To disable automatic volume group activation,
#       set AUTO_VG_ACTIVATE to 0.
#

AUTO_VG_ACTIVATE=0

#
#       The variable RESYNC controls the order in which
#       Volume Groups are resyncronized. Allowed values
#       are:
#               "PARALLEL"      - resync all VGs at once.
#               "SERIAL"        - resync VGs one at a time.
#
#       SERIAL will take longer but will have less of an
#       impact on overall I/O performance.
#

RESYNC="SERIAL"


#
#       Add customized volume group activation here.
#       A function is available that will synchronize all
#       volume groups in a list in parallel. It is
#       called parallel_vg_sync.
#
#       This routine is only executed if AUTO_VG_ACTIVATE
#       equals 0.
#

custom_vg_activation()
{
        # e.g. /sbin/vgchange -a y -s
        #      parallel_vg_sync "/dev/vg00 /dev/vg01"
        #      parallel_vg_sync "/dev/vg02 /dev/vg03"
        /sbin/vgchange -a y
        parallel_vg_sync "/dev/vgr3dev0 "
        parallel_vg_sync "/dev/vgr3qas0 "
        parallel_vg_sync "/dev/vgr3trn0 "
        parallel_vg_sync "/dev/vgr3trn1 "
        parallel_vg_sync "/dev/vglunsap /dev/vgdvqatr"
        return 0
}

#
# The following functions should require no additional customization:
#

parallel_vg_sync()
{
        for VG in $*
        do
                {
                if /sbin/vgsync $VG > /dev/null
                then
                        echo "Resynchronized volume group $VG"
                fi
                } &
        done
}
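For what it's worth, parallel_vg_sync above uses a standard shell pattern: each VG is handled in a backgrounded subshell so the resyncs overlap. A self-contained sketch of the same pattern (`do_sync` stands in for /sbin/vgsync here; note the added `wait`, which the original omits, so the caller blocks until all jobs finish):

```shell
# Sketch of the parallel_vg_sync pattern: run one backgrounded job
# per argument, then reap them all before returning.
run_parallel() {
    for VG in "$@"
    do
        {
        if do_sync "$VG" > /dev/null
        then
                echo "Resynchronized volume group $VG"
        fi
        } &
    done
    wait   # block until every background job has finished
}
```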

It's not weird that it shows 32 if you have 2 controllers that can see the same disks via 2 separate paths... If that is your case, then one controller is dead or faulty, or the connections are (it could also be on the EVA side... but there I am not much help: I never had EVAs, I only used a CLARiiON (once) and HDS systems...).

One mistake not to make in your case is to use vgscan (you are doomed if you did...), because you have fancy VG and especially lvol names!!! And I suppose you have no VG map files...
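If you do decide to create map files before touching anything, `vgexport -p` (preview mode) writes a map file without actually removing the VG. A hedged sketch that only prints the commands by default (the helper name and DRY_RUN switch are illustrative; verify the VG list from the thread before running for real, as root, on the server):

```shell
# Sketch: capture a preview map file per inactive VG with
# "vgexport -p -v -s -m <mapfile> <vg>". With DRY_RUN unset or 1,
# the commands are only printed, never executed.
preview_maps() {
    for vg in "$@"
    do
        cmd="vgexport -p -v -s -m /tmp/$vg.map /dev/$vg"
        if [ "${DRY_RUN:-1}" = 1 ]
        then
            echo "$cmd"     # dry run: show what would be executed
        else
            $cmd
        fi
    done
}

# Usage (on the server, after verifying the list):
#   preview_maps vgdvqatr vglunsap vgr3dev0 vgr3qas0 \
#                vgr3trn0 vgr3trn1 vgr3dev1 vgr3qas1
```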

Start with a simple ioscan, then:

ant:/home/vbe $ ioscan -funC ctl
Class     I  H/W Path       Driver S/W State   H/W Type     Description
========================================================================
ctl       0  0/0/1/0.7.0    sctl   CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c0t7d0
ctl       1  0/0/1/1.7.0    sctl   CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c1t7d0
ctl       2  0/0/2/0.7.0    sctl   CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c2t7d0
ctl       3  0/0/2/1.7.0    sctl   CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c3t7d0
ctl       8  0/4/0/0.1.196.255.0.0.0  sctl   CLAIMED     DEVICE       HITACHI OPEN-V
                           /dev/rscsi/c13t0d0
ctl       5  0/6/0/0.7.0    sctl   CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c7t7d0
ctl       6  0/6/0/1.7.0    sctl   CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c8t7d0
ctl       9  0/7/0/0.1.196.255.0.0.0  sctl   CLAIMED     DEVICE       HITACHI OPEN-V
                           /dev/rscsi/c15t0d0

so that we can see what controllers you have...

None of the controllers is down; I already checked in EVA Command View.

LUNHINGA:[/]#ioscan -funC ctl
Class     I  H/W Path       Driver     S/W State   H/W Type     Description
============================================================================
ctl       0  0/0/3/0.0.7.0  sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c0t7d0
ctl       1  0/0/3/0.1.7.0  sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c1t7d0
ctl       2  0/1/1/0.7.0    sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c2t7d0
ctl       3  0/1/1/1.7.0    sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c3t7d0
ctl       8  0/2/1/0.1.0.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c8t0d0
ctl       5  0/2/1/0.1.0.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c4t0d0
ctl       9  0/2/1/0.1.1.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c9t0d0
ctl       4  0/2/1/0.1.1.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c5t0d0
ctl      10  0/4/2/0.2.0.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c10t0d0
ctl       7  0/4/2/0.2.0.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c6t0d0
ctl      11  0/4/2/0.2.1.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c11t0d0
ctl       6  0/4/2/0.2.1.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200

Below is the output of ioscan -fnC disk from another server connected to this EVA; it also shows 16 disks, and that server works perfectly.

LUO:[/]#ioscan -fnC disk
Class     I  H/W Path       Driver     S/W State   H/W Type     Description
============================================================================
disk      0  0/0/3/0.0.0.0  sdisk      CLAIMED     DEVICE       TEAC    DV-28E-N
                           /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0
disk      1  0/1/1/0.0.0    sdisk      CLAIMED     DEVICE       HP 146 GST314670 7LC
                           /dev/dsk/c2t0d0     /dev/rdsk/c2t0d0
                           /dev/dsk/c2t0d0s1   /dev/rdsk/c2t0d0s1
                           /dev/dsk/c2t0d0s2   /dev/rdsk/c2t0d0s2
                           /dev/dsk/c2t0d0s3   /dev/rdsk/c2t0d0s3
disk     18  0/1/1/0.1.0    sdisk      CLAIMED     DEVICE       HP 146 GST314670 7LC
                           /dev/dsk/c2t1d0     /dev/rdsk/c2t1d0
                           /dev/dsk/c2t1d0s1   /dev/rdsk/c2t1d0s1
                           /dev/dsk/c2t1d0s2   /dev/rdsk/c2t1d0s2
                           /dev/dsk/c2t1d0s3   /dev/rdsk/c2t1d0s3
disk      3  0/2/1/0.1.0.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c4t0d1   /dev/rdsk/c4t0d1
disk      5  0/2/1/0.1.0.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c4t0d2   /dev/rdsk/c4t0d2
disk      7  0/2/1/0.1.0.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c4t0d3   /dev/rdsk/c4t0d3
disk      9  0/2/1/0.1.0.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c4t0d4   /dev/rdsk/c4t0d4
disk      2  0/2/1/0.1.1.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c6t0d1   /dev/rdsk/c6t0d1
disk      4  0/2/1/0.1.1.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c6t0d2   /dev/rdsk/c6t0d2
disk      6  0/2/1/0.1.1.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c6t0d3   /dev/rdsk/c6t0d3
disk      8  0/2/1/0.1.1.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c6t0d4   /dev/rdsk/c6t0d4
disk     10  0/4/2/0.2.0.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d1   /dev/rdsk/c8t0d1
disk     12  0/4/2/0.2.0.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d2   /dev/rdsk/c8t0d2
disk     14  0/4/2/0.2.0.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d3   /dev/rdsk/c8t0d3
disk     16  0/4/2/0.2.0.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t0d4   /dev/rdsk/c8t0d4
disk     11  0/4/2/0.2.1.0.0.0.1  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d1   /dev/rdsk/c10t0d1
disk     13  0/4/2/0.2.1.0.0.0.2  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d2   /dev/rdsk/c10t0d2
disk     15  0/4/2/0.2.1.0.0.0.3  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d3   /dev/rdsk/c10t0d3
disk     17  0/4/2/0.2.1.0.0.0.4  sdisk      CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t0d4   /dev/rdsk/c10t0d4

and the ioscan -funC ctl output:

LUO:[/]#ioscan -funC ctl
Class     I  H/W Path       Driver     S/W State   H/W Type     Description
============================================================================
ctl       0  0/0/3/0.0.7.0  sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c0t7d0
ctl       1  0/0/3/0.1.7.0  sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c1t7d0
ctl       2  0/1/1/0.7.0    sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c2t7d0
ctl       7  0/1/1/1.7.0    sctl       CLAIMED     DEVICE       Initiator
                           /dev/rscsi/c3t7d0
ctl       6  0/2/1/0.1.0.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c4t0d0
ctl       4  0/2/1/0.1.0.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c5t0d0
ctl       5  0/2/1/0.1.1.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c6t0d0
ctl       3  0/2/1/0.1.1.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c7t0d0
ctl       9  0/4/2/0.2.0.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c8t0d0
ctl      10  0/4/2/0.2.0.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c9t0d0
ctl      11  0/4/2/0.2.1.0.0.0.0    sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c10t0d0
ctl       8  0/4/2/0.2.1.255.0.0.0  sctl       CLAIMED     DEVICE       HP      HSV200
                           /dev/rscsi/c11t0d0

Just noticed that in your lvmrc you are not using the default auto VG activation but a custom VG activation function. Do you know the reason? Have you checked the activation sequence order you need to respect?

custom_vg_activation()
{
# e.g. /sbin/vgchange -a y -s
# parallel_vg_sync "/dev/vg00 /dev/vg01"
# parallel_vg_sync "/dev/vg02 /dev/vg03"
/sbin/vgchange -a y
parallel_vg_sync "/dev/vgr3dev0 "
parallel_vg_sync "/dev/vgr3qas0 "
parallel_vg_sync "/dev/vgr3trn0 "
parallel_vg_sync "/dev/vgr3trn1 "
parallel_vg_sync "/dev/vglunsap /dev/vgdvqatr"
return 0
}
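As an aside, here is a minimal sketch of what a per-VG variant of that function could look like, using the VG names visible in the lvmrc above. This is illustrative only, not the client's actual file and not a suggestion to change it blind: activating one VG at a time makes it obvious which VG refuses to activate, instead of a single "vgchange -a y" hiding the failing one.

```shell
# Hypothetical alternative body for custom_vg_activation() in /etc/lvmrc.
# Sketch only -- verify VG names and ordering against the real system first.
custom_vg_activation()
{
    for vg in /dev/vgr3dev0 /dev/vgr3qas0 /dev/vgr3trn0 \
              /dev/vgr3trn1 /dev/vglunsap /dev/vgdvqatr
    do
        # Activate each VG individually so a failure is attributable
        /sbin/vgchange -a y "$vg" || echo "activation failed for $vg" >&2
    done
    return 0
}
```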

What do you have in /etc/lvmconf?
Do you get any output from this:
ioscan -funC fc ?

LUNHINGA:[/]#ioscan -funC fc
Class     I  H/W Path  Driver S/W State   H/W Type     Description
=================================================================
fc        0  0/2/1/0   td   CLAIMED     INTERFACE    HP Tachyon XL2 Fibre Channel Mass Storage Adapter
                      /dev/td0
fc        1  0/4/2/0   td   CLAIMED     INTERFACE    HP Tachyon XL2 Fibre Channel Mass Storage Adapter
                      /dev/td1
LUNHINGA:[/]#cat /etc/lvmconf
��(���`D��,�Ld`x0��H,�|�
     *       vg00.conflvm_lock       y
vg00.conf.old   storage.conf.old       }
                vgstorage.conf Xvgstorage.conf.old
vgr3dev0.conf   �vgr3dev0.conf.old
vgr3qas0.conf
0vgr3qas0.conf.old
vgr3trn0.conf
vgr3trn0.conf.old
vgr3dev1.conf
^vgr3dev1.conf.old
vgr3qas1.conf
Yvgr3qas1.conf.old
vgr3trn1.conf
�vgr3trn1.conf.old
vglunsap.conf   vglunsap.conf.old
vgdvqatr.conf
vgdvqatr.conf.old
�
vg00.mapfile
�vgdvqatr.mapfile
�vglunsap.mapfile
�vgr3dev0.mapfile
�vgr3qas0.mapfile
�vgr3trn0.mapfile
�vgr3trn1.mapfile
�vgr3dev1.mapfile       W�vgr3qas1.mapfile

From the development system, please post the output from:

strings /etc/lvmtab

It is almost as if the development server is seeing the live server's discs (c*t0d*) and not its own discs (c*t1d*). This is pure guesswork, because we don't know the correct LVM configuration or hardware configuration for the development server.

PS: Just seen post #9. /etc/lvmconf is a directory, not a file.

LUNHINGA:[/]#strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0s2
/dev/dsk/c2t0d0s2
/dev/vgdvqatr
/dev/dsk/c8t0d4
/dev/dsk/c9t0d4
/dev/dsk/c10t0d4
/dev/dsk/c11t0d4
/dev/vglunsap
/dev/dsk/c8t0d5
/dev/dsk/c9t0d5
/dev/dsk/c10t0d5
/dev/dsk/c11t0d5
/dev/vgr3dev0
/dev/dsk/c8t0d1
/dev/dsk/c9t0d1
/dev/dsk/c10t0d1
/dev/dsk/c11t0d1
/dev/vgr3qas0
/dev/dsk/c8t0d2
/dev/dsk/c9t0d2
/dev/dsk/c10t0d2
/dev/dsk/c11t0d2
/dev/vgr3trn0
/dev/dsk/c8t0d3
/dev/dsk/c9t0d3
/dev/dsk/c10t0d3
/dev/dsk/c11t0d3
/dev/vgr3trn1
/dev/dsk/c8t1d0
/dev/dsk/c9t1d0
/dev/dsk/c10t1d0
/dev/dsk/c11t1d0
/dev/vgr3dev1
/dev/dsk/c8t0d6
/dev/dsk/c9t0d6
/dev/dsk/c10t0d6
/dev/dsk/c11t0d6
/dev/vgr3qas1
/dev/dsk/c8t1d1
/dev/dsk/c9t1d1
/dev/dsk/c10t1d1
/dev/dsk/c11t1d1
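To make the multipath layout in that lvmtab easier to eyeball, a small awk pass can group the device files under each VG and count the paths per VG. The heredoc below is just a short excerpt of the output posted above, used as sample input:

```shell
# Group "strings /etc/lvmtab" output by volume group and count the
# device paths under each VG, to make the dual-path layout visible.
cat > /tmp/lvmtab.txt <<'EOF'
/dev/vgdvqatr
/dev/dsk/c8t0d4
/dev/dsk/c9t0d4
/dev/dsk/c10t0d4
/dev/dsk/c11t0d4
/dev/vglunsap
/dev/dsk/c8t0d5
/dev/dsk/c9t0d5
/dev/dsk/c10t0d5
/dev/dsk/c11t0d5
EOF

awk '
/^\/dev\/dsk\// { count[vg]++; next }   # a device path: count it under the current VG
{ vg = $0; order[n++] = vg }            # any other line starts a new VG
END { for (i = 0; i < n; i++) printf "%s: %d paths\n", order[i], count[order[i]] }
' /tmp/lvmtab.txt | tee /tmp/lvmtab.counts
```

Four paths per VG is consistent with two FC adapters seeing the LUNs through two fabric paths each.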

I don't know what you think, methyl, but it seems we have dual paths...
If you don't declare the alternate links properly, you do get this sort of issue...

The contents of /etc/lvmtab is an OMG moment.

My impression (without anywhere near all the facts) is that this is a physical (or logical?) cabling error. The development server is seeing the live discs, not the development discs. It now has a corrupt /etc/lvmtab following the ill-advised vgchange -a y in post #1, which is a bit like running vgscan out of context.
Hopefully the OP will be able to recover /etc/lvmtab and the entire contents of /etc/lvmconf from backup, if asked to by someone who knows what they are doing.

I don't know about "vbe" but I am long past the point where I would need physical access to ALL the servers involved (not just the two mentioned).

Time to log a formal call with HP against your HP Maintenance Contract. There is enough evidence in this thread to give them a head start.

Based on what we have seen, the development server thankfully cannot mount any partition on the live discs. Personally I would shut down the development server and physically isolate it from the live server's discs pending repair. IMHO, typing any more LVM commands on the development server is foolish.

I agree, methyl. Since we still don't know how the disks are connected, but there are two FC adapters, this looks very much like what you describe, and it can happen with a wrong zoning update...

Check both FC adapters with

fcmsutil /dev/td0 
fcmsutil /dev/td1

Check that the driver state is ONLINE and get both cards' WWNs.
With the WWN information, connect to the SAN switch and check the zoning.

Does syslog indicate a path loss or any disk subsystem errors ?
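A quick way to check that is to grep the syslog for LVM path and SCSI errors. On HP-UX the live log is /var/adm/syslog/syslog.log; the sample lines below are hypothetical but typical in form of what LVM logs when a PV link goes down:

```shell
# Scan syslog for LVM PV-link failures and SCSI errors.
# Sample input is hypothetical; on the real box use:
#   grep -Ei 'lvm|pvlink|scsi' /var/adm/syslog/syslog.log
cat > /tmp/syslog.sample <<'EOF'
vmunix: LVM: VG 64 0x010000: PVLink 31 0x0d2000 Failed! The PV is not accessible.
vmunix: SCSI: Read error -- dev: b 31 0x082000, errno: 126
syslogd: restart
EOF

grep -Ei 'lvm|pvlink|scsi' /tmp/syslog.sample
```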

Dear Mazen,

I think you have EVA4000 storage? If so, probably half of the SAN paths are lost, and the volume groups cannot be activated without quorum.
Can you post the output of strings /etc/lvmtab so we can see the structure of the volume groups, and advise whether it is an MC/ServiceGuard environment?

Thanks
Francis :slight_smile:

---------- Post updated at 01:39 PM ---------- Previous update was at 01:38 PM ----------

I just noticed the remaining part of the posts :slight_smile: