Is my XIV device open?

Hi there,

I am trying to determine whether a specific XIV LUN on AIX is in OPEN or CLOSED state. I would like the same info as I can get from the

pcmpath

command on v7000 LUNs. I need this info to find out whether I can safely delete a LUN or not.

I tried the fuser command, but no luck. It did not show anything for PVs that are part of varied-on VGs.

I also tried

xiv_devlist -o multipath

but it reports every path as OK, whether it is in use or not. (Failed/missing paths can be detected, but that's not what I am looking for.)

After a lot of Google searches I am out of ideas for today. Anybody else?

--Trifo

Please try these two commands. If you get an error, the disk is closed; if you get any content, the disk is open/readable:

readvgda /dev/hdiskX

and

lqueryvg -At -p hdisk0
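If it helps to automate that check, here is a minimal sketch that classifies disks by the command's exit status. This is an assumption on my part, not something from the thread: the QUERY variable is just a hook so the loop can be exercised with a command other than lqueryvg.

```shell
#!/bin/sh
# Sketch: classify disks by whether a query command can read them.
# On AIX you would run it as: classify_disks $(lsdev -Cc disk -F name)
# QUERY defaults to lqueryvg but is overridable, so the loop itself
# can be tried out with any command that exits 0 / non-zero.
QUERY=${QUERY:-"lqueryvg -At -p"}

classify_disks() {
    for d in "$@"; do
        if $QUERY "$d" >/dev/null 2>&1; then
            echo "$d: open/readable"
        else
            echo "$d: closed or no VG data"
        fi
    done
}
```

As the rest of this thread shows, lqueryvg success only means the disk has readable VG data, not that it is actually held open by anything, so treat this as a first-pass filter.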

zxmaus: thanks, but these commands are not what I meant. At least according to my findings.

Some of my systems have lots of LUNs, used for various purposes:

  • used as PVs in some VGs (mostly rootvg)
  • used as GPFS NSD backing devices
  • used as Oracle ASM data disks
  • and some more

Your commands can show relevant info only for LUNs used as LVM PVs; they are misleading in the other cases.

Now I can try to run

rmdev -l hdiskXX

and see whether it puts the device into DEFINED state. When the device is open, I get an error and the device state stays unchanged. But I would like a way to obtain the same info without modifying the system. By the way, how does rmdev find this info?

--Trifo

If you do this on ASM or GPFS disks, you are wiping the header. The device will stay up until your next reboot and be beautifully clean after the reboot, so this is a terrible idea.

You do get output from the commands I listed even with ASM devices and GPFS disks, and even from completely unassigned open disks. It might be cryptic, unreadable output, but it is output. If you get nothing, or a one-line error, the disks are not open.

But maybe I have a different understanding of what open means. Are you trying to find out if the disks are unused? Are you using AIX MPIO or any other multipathing software?

With ASM, before you make ANY changes to the disks, ask the DBAs to back up the disk headers. Strictly speaking, by design AIX has no idea whether the disks are used or not, but Oracle will hold a lock on them while they are still allocated, which is a curse and a blessing for the above reason: you will STILL wipe the header.

For GPFS, the cluster itself should be able to tell you which disks it is using. Try the

mmlsnsd

command. For normal disks in VGs, a simple

lspv

will tell you which disks are not in use from the LVM perspective.
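The lspv part of that check can be scripted. A small sketch, assuming the usual lspv column layout (disk name, PVID, VG name, VG state) where an unassigned disk shows None in the VG column; the sample lines in the test are made up for illustration:

```shell
# Sketch: filter lspv output down to disks with no volume group.
# Assumes the standard lspv columns: name, PVID, VG name, VG state.
lspv_free() {
    awk '$3 == "None" { print $1 }'
}
# On a real AIX box you would pipe the live output:  lspv | lspv_free
```

As discussed above, "no VG" only means free from the LVM point of view; GPFS NSDs and ASM disks also show None here, so cross-check against mmlsnsd and the DBAs' list.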


Hi zxmaus,

Well, the problematic environment is some old AIX versions (AIX 5.3). On newer systems (AIX 6.1 and up) there is the

lsmpio

command, which can tell whether a path is in open or closed state. Older systems do not have it. For MPIO we used SDDPCM back then, but it did not support XIV storage, so XIV LUNs needed their own driver.

SDDPCM can report open/closed states, as in the following example:

# pcmpath query device 15

DEV#:  15  DEVICE NAME: hdisk15  TYPE: 2107900  ALGORITHM:  Load Balance
SERIAL: 75AT241003F
===========================================================================
Path#      Adapter/Path Name          State     Mode     Select     Errors
    0           fscsi1/path1          CLOSE   NORMAL        141          0
    1           fscsi0/path0          CLOSE   NORMAL        131          0
# pcmpath query device 44

DEV#:  44  DEVICE NAME: hdisk44  TYPE: 2107900  ALGORITHM:  Load Balance
SERIAL: 75AT2410114
===========================================================================
Path#      Adapter/Path Name          State     Mode     Select     Errors
    0           fscsi0/path0           OPEN   NORMAL   14375551          0
    1           fscsi1/path1           OPEN   NORMAL   14377624          0
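Where SDDPCM is available, output like the above can be condensed to one line per disk with a small awk sketch. This assumes exactly the column layout shown (path lines start with a numeric Path#, State is the third column); it is my own helper, not part of the pcmpath tooling:

```shell
# Sketch: condense `pcmpath query device` output to one line per hdisk,
# listing the State column of each path. Assumes the layout shown above.
pcm_states() {
    awk '
        /DEVICE NAME:/  { dev = $5 }                        # e.g. hdisk15
        $1 ~ /^[0-9]+$/ { states[dev] = states[dev] " " $3 }
        END { for (d in states) print d ":" states[d] }
    '
}
# Usage on AIX:  pcmpath query device | pcm_states
```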

XIV tools do not provide this info.

xiv_devlist -o device,multipath

only tells whether paths are in Available state or not (Defined, Missing, Failed...). But that is not enough to tell whether the hdisk can be removed.

Well, there are hundreds of LUNs and I need an automated way to report unused disks.

Now, let's look at the above two hdisks your way:

# lqueryvg -At -p hdisk15
0516-304 lqueryvg: Unable to find device id hdisk15 in the Device
        Configuration Database.
0516-066 lqueryvg: Physical volume is not a volume group member.
        Check the physical volume name specified.
#
# lqueryvg -At -p hdisk44
0516-304 lqueryvg: Unable to find device id hdisk44 in the Device
        Configuration Database.
0516-1339 lqueryvg: Physical volume contains some 3rd party volume group.

The latter reports that the disk is managed by a 3rd-party VG, in this case GPFS. But there is no indication of whether the disk is idle or actually opened by the GPFS daemons.

Let's see the readvgda version:

# readvgda /dev/hdisk15
WARNING, invalid LVM record (no _LVM tag)!
WARNING, invalid PV number (0) in the LVM record!
WARNING, invalid PP size (0) in the LVM record!
*****************************************
LVMREC at block 7
*****************************************
lvmid:       0 (0)
vgid:     00000000000000000000000000000000
lvmarea_len: 0
vgda_len:    0
vgda_psn[0]: 0
vgda_psn[1]: 0
reloc_psn:   0
pv_num:      0
pp_size:     0
vgsa_len:    0
vgsa_psn[0]: 0
vgsa_psn[1]: 0
version:     0
vg_type:     0
ltg_shift:   0(128K)

*=============== 1ST VGDA-VGSA: /dev/hdisk15 ===============*

*****************************************
VGSA at block 0
*****************************************
*****************************************
vgsa beg: timestamp 0 (0), 0 (0)
vgsa beg: timestamp Thu Jan  1 01:00:00 NFT:1970
vgsa.pv_missing:        0
vgsa.factor:    0
vgsa.pad2:      0 0 0
vgsa end: timestamp 0 (0), 0 (0)
vgsa end: timestamp Thu Jan  1 01:00:00 NFT:1970
*****************************************
VGDA at block 0
*****************************************
*****************************************
vgh.vg_id:    00000000000000000000000000000000
vgh.numlvs:      0
vgh.maxlvs:      0
vgh.pp_size:     0
vgh.numpvs:      0
vgh.total_vgdas: 0
vgh.vgda_size:   0
vgh.quorum:      0
vgh.auto_varyon: 0
vgh.check_sum:   0
vgh.snapshotvg:  0
vgh.snapshot_copy: 0
vgh.primary_vgid: 00000000000000000000000000000000
vgh.seconadary_vgid: 00000000000000000000000000000000
vgda hdr: timestamp 0 (0), 0 (0)
vgda hdr: timestamp Thu Jan  1 01:00:00 NFT:1970
vgda size read is from vgh is < 0 assuming vgda_size = SML_VGDA_LEN
*****************************************
vgt.concurrency:        0
vgda trl: timestamp 0 (0), 0 (0)
vgda trl: timestamp Thu Jan  1 01:00:00 NFT:1970


# readvgda /dev/hdisk43
WARNING, invalid PV number (0) in the LVM record!
WARNING, invalid PP size (0) in the LVM record!
*****************************************
LVMREC at block 7
*****************************************
lvmid:       1598838349 (5f4c564d)
vgid:     00007d0100007d0100007d0100007d01
lvmarea_len: 0
vgda_len:    0
vgda_psn[0]: 0
vgda_psn[1]: 0
reloc_psn:   0
pv_num:      0
pp_size:     0
vgsa_len:    0
vgsa_psn[0]: 0
vgsa_psn[1]: 0
version:     32001
vg_type:     0
ltg_shift:   0(128K)

*=============== 1ST VGDA-VGSA: /dev/hdisk43 ===============*

*****************************************
VGSA at block 0
*****************************************
*****************************************
vgsa beg: timestamp 0 (0), 0 (0)
vgsa beg: timestamp Thu Jan  1 01:00:00 NFT:1970
vgsa.pv_missing:        0
vgsa.factor:    0
vgsa.pad2:      0 0 0
vgsa end: timestamp 0 (0), 0 (0)
vgsa end: timestamp Thu Jan  1 01:00:00 NFT:1970
*****************************************
VGDA at block 0
*****************************************
*****************************************
vgh.vg_id:    00000000000000000000000000000000
vgh.numlvs:      0
vgh.maxlvs:      0
vgh.pp_size:     0
vgh.numpvs:      0
vgh.total_vgdas: 0
vgh.vgda_size:   0
vgh.quorum:      0
vgh.auto_varyon: 0
vgh.check_sum:   0
vgh.snapshotvg:  0
vgh.snapshot_copy: 0
vgh.primary_vgid: 00000000000000000000000000000000
vgh.seconadary_vgid: 00000000000000000000000000000000
vgda hdr: timestamp 0 (0), 0 (0)
vgda hdr: timestamp Thu Jan  1 01:00:00 NFT:1970
vgda size read is from vgh is < 0 assuming vgda_size = SML_VGDA_LEN
*****************************************
vgt.concurrency:        0
vgda trl: timestamp 0 (0), 0 (0)
vgda trl: timestamp Thu Jan  1 01:00:00 NFT:1970

It seems that readvgda just tried its best to read what might be a VGDA block on the disks, but there is obviously no relevant info there. I cannot see any meaningful difference between the outputs from the 'in use' and 'not in use' disks. Am I correct about this?

--Trifo

Yes, you are correct. That is why I asked yesterday what you meant by open: for both disks you get output, so the disks are open, i.e. correctly zoned and readable/writable.

As stated before, since you have this strange mix of different disk types, you will have to determine the hard way which ones you are using. I am guessing the ASM disks were defined by you via the mknod command, so you should be able to identify them by major/minor number. But I don't think you can find out whether they are in use without asking your DBAs what they have configured.


Awwwww...

This year I want to get rid of all the old stuff.

--Trifo