# lspv
# chdev -l hdisk2 -a pv=clear
# chdev -l hdisk2 -a pv=yes
# lspv
If you see the status change, it's an indication that you can at least access the disk correctly. This might go back to the IBM write-and-verify problem.
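If you want an extra sanity check on raw access, a quick read test is a reasonable sketch (hdisk2 assumed from the commands above; the raw device needs block-aligned reads):
# dd if=/dev/rhdisk2 of=/dev/null bs=4096 count=8
If dd completes without an I/O error, the path to the disk is readable and the problem is more likely at the LVM or reservation level.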
I think that you have an exclusive lock, either held by another LPAR or a VIO server. You may want to check with the SAN team that the zoning is correct and the disk hasn't been zoned to another server/VIO.
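One thing worth checking before going to the SAN team is the disk's reservation attribute. A sketch, again assuming hdisk2 (MPIO disks like the DS4K LUNs shown later in this thread use reserve_policy; older drivers use reserve_lock):
# lsattr -El hdisk2 -a reserve_policy
If it is set to single_path and the disk is meant to be shared, chdev -l hdisk2 -a reserve_policy=no_reserve would release it, but only do that if you are sure no other host holds a valid reservation.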
Yes, you are right: it was coming from the SAN to the VIO Server, and from the VIO Server to the LPAR.
Now the SAN connection has been removed from the pSeries machine,
and I tried to remove the disk from the VIO Server but couldn't:
# lspv
hdisk0 00c7780e79838606 rootvg active
hdisk8 00c7780e8945b5bb patchtest active
hdisk9 00c7780e8945b5bb patchtest active
# lsdev -Cc disk
hdisk0 Available 09-08-00-3,0 16 Bit LVD SCSI Disk Drive
hdisk8 Available 0A-09-02 MPIO Other DS4K Array Disk
hdisk9 Available 0A-09-02 MPIO Other DS4K Array Disk
# varyoffvg patchtest
0516-062 lqueryvg: Unable to read or write logical volume manager
record. PV may be permanently corrupted. Run diagnostics
0516-012 lvaryoffvg: Logical volume must be closed. If the logical
volume contains a filesystem, the umount command will close
the LV device.
0516-942 varyoffvg: Unable to vary off volume group patchtest.
How can I remove the volume group and the disks? Thanks.
I am confused as to how the disk is being presented to the client.
Have you created a VG on the VIOS, created LVs, and given those LVs to the client as vscsi disks? Or have you mapped the whole disk to the client?
If it's the latter, then you cannot also create a VG on that disk on the VIOS.
Can you provide the following info?
On VIOS (as padmin)
lsdev -slots
lsmap -vadapter vhostX    (X = the server adapter for that client; or lsmap -all to list everything)
lspv -free
Now as root
lsvg -l patchtest
lsvg -p patchtest
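For reference, a sketch of what that mapping check looks like as padmin (vhost2 is an assumption here, taken from the adapter that appears later in this thread):
$ lsmap -vadapter vhost2
$ lsmap -all
The output lists each virtual target device (VTD) and its backing device, which is how you tell whether a whole hdisk or an LV is mapped to the client.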
The disks were created on the SAN and then mapped to the VIO Server.
From the VIO Server they were mapped to the LPAR.
The disks in question on the VIO Server are hdisk8 and hdisk9, in the volume group called patchtest. I was able to remove hdisk8 with the rmdev -dl command,
but I could not remove hdisk9.
OK, so an LV is given as a disk to the client.
Do this
On VIOS (as root)
lsvg -o
lsvg -l <vgname>
In which VG do you find the LV testpatch?
Now go to the client and run: rmdev -Rdl hdisk0
Now go to the VIOS and run (as padmin): rmvdev -vtd vtscsi31
The above command will remove the mapping from vhost2 for that LV.
If you run cfgmgr on the client, you won't find hdisk0 anymore.
Now remove the LV testpatch from the VIOS: rmlv -f testpatch
If the VG has no more LVs mapped to any other partition, you can then vary it off with varyoffvg; see the consolidated sketch below.
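Putting those steps together, a minimal sketch of the whole teardown, using the names from this thread (hdisk0, vtscsi31, testpatch, patchtest); substitute your own:
On the client, as root:
# rmdev -Rdl hdisk0
On the VIOS, as padmin:
$ rmvdev -vtd vtscsi31
$ oem_setup_env
Then as root on the VIOS:
# rmlv -f testpatch
# varyoffvg patchtest
Note that varyoffvg takes the volume group name (patchtest), not the LV name (testpatch).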
Okay, thanks.
On the client side, after removing hdisk0 it is gone. However, on the VIOS side:
$ rmvdev -vtd vtscsi31
$ rmlv -f testpatch
*******************************************************************************
The command's response was not recognized. This may or may not indicate a problem.
*******************************************************************************
*******************************************************************************
The command's response was not recognized. This may or may not indicate a problem.
*******************************************************************************
rmlv: Unable to remove logical volume testpatch.
$ oem_setup_env
# rmlv -f testpatch
0516-062 lquerylv: Unable to read or write logical volume manager
record. PV may be permanently corrupted. Run diagnostics
0516-062 lqueryvg: Unable to read or write logical volume manager
record. PV may be permanently corrupted. Run diagnostics
0516-912 rmlv: Unable to remove logical volume testpatch.
# varyoffvg testpatch
0516-306 getlvodm: Unable to find volume group testpatch in the Device
Configuration Database.
0516-942 varyoffvg: Unable to vary off volume group testpatch.
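One thing stands out in the session above: varyoffvg was given testpatch, which is the LV name, while the volume group (per the lspv output below) is patchtest. That is why getlvodm cannot find it. A sketch of the corrected call:
# lsvg
# varyoffvg patchtest
The lsvg step just confirms which VG names the ODM actually knows about.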
# lspv
hdisk0 00c7780e79838606 rootvg active
hdisk1 00c7780e2e21ec86 diskpool_4 active
hdisk2 00c7780ea5bd16bb diskpool_4 active
hdisk3 00c7780ee224f286 disk_pool_5 active
hdisk4 00c7780e1b75933b diskpool_3 active
hdisk5 00c7780ece91bde2 diskpool_2 active
hdisk6 00c7780ec2b65f4d diskpool_1 active
hdisk7 00c7780e5293914b None
hdisk9 00c7780e8945b5bb patchtest active
I feel there is at least one more LV (maybe more) that is still assigned to a client or clients.
OK, do this.
On the VIOS as root:
lsfs
lsvg -l `lsvg`
Compare the output of those two and see which LV is missing from the "lsvg -l `lsvg`" output.
Look for that LV and see whether it is assigned as a backing device to any other client. A scripted version of the comparison follows below.
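If it helps, a small ksh sketch of that comparison (the temp file paths are my choice, and it assumes all your VGs are varied on, since lsvg -l fails for offline VGs):
lsfs | awk 'NR>1 {print $1}' | grep '^/dev/' | sed 's#^/dev/##' | sort > /tmp/fs.lvs
lsvg | while read vg; do lsvg -l $vg | awk 'NR>2 {print $1}'; done | sort > /tmp/odm.lvs
comm -23 /tmp/fs.lvs /tmp/odm.lvs
The comm output is the list of LV names that /etc/filesystems knows about but the ODM does not.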
# unmount /space
unmount: 0506-347 Cannot find anything to unmount.
# unmount /dev/fslv00
unmount: 0506-347 Cannot find anything to unmount.
# rmlv /dev/fslv00
Warning, all data contained on logical volume /dev/fslv00 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? yes
0516-306 getlvodm: Unable to find /dev/fslv00 in the Device
Configuration Database.
0516-912 rmlv: Unable to remove logical volume /dev/fslv00.
# rmlv -f /dev/fslv00
0516-306 getlvodm: Unable to find /dev/fslv00 in the Device
Configuration Database.
0516-912 rmlv: Unable to remove logical volume /dev/fslv00.
How about if I remove the entry for /space from the /etc/filesystems file and then try again? Or do I need to reboot the machine?
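Before editing anything, one detail worth noting: rmlv takes the bare LV name, not the device path, which is likely why getlvodm reports 0516-306 above. A sketch of the form that should work if the LV still exists in the ODM:
# rmlv -f fslv00
If the LV really is gone from the ODM, then removing the /space stanza from /etc/filesystems by hand is the usual cleanup, and no reboot is needed for that edit.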