I am unable to remove the LV copy of a jfs2log logical volume, and I need to replace a broken disk in a mirrored rootvg. Is there a way I can fix this?
bash-4.2# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 735 724 146..137..147..147..147
hdisk4 active 735 20 00..00..00..00..20
bash-4.2#
bash-4.2# unmirrorvg rootvg hdisk1
0516-076 lreducelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.
0516-922 rmlvcopy: Unable to remove logical partition copies from
logical volume loglv00.
0516-1135 unmirrorvg: The unmirror of the volume group failed.
The volume group is still partially or fully mirrored.
bash-4.2# bootlist -m normal -o
hdisk4 blv=hd5 pathid=0
bash-4.2#
bash-4.2# rmlvcopy loglv00 1 hdisk1
0516-076 lreducelv: Cannot remove last good copy of stale partition.
Resynchronize the partitions with syncvg and try again.
0516-922 rmlvcopy: Unable to remove logical partition copies from
logical volume loglv00.
bash-4.2#
Since the disk is still marked Active, the surviving copy of the log volume may actually be on the failing disk. Try to resynchronise the logical volume and then see whether you can drop the copy on hdisk1. Because loglv00 is the only mirrored LV left, you can get away with synchronising the whole volume group.
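A minimal sketch of that sync-then-remove sequence (assuming loglv00 is the only stale LV, as in the listings above):

```shell
# Resynchronise just the stale logical volume first (quicker than the whole VG)
syncvg -l loglv00

# Or, since it's the only mirrored LV left, resynchronise the entire volume group
syncvg -v rootvg

# If the sync succeeds, the copy on hdisk1 should now be removable
rmlvcopy loglv00 1 hdisk1
```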
If it refuses, try removing the logical copy without specifying a disk. If that drops the copy on hdisk4, use migratepv to move the LV from hdisk1 to hdisk4.
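That fallback might look like this (a sketch; check with lslv before migrating, since LVM chooses which copy to drop):

```shell
# Drop one copy, letting LVM pick which physical volume to free
rmlvcopy loglv00 1

# Check where the surviving copy lives
lslv -m loglv00

# If the remaining copy ended up on the failing hdisk1, move it to hdisk4
migratepv -l loglv00 hdisk1 hdisk4
```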
I forgot to add: the server has a disk failure, and I am trying to remove the mirror on the failed disk via unmirrorvg and rmlvcopy. Everything succeeded except loglv00, whose state was already "stale"; syncvg also failed to bring loglv00 back in sync.
It's odd that it won't sync, given that the disk is still listed as Active. If it were broken and unusable, you would see Failed in the output of lsvg -p rootvg.
Can you show us the relevant sections from errpt -a for the failing disk, and the output of lspv -M hdisk1?
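For reference, something along these lines should pull the relevant diagnostics (assuming hdisk1 is the suspect disk):

```shell
# Detailed error-log entries filed against the suspect disk
errpt -a -N hdisk1

# Physical-to-logical partition map for the disk; stale PPs show up here
lspv -M hdisk1
```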
You may be left with booting into single-user mode from external media. If you can do that and make the volume group available, you might be able to forcibly remove it and then create a new one of the same name.
Of course, I have no way to test this theory at all. Make sure you have a working mksysb that you can boot from if need be, because you might need to DR (restore) over your existing machine. What other volume groups do you have in play?
The problem you may have is that the logical volumes in the volume group are probably using it as their log volume. I'm not sure how the system is still running if the disk blocks for the log volume are inaccessible.
Unmount /usr/local (I think this is the filesystem using loglv00 as its logging device; if in doubt, you can find it with grep -p /dev/loglv00 /etc/filesystems).
You are absolutely correct: loglv00 was used by /opt and /usr/local.
I was not able to unmount either filesystem, even though fuser showed no processes running on either mount point (probably inittab is respawning some processes that use these two mount points).
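For anyone hitting the same wall, this is roughly how to check for and clear holders before unmounting (a sketch; fuser -k kills processes, so use it with care):

```shell
# Show processes (with owning user) holding each mounted filesystem
fuser -cu /opt
fuser -cu /usr/local

# If safe, kill the holders and then unmount
fuser -k /opt
umount /opt
fuser -k /usr/local
umount /usr/local
```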
So here is what I did:
1. Created a new jfs2log LV in the same rootvg.
2. Used chfs to assign the newly created jfs2log to both filesystems.
3. Rebooted the machine. (The server rebooted fine, but I had to run fsck because those two filesystems were dirty.)
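The steps above can be sketched roughly as follows (loglv01 and the 1-PP size are assumptions; adjust the name, size, and target disk for your system):

```shell
# 1. Create a new JFS2 log LV on the surviving disk
mklv -t jfs2log -y loglv01 rootvg 1 hdisk4

# 2. Format it as a JFS2 log device (logform asks for confirmation)
logform /dev/loglv01

# 3. Point both filesystems at the new log device
chfs -a log=/dev/loglv01 /opt
chfs -a log=/dev/loglv01 /usr/local

# 4. Reboot; run fsck afterwards if the filesystems come up dirty
shutdown -Fr
```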