Hi,
I am facing an issue on a CentOS 7 server. It seems like two issues, and both may be related.
/home is missing and I can't find the device to mount it.
There is no output if I run vgs, pvs or lvs.
[root@broken-server ~]# cat /etc/fstab | grep centos
/dev/centos/root / xfs defaults,_netdev 0 0
/dev/centos/home /home xfs defaults,_netdev 0 0
/dev/centos/swap swap swap defaults,_netdev 0 0
[root@broken-server ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
└─sda1 ext3 0dd9a419-6752-473e-b49b-2138f6519827 /boot
sdb mpath_member
└─sdb2 LVM2_member ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY
├─centos-root xfs 54b2cd4a-dc96-484a-a204-839b0c1ba58a /
└─centos-swap swap 894e6bbb-2c0d-43cf-93bf-a0d35bd5c97a [SWAP]
[root@broken-server ~]# fdisk -l | grep Disk | grep GB
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 16.0 GB, 16022241280 bytes, 31293440 sectors
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Disk /dev/mapper/centos-swap: 17.2 GB, 17179869184 bytes, 33554432 sectors
[root@broken-server ~]# vgs
[root@broken-server ~]# pvs
[root@broken-server ~]# lvs
[root@broken-server ~]#
[root@broken-server ~]# lvmdiskscan -v
Wiping cache of LVM-capable devices
/dev/centos/root [ 50.00 GiB]
/dev/sda1 [ 976.00 MiB]
/dev/centos/swap [ 16.00 GiB]
/dev/sdb2 [ 1023.07 GiB] LVM physical volume
2 disks
1 partition
0 LVM physical volume whole disks
1 LVM physical volume
[root@broken-server ~]#
I have another server with a similar setup to compare against, and it shows everything correctly:
[root@good-server ~]# cat /etc/fstab | grep centos
/dev/centos/root / xfs defaults,_netdev 0 0
/dev/centos/home /home xfs defaults,_netdev 0 0
/dev/centos/swap swap swap defaults,_netdev 0 0
[root@good-server ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
└─sda1 ext3 7a8015e1-e147-40ae-b5ac-da2134c6788a /boot
sdb
├─sdb1 ext3 7664e2be-eafe-40dd-9f67-afbdab685d12
└─sdb2 LVM2_member uoanJQ-TF4J-uNOz-5Q4R-7qfV-ThHC-5WU6ok
├─centos-root xfs 4fd04bf3-1aab-45f8-b262-20eb3bc4cf78 /
├─centos-swap swap d6734505-00db-4093-a878-5078d2a3e90c [SWAP]
└─centos-home xfs 95de2ea1-ae13-4a6d-a978-a14aebd770b0 /home
[root@good-server ~]# fdisk -l | grep Disk | grep GB
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 16.0 GB, 16022241280 bytes, 31293440 sectors
Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Disk /dev/mapper/centos-swap: 17.2 GB, 17179869184 bytes, 33554432 sectors
Disk /dev/mapper/centos-home: 1027.6 GB, 1027642228736 bytes, 2007113728 sectors
[root@good-server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 3 0 wz--n- 1023.07g 0
[root@good-server ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb2 centos lvm2 a-- 1023.07g 0
[root@good-server ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home centos -wi-ao---- 957.07g
root centos -wi-ao---- 50.00g
swap centos -wi-ao---- 16.00g
[root@good-server ~]#
[root@good-server ~]# lvmdiskscan -v
Wiping cache of LVM-capable devices
/dev/centos/root [ 50.00 GiB]
/dev/sda1 [ 976.00 MiB]
/dev/centos/swap [ 16.00 GiB]
/dev/centos/home [ 957.07 GiB]
/dev/sdb1 [ 953.00 MiB]
/dev/sdb2 [ 1023.07 GiB] LVM physical volume
3 disks
2 partitions
0 LVM physical volume whole disks
1 LVM physical volume
[root@good-server ~]#
Can someone shed some light on this? How can I fix it?
Thanks
Hello,
It does look like something odd is going on there, yes. If you run the commands pvscan, vgscan and lvscan (in that order), do you then see anything different in the output of pvs / vgs / lvs?
No, still the same:
[root@broken-server ~]# pvscan
No matching physical volumes found
[root@broken-server ~]# vgscan
Reading volume groups from cache.
[root@broken-server ~]# lvscan
[root@broken-server ~]#
[root@broken-server ~]# pvs
[root@broken-server ~]# vgs
[root@broken-server ~]# lvs
[root@broken-server ~]#
OK, thanks. On one hand I'm tempted to suggest a reboot for this system to see if it then detects its devices correctly, but on the other hand I'm reluctant to do that, since it seems that as it stands it's not properly detecting any of the LVM physical volumes, volume groups or logical volumes, which is a bit concerning.
Could you try doing the scan commands again, running each with a -v flag to increase their verbosity? You can keep adding flags for more debug output (e.g. -vv, -vvv, etc.), though how useful that output is will depend on what's actually going on.
Lastly, has there been any change to how the underlying /dev/sdb device is connected to the system? If you check the kernel logs, are there any mentions of timeouts or disconnects or anything otherwise sinister related to this (or indeed any other) storage device attached to the system?
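A quick way to sift the kernel log for that sort of trouble is to grep for the usual failure keywords. The sketch below uses a captured sample file as a stand-in; on the live server you would pipe `dmesg -T` or `journalctl -k` into the same filter (the sample lines here are invented for illustration):

```shell
# Sample kernel log lines standing in for real `dmesg` output:
cat > /tmp/kern.sample <<'EOF'
sd 16:0:0:0: [sdb] Attached SCSI disk
blk_update_request: I/O error, dev sdb, sector 2048
sd 17:0:0:0: [sdc] timing out command, waited 180s
EOF
# Filter for common storage-failure keywords (case-insensitive):
grep -Ei 'i/o error|timeout|timing out|offline|reset' /tmp/kern.sample
# matches the I/O error and timing-out lines, but not the plain attach line
```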
I tried that but couldn't see anything helpful in the logs. The server has already been rebooted a couple of times and it comes up in the same state. Other than a physical move of the server, there was no change to /dev/sdb or any other disks on this server.
[root@broken-server ~]# vgscan -vv
Setting activation/monitoring to 1
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/prioritise_write_locks to 1
Setting global/locking_dir to /run/lock/lvm
Setting global/use_lvmlockd to 0
Locking /run/lock/lvm/P_global WB
Wiping cache of LVM-capable devices
Wiping internal VG cache
Setting response to OK
Setting token to filter:3239235440
Setting daemon_pid to 1539
Setting response to OK
Setting global_disable to 0
Reading volume groups from cache.
Setting response to OK
Setting response to OK
Setting response to OK
No volume groups found.
Unlocking /run/lock/lvm/P_global
Setting global/notify_dbus to 1
[root@broken-server ~]# pvscan -vv
Setting activation/monitoring to 1
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/prioritise_write_locks to 1
Setting global/locking_dir to /run/lock/lvm
Setting global/use_lvmlockd to 0
Setting response to OK
Setting token to filter:3239235440
Setting daemon_pid to 1539
Setting response to OK
Setting global_disable to 0
Locking /run/lock/lvm/P_global WB
report/output_format not found in config: defaulting to basic
log/report_command_log not found in config: defaulting to 0
Wiping internal VG cache
Wiping cache of LVM-capable devices
Setting response to OK
Setting response to OK
Setting response to OK
Setting response to OK
Setting response to OK
Setting response to OK
/dev/sda: size is 31293440 sectors
/dev/centos/root: size is 104857600 sectors
/dev/centos/root: using cached size 104857600 sectors
/dev/sda1: size is 1998848 sectors
/dev/sda1: using cached size 1998848 sectors
/dev/centos/swap: size is 33554432 sectors
/dev/centos/swap: using cached size 33554432 sectors
/dev/sdb: size is 2147483648 sectors
/dev/sdb2: size is 2145529737 sectors
/dev/sdb2: using cached size 2145529737 sectors
Locking /run/lock/lvm/P_orphans RB
Reading VG #orphans_lvm1 <no vgid>
Unlocking /run/lock/lvm/P_orphans
Locking /run/lock/lvm/P_orphans RB
Reading VG #orphans_pool <no vgid>
Unlocking /run/lock/lvm/P_orphans
Locking /run/lock/lvm/P_orphans RB
Reading VG #orphans_lvm2 <no vgid>
Unlocking /run/lock/lvm/P_orphans
No matching physical volumes found
Unlocking /run/lock/lvm/P_global
Setting global/notify_dbus to 1
[root@broken-server ~]#
[root@broken-server ~]# lvscan -vv
Setting activation/monitoring to 1
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/prioritise_write_locks to 1
Setting global/locking_dir to /run/lock/lvm
Setting global/use_lvmlockd to 0
Setting response to OK
Setting token to filter:3239235440
Setting daemon_pid to 1539
Setting response to OK
Setting global_disable to 0
report/output_format not found in config: defaulting to basic
log/report_command_log not found in config: defaulting to 0
Setting response to OK
Setting response to OK
Setting response to OK
No volume groups found.
Setting global/notify_dbus to 1
[root@broken-server ~]#
Is /etc/lvm/ okay?
Compare it with your working system.
One other thing worth checking is if /dev/sdb2 actually does still contain a valid LVM PV. What does the output of file -s /dev/sdb2 show?
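If you want to compare that UUID against other sources programmatically, it can be pulled out of the `file -s` output with a small sed expression. A sketch, using the output string captured above as a stand-in for running the live command:

```shell
# Sample `file -s /dev/sdb2` output (copied from the thread):
out='/dev/sdb2: LVM2 PV (Linux Logical Volume Manager), UUID: ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY, size: 1098511225344'
# Extract the UUID field (everything between "UUID: " and the next comma):
uuid=$(printf '%s\n' "$out" | sed -n 's/.*UUID: \([^,]*\),.*/\1/p')
echo "$uuid"
```

The same value should then match what `lsblk -f` and the /etc/lvm metadata files report for that PV.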
/etc/lvm looks okay. The LVM metadata probably has some issues.
[root@broken-server ~]# ls -l /etc/lvm
total 188
drwx------. 2 root root 107 Aug 16 2022 archive
drwx------. 2 root root 20 Aug 16 2022 backup
drwx------. 2 root root 6 Jun 29 2017 cache
-rw-r--r--. 1 root root 93224 Jun 29 2017 lvm.conf
-rw-r--r--. 1 root root 93224 Jun 29 2017 lvm.conf-08012023
-rw-r--r--. 1 root root 2301 Jun 29 2017 lvmlocal.conf
drwxr-xr-x. 2 root root 220 Jul 6 2017 profile
[root@broken-server ~]# file -s /dev/sdb2
/dev/sdb2: LVM2 PV (Linux Logical Volume Manager), UUID: ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY, size: 1098511225344
[root@broken-server ~]#
--------
[root@good-server ~]# ls -l /etc/lvm
total 88
drwx------. 2 root root 108 Aug 16 2022 archive
drwx------. 2 root root 20 Aug 16 2022 backup
drwx------. 2 root root 6 Jun 29 2016 cache
-rw-r--r--. 1 root root 82209 Jun 29 2016 lvm.conf
-rw-r--r--. 1 root root 2244 Jun 29 2016 lvmlocal.conf
drwxr-xr-x. 2 root root 196 Sep 9 2016 profile
[root@good-server ~]# file -s /dev/sdb2
/dev/sdb2: LVM2 PV (Linux Logical Volume Manager), UUID: uoanJQ-TF4J-uNOz-5Q4R-7qfV-ThHC-5WU6ok, size: 1098511225344
[root@good-server ~]#
I suspect your /dev/sdb disk does not match your /etc/fstab and /etc/lvm/ files.
Was it replaced with the wrong disk?
Look for the expected UUID in the /etc/lvm/ files, and compare it with the UUID that was found (lsblk -f).
Strange, though: the centos-root holding that /etc/ obviously stems from /dev/sdb ...
Maybe a faulty data restore has happened?
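That comparison can be scripted: pull the PV UUID recorded in an LVM metadata backup and compare it with what the kernel reports. The sketch below uses a trimmed sample file as a stand-in for /etc/lvm/backup/centos; on the live system you would compare the result against `lsblk -no UUID /dev/sdb2`:

```shell
# Trimmed sample of an LVM metadata backup (hypothetical structure,
# using the PV UUID seen in this thread):
cat > /tmp/centos.vg <<'EOF'
centos {
    id = "EQFLuq-j3Y1-wXuJ-MA1Y-aN4l-4YuO-YlL2f3"
    physical_volumes {
        pv0 {
            id = "ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY"
        }
    }
}
EOF
# The first id is the VG's own UUID; the one inside physical_volumes
# is the PV UUID we want (the second match):
expected=$(sed -n 's/.*id = "\([^"]*\)".*/\1/p' /tmp/centos.vg | sed -n 2p)
echo "expected PV UUID: $expected"
```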
This is a very strange one to be sure. The thing here that really makes no sense is that whilst on the one hand the system claims to have no LVM PVs, VGs or LVs whatsoever when asked, it still somehow manages to find and mount the root filesystem, which is part of the "centos" VG.
I'd agree with @MadeInGermany that something must have happened to the underlying physical device /dev/sdb at some point. Either the device has changed, or something has gone wrong with its contents in some highly unusual way. In any case at the moment the "home" LV does not appear to exist on that disk in its present form.
Do you have known-good backups of this server that you could restore the /home filesystem from, if need be? Even if you don't restore it just now, is it possible to extract from the backups (if they exist, which they hopefully do) the original contents of /etc/fstab and /etc/lvm/, and compare them with what's on the host just now?
This is what I see from lsblk and the old LVM backups:
[root@broken-server /]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
└─sda1 ext3 0dd9a419-6752-473e-b49b-2138f6519827 /boot
sdb mpath_member
└─sdb2 LVM2_member ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY
├─centos-root xfs 54b2cd4a-dc96-484a-a204-839b0c1ba58a /
└─centos-swap swap 894e6bbb-2c0d-43cf-93bf-a0d35bd5c97a [SWAP]
[root@broken-server /]# cd /etc/lvm/
[root@broken-server lvm]# grep -Ri "id = " *
archive/centos_00000-1151642432.vg: id = "WBHlAP-iqxk-734y-uF85-srWu-0vSc-8Gmyyq"
archive/centos_00000-1151642432.vg: id = "So6ouR-uKOX-9hUs-XLA2-s6lh-OGaI-zg2WJw"
archive/centos_00000-1151642432.vg: id = "QS5IMH-NfFM-Ryp2-85Cu-kODK-WOsU-EGfVoC"
archive/centos_00000-1151642432.vg: id = "a2rWdP-Nfic-y2GA-63z7-w8Za-PHNM-Kkf8ae"
archive/centos_00000-1151642432.vg: id = "bXc9wc-1bFG-2P7z-2ced-MMi6-d1Ip-Gxh6J7"
archive/centos_00001-1033822430.vg: id = "WBHlAP-iqxk-734y-uF85-srWu-0vSc-8Gmyyq"
archive/centos_00001-1033822430.vg: id = "So6ouR-uKOX-9hUs-XLA2-s6lh-OGaI-zg2WJw"
archive/centos_00001-1033822430.vg: id = "QS5IMH-NfFM-Ryp2-85Cu-kODK-WOsU-EGfVoC"
archive/centos_00001-1033822430.vg: id = "a2rWdP-Nfic-y2GA-63z7-w8Za-PHNM-Kkf8ae"
archive/centos_00001-1033822430.vg: id = "bXc9wc-1bFG-2P7z-2ced-MMi6-d1Ip-Gxh6J7"
archive/centos_00002-950898683.vg: id = "EQFLuq-j3Y1-wXuJ-MA1Y-aN4l-4YuO-YlL2f3"
archive/centos_00002-950898683.vg: id = "ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY"
archive/centos_00002-950898683.vg: id = "oKuZt7-uuP6-HOUm-LZdf-Mm56-ZM1W-1h4zSX"
archive/centos_00002-950898683.vg: id = "cOQrYJ-pza9-8u2f-nIxv-diSG-KCBz-AJ4S20"
archive/centos_00002-950898683.vg: id = "q6V5Yh-w3eY-Cy5L-ecmQ-eGnd-i2Ee-xCU0gc"
backup/centos: id = "EQFLuq-j3Y1-wXuJ-MA1Y-aN4l-4YuO-YlL2f3"
backup/centos: id = "ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY"
backup/centos: id = "oKuZt7-uuP6-HOUm-LZdf-Mm56-ZM1W-1h4zSX"
backup/centos: id = "cOQrYJ-pza9-8u2f-nIxv-diSG-KCBz-AJ4S20"
backup/centos: id = "q6V5Yh-w3eY-Cy5L-ecmQ-eGnd-i2Ee-xCU0gc"
lvm.conf: require_restorefile_with_uuid = 1
lvm.conf-08012023: require_restorefile_with_uuid = 1
lvmlocal.conf: # system_id = ""
lvmlocal.conf: # system_id = "host1"
lvmlocal.conf: # system_id = ""
lvmlocal.conf: # host_id = 0
[root@broken-server lvm]#
If I check the id of home (the LV that is missing) in all the archive and backup LVM data, then I see two different ids:
/etc/lvm/archive/centos_00000-1151642432.vg -->
home {
id = "a2rWdP-Nfic-y2GA-63z7-w8Za-PHNM-Kkf8ae"
/etc/lvm/archive/centos_00001-1033822430.vg -->
home {
id = "a2rWdP-Nfic-y2GA-63z7-w8Za-PHNM-Kkf8ae"
/etc/lvm/archive/centos_00002-950898683.vg -->
home {
id = "q6V5Yh-w3eY-Cy5L-ecmQ-eGnd-i2Ee-xCU0gc"
/etc/lvm/backup/centos -->
home {
id = "q6V5Yh-w3eY-Cy5L-ecmQ-eGnd-i2Ee-xCU0gc"
-rw-------. 1 root root 2123 Dec 2 2016 /etc/lvm/archive/centos_00000-1151642432.vg
-rw-------. 1 root root 2129 Aug 16 2022 /etc/lvm/archive/centos_00001-1033822430.vg
-rw-------. 1 root root 2125 Aug 16 2022 /etc/lvm/archive/centos_00002-950898683.vg
-rw-------. 1 root root 2124 Aug 16 2022 /etc/lvm/backup/centos
Edit: one (important) update now. I disabled lvmetad and rebooted the server, and /home came up fine.
Now I at least have my home data back, but there are two minor issues:
the lvmetad service is disabled, and I am seeing sdc, which is not supposed to be here and is not present on the other servers.
[root@broken-server ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
└─sda1 ext3 0dd9a419-6752-473e-b49b-2138f6519827 /boot
sdb mpath_member
└─mpatha
├─mpatha1 ext3 8100912c-c4f5-488b-8d9e-2e6c93d5e37e
└─mpatha2 LVM2_member ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY
├─centos-root xfs 54b2cd4a-dc96-484a-a204-839b0c1ba58a /
├─centos-swap swap 894e6bbb-2c0d-43cf-93bf-a0d35bd5c97a [SWAP]
└─centos-home xfs 11f18806-30e4-4681-bac0-b15dd90cb3a1 /home
sdc mpath_member
└─mpatha
├─mpatha1 ext3 8100912c-c4f5-488b-8d9e-2e6c93d5e37e
└─mpatha2 LVM2_member ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY
├─centos-root xfs 54b2cd4a-dc96-484a-a204-839b0c1ba58a /
├─centos-swap swap 894e6bbb-2c0d-43cf-93bf-a0d35bd5c97a [SWAP]
└─centos-home xfs 11f18806-30e4-4681-bac0-b15dd90cb3a1 /home
[root@broken-server ~]#
That's interesting, thanks for the update. Presumably the metadata being cached by lvmetad was invalid and not refreshing. With it disabled, LVM actually has to go and look at what's on disk each time, and so it now sees the LVM objects that do exist.
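For reference, that cache behaviour comes down to a single lvm.conf setting. A hypothetical fragment showing the "disabled" state (stock CentOS 7 ships with it enabled; after changing it, the lvm2-lvmetad socket and service should be stopped and disabled to match, and the initramfs regenerated so early boot agrees):

```
# /etc/lvm/lvm.conf -- global section
# use_lvmetad = 1 : trust the lvmetad metadata cache
# use_lvmetad = 0 : scan disks directly on every LVM command
global {
    use_lvmetad = 0
}
```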
As to why you're seeing a /dev/sdc in addition to /dev/sdb - this looks like it could be a separate multipath issue, maybe? The UUIDs of the partitions on both devices are identical, it seems. If you look under /dev/mapper, do you see a /dev/mapper/mpatha device? If so, then this is the one you should use when referring to or trying to access the partitions on the multipath devices. Do you have the multipathd service enabled, and if so, is it started and does it have an OK status?
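A quick sanity check is to count the paths behind the multipath map: if sdb and sdc are two paths to the same LUN, `multipath -ll` should show both of them healthy under one map. A sketch, using captured sample output as a stand-in for the live command (which needs root):

```shell
# Sample `multipath -ll` output standing in for the live command:
cat > /tmp/mp.sample <<'EOF'
mpatha (36000d310009899000000000000000056) dm-0 COMPELNT,Compellent Vol
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 16:0:0:0 sdb 8:16 active ready running
  `- 17:0:0:0 sdc 8:32 active ready running
EOF
# Each healthy path line reads "active ready running":
paths=$(grep -c 'active ready running' /tmp/mp.sample)
echo "healthy paths: $paths"
```

Two healthy paths under a single map means sdb and sdc are the same device seen twice, which is expected and correct for multipath.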
I guess I am on the right path. I re-enabled lvm2-lvmetad.service, though I have not restarted the server yet.
The multipathd service is running. When I said I was not supposed to see sdc, I think I was wrong. It seems to be another path to the same device as sdb, because of multipath.
Does the configuration look good now?
[root@broken-server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 15G 36G 30% /
devtmpfs 94G 0 94G 0% /dev
tmpfs 94G 188K 94G 1% /dev/shm
tmpfs 94G 9.7M 94G 1% /run
tmpfs 94G 0 94G 0% /sys/fs/cgroup
/dev/sda1 945M 267M 629M 30% /boot
tmpfs 19G 16K 19G 1% /run/user/1000
/dev/mapper/centos-home 957G 368G 590G 39% /home
tmpfs 19G 16K 19G 1% /run/user/42
tmpfs 19G 0 19G 0% /run/user/0
[root@broken-server ~]# ls -ltr /dev/mapper
total 0
crw-------. 1 root root 10, 236 Aug 5 09:20 control
lrwxrwxrwx. 1 root root 7 Aug 5 09:20 centos-root -> ../dm-3
lrwxrwxrwx. 1 root root 7 Aug 5 09:20 centos-swap -> ../dm-4
lrwxrwxrwx. 1 root root 7 Aug 5 09:20 mpatha -> ../dm-0
lrwxrwxrwx. 1 root root 7 Aug 5 09:20 mpatha2 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Aug 5 09:20 mpatha1 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Aug 5 09:20 centos-home -> ../dm-5
[root@broken-server ~]# service multipathd status
Redirecting to /bin/systemctl status multipathd.service
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2023-08-05 09:20:00 AEST; 19h ago
Main PID: 1638 (multipathd)
CGroup: /system.slice/multipathd.service
└─1638 /sbin/multipathd
Aug 05 09:20:00 broken-server systemd[1]: Started Device-Mapper Multipath Device Controller.
Aug 05 09:20:00 broken-server multipathd[1638]: mpatha: load table [0 2147483648 multipath 1 queue_if_no_path 0 1 1 service-time 0 2 1 8:16 1 8:32 1]
Aug 05 09:20:00 broken-server multipathd[1638]: mpatha: event checker started
Aug 05 09:20:00 broken-server multipathd[1638]: path checkers start up
Aug 05 09:20:00 broken-server multipathd[1638]: sdc: add path (uevent)
Aug 05 09:20:00 broken-server multipathd[1638]: sdc: spurious uevent, path already in pathvec
Aug 05 09:20:00 broken-server multipathd[1638]: sdb: add path (uevent)
Aug 05 09:20:00 broken-server multipathd[1638]: sdb: spurious uevent, path already in pathvec
Aug 05 09:20:00 broken-server multipathd[1638]: sda: add path (uevent)
Aug 05 09:20:00 broken-server multipathd[1638]: sda: spurious uevent, path already in pathvec
[root@broken-server ~]# multipath -ll
mpatha (36000d310009899000000000000000056) dm-0 COMPELNT,Compellent Vol
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 16:0:0:0 sdb 8:16 active ready running
`- 17:0:0:0 sdc 8:32 active ready running
[root@broken-server ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
└─sda1 ext3 0dd9a419-6752-473e-b49b-2138f6519827 /boot
sdb mpath_member
└─mpatha
├─mpatha1 ext3 8100912c-c4f5-488b-8d9e-2e6c93d5e37e
└─mpatha2 LVM2_member ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY
├─centos-root xfs 54b2cd4a-dc96-484a-a204-839b0c1ba58a /
├─centos-swap swap 894e6bbb-2c0d-43cf-93bf-a0d35bd5c97a [SWAP]
└─centos-home xfs 11f18806-30e4-4681-bac0-b15dd90cb3a1 /home
sdc mpath_member
└─mpatha
├─mpatha1 ext3 8100912c-c4f5-488b-8d9e-2e6c93d5e37e
└─mpatha2 LVM2_member ckoWGr-3kEW-NQRb-Ma18-OkXz-uxrb-HxemwY
├─centos-root xfs 54b2cd4a-dc96-484a-a204-839b0c1ba58a /
├─centos-swap swap 894e6bbb-2c0d-43cf-93bf-a0d35bd5c97a [SWAP]
└─centos-home xfs 11f18806-30e4-4681-bac0-b15dd90cb3a1 /home
[root@broken-server ~]#
This looks basically correct, I think, yes. As mentioned previously, what you're seeing with /dev/sdb and /dev/sdc are the underlying multipath paths that are then represented to the system logically as /dev/mapper/mpatha, and LVM seems to have picked up all your PVs, VGs and LVs correctly. So with any luck, that should be everything working again now.
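One last thing that may be worth checking on multipath setups like this: since sdb2, sdc2 and mpatha2 all carry the same PV signature, LVM can sometimes warn about duplicate PVs or pick an underlying path instead of the mapper device. A hypothetical lvm.conf fragment that restricts scanning to the multipath map and the local boot disk (the patterns are examples only and would need adjusting to the devices actually present):

```
# /etc/lvm/lvm.conf -- devices section (hypothetical example)
devices {
    # accept multipath maps and the local sda, reject everything else
    filter = [ "a|^/dev/mapper/mpath|", "a|^/dev/sda|", "r|.*|" ]
}
```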