LVM restore / recovery

Hello,

I would really appreciate help on recovering some data.

I have a RAID1 array at /dev/md0, composed of /dev/sdb and /dev/sdc. I had LVM on the RAID with 3 LVs, the last of which was a snapshot volume.

My / is on /dev/sda. Recently, I reinstalled the system (Debian lenny) and reconfigured /dev/sda with LVM.
When I booted, the system correctly detected the RAID array and even activated it. All I had to do was import /dev/md0 back into the system.

However, out of stupidity, I ran

    # sudo pvcreate -v /dev/md0
    Set up physical volume for "/dev/md0" with 488396928 available sectors
    Zeroing start of device /dev/md0
    Physical volume "/dev/md0" successfully created

As it says, it overwrote the start of the device; the data itself has not been affected. Now I don't know how to access the LVs on /dev/md0.

Does anyone have any suggestions on how to recover the information about the lvm volumes on the disk?

I do not have a backup file of the volume group for the RAID, as I just reinstalled the system over it; otherwise I would have simply followed the instructions here.
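
For reference, my understanding is that with a backup file the recovery would have been roughly the following (untested sketch; the UUID and volume group name are placeholders):

    # Untested sketch of the backup-file route; <pv-uuid> and <vgname> are placeholders
    pvcreate --uuid <pv-uuid> --restorefile /etc/lvm/backup/<vgname> /dev/md0
    vgcfgrestore -f /etc/lvm/backup/<vgname> <vgname>
    vgchange -ay <vgname>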

Thank you in advance.

Ouch!

Have you done ANYTHING to md0 or the underlying devices since then??

I believe that md0 uses an on-disk block for tracking meta information, but that essentially the underlying partitions are simply kept in sync as normal filesystems are. Translation: you can directly mount the underlying filesystem. If they're not too far out of sync, you can just fsck and mount. If the fsck fails at first because md0 blasted the superblock (unlikely), you can just point fsck at a backup superblock.
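
If you want to look before touching anything, a read-only peek at the start of one member should show whether any recognizable signature is still there (sketch; /dev/sdb is taken from your description):

    # Read-only inspection of one RAID1 member -- nothing here writes to disk
    blkid /dev/sdb
    dd if=/dev/sdb bs=512 count=8 2>/dev/null | hexdump -C | head -n 20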

No, I haven't done anything to it.

But how exactly do I mount LVM volumes without them being in a volume group? And does mounting change anything on the underlying devices, along the lines of what you asked?

No, I don't think so, unless you also striped across partitions. But I'm far from certain. The safe thing to do is to go to the physical partitions and try to fsck them with -n:

fsck -n /dev/hd3a

or whatever. Keep trying different superblocks until you get a hit. Also try LinuxQuestions.org, but make sure you direct your question to an advanced forum.
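
For ext2/3, a sketch of what I mean (this assumes an ext-based filesystem, which may not be what you actually have):

    # ext2/3 only: list the backup superblock locations, then point fsck at
    # one of them, read-only. /dev/sdb1 is just a placeholder device name.
    dumpe2fs /dev/sdb1 | grep -i superblock
    e2fsck -n -b 32768 /dev/sdb1    # 32768 is a common backup location on 4K-block filesystems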

I've made some progress. Here's a summary of my steps so far:

# Searched for lvm config

sudo dd if=/dev/sdb bs=512 count=255 skip=1 of=/temp.txt 
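
# (Sketch of how the dump can be searched: the LVM metadata area stores the VG
#  description as plain text; "data-raid" is my volume group name)

grep -a -n "data-raid {" /temp.txt
strings /temp.txt | less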

# Filtered the output, saved the file to /etc/lvm/backup/data-raid
# Extracted the UUID of pv0 (/dev/md0) and ran

pvcreate -ff -v -u JgOakP-gVVs-SfhX-lEXi-x4fQ-3gXz-3d17FN /dev/md0 

# Restored lvm w/

vgcfgrestore -f /etc/lvm/backup/data-raid data-raid  

# Imported the volume group with

vgimport data-raid 

# Brought it up

vgchange -ay data-raid  
# ls -l /dev/mapper
total 0
crw-rw---- 1 root root  10, 60 2008-10-11 20:16 control
brw-rw---- 1 root disk 253,  7 2008-10-19 10:12 data--raid-user
brw-rw---- 1 root disk 253,  4 2008-10-19 10:12 data--raid-media
brw-rw---- 1 root disk 253,  3 2008-10-19 10:12 data--raid-media-real
brw-rw---- 1 root disk 253,  6 2008-10-19 10:12 data--raid-snap--media
brw-rw---- 1 root disk 253,  5 2008-10-19 10:12 data--raid-snap--media-cow
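
# (A read-only look at what is actually on each LV is a useful sanity check
#  before mounting; sketch, using the device names from the listing above)

blkid /dev/mapper/data--raid-media
file -s /dev/mapper/data--raid-media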

# Tried mounting w/ following fstab conf

/dev/mapper/data--raid-media    /media/Pictures    reiserfs    defaults,noatime    0    0

it failed with

[630884.179103] ReiserFS: dm-3: warning: sh-2021: reiserfs_fill_super: can not find reiserfs on dm-3

There are supposed to be two volumes with data: data--raid-media, a ReiserFS partition, and data--raid-user, a LUKS-encrypted one, plus a snapshot volume of the media partition.

Any ideas what could have gone wrong?

Okay, well, it would have helped if you had mentioned at the start that you were using ReiserFS. Second, you completely ignored the gist of my instructions. It's possible that by re-creating the metadata, you corrupted the existing partition. Hopefully another expert can help you here.

Or, (and I say this tongue-in-cheek), you could write to Hans Reiser in jail asking him to help you. I'm sure he has plenty of time.

All is not lost. I can just copy the backup drive onto my testing drive again.

As I understand it, fsck doesn't fix ReiserFS, does it?

There is a ReiserFS fsck. I think fsck has the ability to find it (fsck.reiserfs or something), but I'm not sure. I also found it not to be as good at recovery as ext3's fsck. That conclusion is based on warning messages and the amount of self-reported diagnostics indicating data was lost.
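
A sketch, in case it helps (reiserfsprogs; --check is the read-only mode, and the device name is taken from your earlier listing):

    # Read-only consistency check with reiserfsprogs
    reiserfsck --check /dev/mapper/data--raid-media
    # or via the fsck front end
    fsck.reiserfs --check /dev/mapper/data--raid-media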

As far as I'm concerned, the biggest advantage of reiserfs -- directory filename hashing -- was lost when ext2/3 added the -O dir_index feature. I suppose some still consider "tail packing" very interesting, but with drives as large, fast, and cheap as they are these days, this benefit seems to have been marginalized.
My $0.02.
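
For what it's worth, turning that on for an existing ext3 filesystem looks roughly like this (sketch; the device name is a placeholder, and the filesystem should be unmounted):

    # Enable directory indexing on an existing ext2/3 filesystem, then rebuild
    # the indexes for directories that already exist
    tune2fs -O dir_index /dev/sdXn
    e2fsck -fD /dev/sdXn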