I would really appreciate help on recovering some data.
I have a RAID1 array at /dev/md0, composed of /dev/sdb and /dev/sdc. I had LVM on top of the RAID with three LVs, the last of which was a snapshot volume.
My / is on /dev/sda. Recently I reinstalled the system (Debian lenny) and reconfigured /dev/sda with LVM.
When I booted, the system correctly detected the RAID array and even activated it. All I had to do was import /dev/md0 back into the system.
However, out of stupidity, I ran:
# sudo pvcreate -v /dev/md0
Set up physical volume for "/dev/md0" with 488396928 available sectors
Zeroing start of device /dev/md0
Physical volume "/dev/md0" successfully created
As the output says, it zeroed the start of the device; the data itself has not been affected. But now I don't know how to access the LVs on /dev/md0.
Does anyone have any suggestions on how to recover the information about the lvm volumes on the disk?
I do not have the backup file of the volume group on the raid, as I just reinstalled the system over it. Otherwise I would have just followed the instructions here.
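One thing worth knowing before giving up on the metadata: LVM2 writes its volume group metadata as plain ASCII text near the start of the PV, and pvcreate's "Zeroing start of device" step only wipes the first few sectors, so older metadata copies often survive further in and can be found by scanning the device head. Here's a toy sketch of the idea using a scratch image file in place of /dev/md0 (the file paths, offsets, and the VG name "data-raid" are all my guesses, not taken from your system):

```shell
# Scratch image standing in for /dev/md0 (hypothetical fixture).
dd if=/dev/zero of=/tmp/pv.img bs=1M count=4 2>/dev/null
# Plant "old" VG metadata text at 4 KiB, roughly where LVM2 keeps
# its metadata area (VG name "data-raid" is made up for the demo).
printf 'data-raid {\nid = "xxxx-xxxx"\nseqno = 7\n}\n' |
  dd of=/tmp/pv.img bs=1 seek=4096 conv=notrunc 2>/dev/null
# pvcreate's "Zeroing start of device" wipes only the first sectors:
dd if=/dev/zero of=/tmp/pv.img bs=512 count=4 conv=notrunc 2>/dev/null
# The old metadata text survives and can still be grepped out:
grep -a -o 'data-raid {' /tmp/pv.img   # → data-raid {
```

On the real device you would dd the first megabyte of /dev/md0 to a file and scan it with strings(1) or grep -a for the old VG name. If you do find an intact metadata copy, the usual route (check the man pages, I'm hedging here) is to save that text to a file and use pvcreate with --uuid and --restorefile, followed by vgcfgrestore, to put the old metadata back without touching the data area.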
Have you done ANYTHING to md0 or the underlying devices since then??
I believe md uses an on-disk block for tracking meta information, but essentially the underlying partitions are simply kept in sync as normal filesystems. Translation: you can mount the underlying filesystems directly. If they're not too far out of sync, you can just fsck and mount them. If the fsck fails at first because md blasted the superblock (unlikely), you can point fsck at a backup superblock.
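For what it's worth, the reason reading a RAID1 member directly can work is that (assuming the old 0.90 metadata format, which I'm guessing at here) the md superblock sits in the last 64 KiB of each member, so the start of a member is byte-for-byte the start of the filesystem. A toy sketch with plain files standing in for real devices (all file names are made up):

```shell
# Pretend /tmp/fs.img is the filesystem and /tmp/member.img a RAID1
# member; on a real box the member would be e.g. /dev/sdb.
printf 'FILESYSTEM-HEADER' > /tmp/fs.img
truncate -s 1M /tmp/fs.img                 # pad to a "device" size
cp /tmp/fs.img /tmp/member.img
printf 'MD-SUPERBLOCK' >> /tmp/member.img  # md 0.90 metadata goes at the END
# The member's first 1 MiB is identical to the filesystem's:
head -c 1048576 /tmp/member.img > /tmp/member-head.img
cmp /tmp/fs.img /tmp/member-head.img && echo "fs start intact"
```

This is why a read-only fsck or mount against the member itself has a chance of succeeding even when the array's assembled view is in doubt.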
No, I don't think so, unless you also striped across partitions. But I'm far from certain. The safe thing to do is go to the physical partitions and try to fsck them with -n:
fsck -n /dev/hd3a
or whatever. Keep trying different superblocks until you get a hit. Also try LinuxQuestions.org, but make sure you direct your question to an advanced forum.
[630884.179103] ReiserFS: dm-3: warning: sh-2021: reiserfs_fill_super: can not find reiserfs on dm-3
There are supposed to be two volumes with data: data--raid-media, a ReiserFS partition, and data--raid-user, a LUKS-encrypted one, plus a snapshot volume of the media partition.
Okay, well it would have helped if you had said at the start that you were using ReiserFS. Second, you completely ignored the gist of my instructions. It's possible that by re-creating the metadata you corrupted the existing partition. Hopefully another expert can help you here.
Or, (and I say this tongue-in-cheek), you could write to Hans Reiser in jail asking him to help you. I'm sure he has plenty of time.
There is a ReiserFS fsck, reiserfsck, which the fsck wrapper invokes as fsck.reiserfs. I also found it to be not as good at recovery as ext3's; that conclusion is based on warning messages and the amount of self-reported diagnostics indicating data was lost.
As far as I'm concerned, the biggest advantage of reiserfs -- directory filename hashing -- was lost when ext2/3 added the -O dir_index feature. I suppose some still consider "tail packing" very interesting, but with drives as large, fast, and cheap as they are these days, this benefit seems to have been marginalized.
My $0.02.