Corrupted Hard Drive

I am running FC-7 which I realize is an older distro. But my question would apply to any distro.

I ran fsck on my mounted file system (I know, I shouldn't have). Now it won't boot. I get a kernel panic message.

I booted a Knoppix live CD.

The desktop icon shows /dev/sda2 mounted at /media/sda2. When I perform ls -l on /media/sda2 it shows total 0.

When I perform dumpe2fs /dev/sda2 I get this message:
dumpe2fs 1.40-WIP (14-Nov-2006)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda2
Couldn't find valid filesystem superblock.

Is there any chance of recovering data from the drive or am I out of luck?

fdisk -l yields these results:
root@Knoppix:/ramdisk/home/knoppix# fdisk -l /dev/sda1

Disk /dev/sda1: 106 MB, 106896384 bytes
255 heads, 63 sectors/track, 12 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda1 doesn't contain a valid partition table
root@Knoppix:/ramdisk/home/knoppix# fdisk -l /dev/sda2

Disk /dev/sda2: 79.9 GB, 79925045760 bytes
255 heads, 63 sectors/track, 9717 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sda2 doesn't contain a valid partition table

pvdisplay, vgdisplay, and lvdisplay yield these results:

root@Knoppix:/ramdisk/home/knoppix# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 74.41 GB / not usable 0
Allocatable yes
PE Size (KByte) 32768
Total PE 2381
Free PE 1
Allocated PE 2380
PV UUID 34v31h-rQS5-Lzjd-WuQC-F1EA-DnE2-Sj1ugr

root@Knoppix:/ramdisk/home/knoppix# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 74.41 GB
PE Size 32.00 MB
Total PE 2381
Alloc PE / Size 2380 / 74.38 GB
Free PE / Size 1 / 32.00 MB
VG UUID dhzswy-nDmJ-l32M-bTwJ-5tue-Qn1k-Ie9aad

root@Knoppix:/ramdisk/home/knoppix# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID fhLm4I-YxGM-uPrp-p54E-L0BZ-6RB1-4p9njg
LV Write Access read/write
LV Status NOT available
LV Size 72.44 GB
Current LE 2318
Segments 1
Allocation inherit
Read ahead sectors 0

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID eFHGdu-0lUm-x2qa-vlJD-hayB-20ZT-wbJfYl
LV Write Access read/write
LV Status NOT available
LV Size 1.94 GB
Current LE 62
Segments 1
Allocation inherit
Read ahead sectors 0

Any suggestion or help would be appreciated. Thanks in advance.

Before we get too carried away, you're using fdisk wrong... sda1 shouldn't contain a partition table. Try fdisk -l /dev/sda.

Corona688, thanks for the prompt reply. Here is result of fdisk:
root@Knoppix:~# fdisk -l /dev/sda
Disk /dev/sda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9730 78051802+ 8e Linux LVM
root@Knoppix:~#

Also, this is the message I see when trying to boot FC-7:
Uncompressing Linux... OK, booting the kernel
Red Hat nash version 6.0.9 starting
Reading all physical volumes. This may take a while
Found volume group "VolGroup00" using metadata type lvm2
2 logical volume(s) in volume group "VolGroup00" now active
VFS: Can't find ext3 filesystem on dev dm-0
mount: error mounting /dev/root on /sysroot as ext3: Invalid argument
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!

And then the system hangs.

If I interrupt the boot sequence I am able to get to a GRUB command
line by selecting "e", "a", or "o".

Your situation sounds all too familiar unfortunately. Were you able to fix the partition tables and save your data?

The kernel panic is caused by the nash script interpreter, which is part of the initramfs (switchroot, etc. are nash subcommands), being unable to mount the file systems.

Of more interest is the fact that both logical volumes are reported as not available in the lvdisplay output:

...
LV Status NOT available
...

If the logical volumes are not available for some reason, then the filesystems on these logical volumes are not available, and thus cannot be mounted.
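Before assuming the worst, it may be worth simply activating the volume group from the live CD and retrying; Knoppix does not necessarily activate logical volumes automatically. A minimal sketch, using the group and volume names from the output above:

```shell
# Activate all logical volumes in VolGroup00 (must be run as root).
vgchange -ay VolGroup00

# The device nodes for the logical volumes should now exist.
ls -l /dev/VolGroup00/

# Probe the filesystem on the root LV, not on /dev/sda2 itself --
# sda2 is an LVM physical volume, so dumpe2fs against it is expected
# to report a bad superblock magic.
dumpe2fs -h /dev/VolGroup00/LogVol00
```

If dumpe2fs succeeds against the logical volume, the filesystem is intact and the earlier error was just a symptom of probing the wrong device.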

One possibility is that the LVM metadata is corrupt. LVM keeps metadata backups in /etc/lvm/backup and /etc/lvm/archive unless you have turned off its auto-backup feature. Try restoring the metadata with vgcfgrestore; this should work if the damage is minor.

If the damage is major, the physical volume will not be recognized at all. In that case you first need to restore the UUID on the missing device: compare the output of pvscan with cat /proc/partitions, and re-label the device with the UUID that pvscan reports. You can then use vgcfgrestore to restore the LVM metadata.

If either of these methods works, make sure to fsck the filesystem(s) before mounting them.
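The recovery path above might look roughly like this. This is only a sketch: it assumes you can reach a copy of the installed system's /etc/lvm/backup/VolGroup00 file from the live CD, and it reuses the PV UUID shown in the pvdisplay output earlier in the thread. Verify every name against your own system before running anything.

```shell
# Minor metadata damage: restore directly from the automatic backup.
vgcfgrestore -f /etc/lvm/backup/VolGroup00 VolGroup00

# Major damage (PV no longer recognized): re-write the PV label first,
# reusing the UUID reported by pvscan / pvdisplay for /dev/sda2,
# then restore the metadata.
pvcreate --uuid 34v31h-rQS5-Lzjd-WuQC-F1EA-DnE2-Sj1ugr \
         --restorefile /etc/lvm/backup/VolGroup00 /dev/sda2
vgcfgrestore -f /etc/lvm/backup/VolGroup00 VolGroup00

# Activate the group and check the filesystems BEFORE mounting.
vgchange -ay VolGroup00
e2fsck -f /dev/VolGroup00/LogVol00
e2fsck -f /dev/VolGroup00/LogVol01
```

Note that pvcreate re-writes the physical volume label, so it should only be run with the exact UUID and a known-good restore file; done wrong, it makes recovery harder rather than easier.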