Remove /dev/sdb partition using fdisk - BY ACCIDENT!

Hello everyone -

Please forgive me if I violate the forum's etiquette, as this is my very first post. I'm posting this in both the dummies and the advanced sections in the hope of getting any responses.

I stumbled on this forum while frantically looking for an answer to a dumb, ignorant thing I did today.

I inherited a system, and while doing some exploratory tasks, I used the "fdisk /dev/sdb" command to look inside the disk /dev/sdb. While there, I accidentally hit 'd', Enter, 'w' ... so I've effectively wiped out the partition table on /dev/sdb?

Here's what I found:
(1) /dev/sdb is a stack of four drives belonging to a hardware RAID5 setup.

(2) The /etc/fstab file has this labeled as "home" and mounted as /home on the system.

(3) It looks like users can still access /home for read/write, and they continue to log in and out.

And so, my question to all the gurus: what damage did my fdisk mistake actually do?

It will probably be gone after the next reboot. RAID complicates matters, but you might just be lucky enough to recover it, if you can figure out what numbers to enter when you create a new partition with exactly the same parameters as the one you deleted.
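If the old layout was simply one partition covering the whole device, which is common for a dedicated /home array, recreating it can be as simple as re-running fdisk and re-entering the same numbers. A rough sketch only, with placeholder values and an assumed partition number - don't write anything until you're confident the numbers match the old table:

    fdisk /dev/sdb
      n          # new partition
      p          # primary
      1          # partition number (assuming it was partition 1)
      <start>    # first cylinder/sector - must match the original
      <end>      # last cylinder/sector - must match the original
      t          # set the type code back to what it was (e.g. 83 for Linux)
      w          # write, only once you are sure of the numbers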

There are also tools which can help you make an educated guess. TestDisk - CGSecurity is one I personally had some success with under similar circumstances.
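If you go the TestDisk route, the basic use is just to point it at the device and let it scan; it's menu-driven from there. Review what it finds before letting it write anything:

    testdisk /dev/sdb
      # Analyse -> Quick Search (then Deeper Search if needed)
      # inspect the partitions it reports before choosing Write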

This is Linux, right? It makes a lot of sense to include platform information for this sort of question.

Backups?

An emergency maintenance window should be declared immediately to determine the damage. Worst case: users will not be able to commit data to disk, so they are losing a day's production and all of their previous data as well.
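A quick, read-only way to size up the damage is to compare what the kernel still has in memory with what is now actually on the disk. Something along these lines, none of which writes anything:

    cat /proc/partitions    # the kernel's in-memory view, still intact while /home is mounted
    fdisk -l /dev/sdb       # the on-disk table, which the 'd' + 'w' will have emptied
    mount | grep home       # confirm /home is still mounted from the old, in-memory table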

Sorry, I was in sort of a panic mode and forgot other details. This is a server running CentOS 4.5. The system partition is on a separate RAID1 set identified as /dev/sda; the one I messed up was /dev/sdb, and it contains all the users' home directories. As of now, from the users' side, I don't think my mistake has affected anyone. But for how long, I really don't know.

I will take a look at TestDisk and would really appreciate all advice and suggestions.

K.

My experience on Debian/Ubuntu suggests that the kernel will keep using the partition table it read at boot for a drive that is mounted, but of course, if you unmount it, remounting will probably not work until the partition table is sane again. Certainly ramen_noodle's advice is sound.
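One thing worth doing before any reboot or unmount: grab the partition's start and size while the kernel still remembers them, since those are exactly the numbers you would need to recreate the table by hand. On a 2.6 kernel something like this should work (assuming the partition was sdb1; exact sysfs paths and tool availability may vary):

    cat /sys/block/sdb/sdb1/start    # starting sector of the old partition
    cat /sys/block/sdb/sdb1/size     # its length, in 512-byte sectors
    blockdev --report /dev/sdb1      # same information from userspace, if available

Copy those numbers somewhere off the machine.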

Things are still running as of today, so I'm still puzzled about the extent of the damage. As root, I ran a "cp -a" command and copied the entire 1.7+ TB out to another attached RAID5 set.
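(For the record, it was along the lines of

    cp -a /home /mnt/spare-raid/

with /mnt/spare-raid standing in for wherever the other RAID5 set is actually mounted; I'm paraphrasing the paths.)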

Because this is a hardware RAID set, can I assume that the OS just sees the set as one single disk drive? In the past, when I had a server with a corrupted system drive, I could just physically unplug the data drive and attach it to another running server, then mount it on /mnt, and all the data would be there. I'm wondering if I can treat this RAID set the same way.
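(By "mount it on /mnt" I mean the usual thing on the second server, something like

    mount /dev/sdc1 /mnt

with the device name depending on where the drive shows up over there.)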

Many thanks for your kind responses.