Best software RAID for a 2-disk root server

Hi, we have a root server with 2 disks, 750 GB each, running CentOS 5.1.

We would like to have 4 mirrored data filesystems and 1 mirrored boot filesystem.

What do you think is the best solution? I would like to use Linux LVM for a RAID 1 mirror, but I was told the system won't boot in case of a disk failure.

Then I'd need a mirrored /boot, which must be outside LVM.

Another option is building a Linux software RAID (md) and using that device for LVM.

What would you do?

cheers funksen

My suggestion: use both disks for RAID 1 (really the full disks, not partitions). Create 2 partitions on the mirrored device: one for /boot (about 20M should be enough) and one for exclusive use by LVM, then create all other filesystems as lvols using LVM. That way it's still bootable even if the kernel doesn't know about the RAID yet and the driver is loaded later on.
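A rough sketch of that layout, assuming the disks are /dev/sda and /dev/sdb (device, VG, and LV names here are illustrative, not from the thread; on CentOS 5 the array has to be created as a partitionable md device to carve partitions out of it):

```shell
# Mirror the whole disks (not partitions) into one partitionable array
mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Two partitions on the mirror: a small /boot and one big LVM PV
# (20 MB per this post; later replies argue for 100 MB or more)
sfdisk -uM /dev/md_d0 <<'EOF'
,20,L
,,8e
EOF

# Everything else becomes lvols inside LVM
pvcreate /dev/md_d0p2
vgcreate rootvg /dev/md_d0p2      # "rootvg" is an assumed name
lvcreate -n rootlv -L 10G rootvg  # one example lvol; size assumed
```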

I would disagree with pludi,

RH even says about 70MB is the minimum these days for /boot, as you may have multiple kernels in there, and 20MB would make that impossible. I would suggest at least 100-200MB for /boot; if you don't use all of it on a 750GB drive, so be it.
This is what I did on my server (however, I run SUSE):

/boot is /dev/md0 (you will have to put a grub entry for both disk paths and install grub on both disks)

/dev/md1 is the rest of the drive, controlled by LVM.

/dev/md3 is a RAID 10(1E) on 5 drives.

If you like, here is what I might suggest:

/boot 150MB /dev/md0 RAID 1
(grub on both disks, entries in both menu.lst)

/ 5-10GB /dev/md1 RAID 1

<rest> LVM /dev/md2 RAID 10
(yes, you can do this with 2 drives)
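The layout above could be sketched with mdadm like this, assuming the disks are /dev/sda and /dev/sdb and the partitions are already cut to the sizes listed (names are illustrative; Linux md supports --level=10 on just 2 devices):

```shell
# /boot as a plain RAID 1 so grub can read it
mdadm --create /dev/md0 --level=1  --raid-devices=2 /dev/sda1 /dev/sdb1
# / as RAID 1
mdadm --create /dev/md1 --level=1  --raid-devices=2 /dev/sda2 /dev/sdb2
# the rest as Linux RAID 10 (works with 2 drives), handed to LVM
mdadm --create /dev/md2 --level=10 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md2
vgcreate datavg /dev/md2          # "datavg" is an assumed name

# Install grub on both disks so either one can still boot alone
grub-install /dev/sda
grub-install /dev/sdb
```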

As for your data, I would also suggest RAID 10; it will increase performance without losing reliability. If you simply mirror your data drives, you still have multiple copies. Linux RAID 10 is closer to RAID 1E, and I would advise against doing a RAID of RAIDs on your devices: you can do it in one fell swoop, and a double-RAIDed device can accidentally be set up as a mirror of stripes (lower fault tolerance) rather than a stripe of mirrors.

Hi,

thanks for your replies; both are very helpful.

I finally chose md0, 100MB, for /boot,

and the rest, md1, ~700GB, controlled by LVM with 4 logical volumes.

I feel more familiar with RAID 1 than RAID 5, don't know why :slight_smile:

Since I'm coming from the AIX corner, I don't understand the LVM setup on top of RAID 1.

LVM even on Linux is able to mirror a disk, so isn't there a way to turn quorum off and go with

md0 /boot 100mb

rest: /dev/sda1 and /dev/sdb1 used with LVM to mirror the LVs? Then you would have one layer less, and one point of failure less, or am I wrong?
Is the only problem that Linux won't boot with a disk missing?
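An LVM-only mirror like you're describing might be sketched as below; the VG/LV names are assumptions, and on CentOS 5's LVM2 the --corelog option keeps the mirror log in memory so a third device isn't needed for it (the exact option spelling may vary by LVM2 version):

```shell
# No md layer: mirror directly between the two data partitions
pvcreate /dev/sda1 /dev/sdb1
vgcreate vg0 /dev/sda1 /dev/sdb1
# -m 1 = keep one extra mirror copy of the LV
lvcreate -m 1 --corelog -n datalv -L 100G vg0
```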

Since the server is already set up, I won't change it, but I'm asking for future systems :slight_smile:

cheers funksen

I don't think anybody suggested RAID 5. The small-write performance hit on RAID 5, especially in software, is a problem, depending on the disks, queue depths, and implementation.

However, RAID 10 will give you the best performance in that scenario. With RAID 1, the best you can hope for is to make 2 mirrors and concatenate them, which introduces a possible point of failure. RAID 10 solves that problem for you.

RAID 5 is not the same as RAID 10: RAID 5 is simply striping with parity, not using a single parity drive but spreading the XOR data across the drives, so that each write involves writing parity information. In a degraded state, the calculations needed to recreate the lost data from parity can be pretty significant.
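As a toy illustration of that XOR parity (single bytes standing in for real stripe units): the parity is the XOR of the data blocks, and XORing the parity with the surviving block rebuilds the lost one.

```shell
a=170                    # 0xAA, data block on drive 1
b=85                     # 0x55, data block on drive 2
p=$(( a ^ b ))           # parity block, written on drive 3
echo "parity: $p"        # prints 255 (0xFF)

# drive 1 fails: rebuild its block from parity and the surviving block
recovered=$(( p ^ b ))
echo "recovered: $recovered"   # prints 170, the lost block
```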

In RAID 10, recovery is simply reading from a complete copy. However, you do not get the ability to grow the RAID set like you can with RAID 5 (though growing RAID 5 decreases fault tolerance with each additional drive, while increasing performance).