Let me give a bit of background before asking my question:
I have created a RAID 0 (stripe) across 2 disks, the OS is running on a third disk, and I have now performed a ufsdump / ufsrestore from that third disk onto the RAID 0 metadevice (/dev/md/dsk/d10).
From what I understand, ufsrestore does NOT carry over the bootblock. That's fine and dandy on a single drive, but when I try to install the bootblock on the RAID 0 I get the following:
# /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/md/dsk/d10
/dev/md/dsk/d10: Not a character device
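As an aside, that particular message is just because installboot expects the raw (character) device node rather than the block device, so the invocation would be (same metadevice, rdsk instead of dsk):

```shell
# installboot needs the raw device under /dev/md/rdsk, not /dev/md/dsk
/usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/md/rdsk/d10
```

This only fixes the error message, though; it does not make a striped metadevice bootable.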
I know I still need to run metaroot, but that is for mounting the disk AFTER the system boots (is this the correct logic?)
If anyone could point me in the right direction it would be greatly appreciated. I searched the forums and Google but I can't seem to find anything for a RAID 0.
I noticed that with a RAID 1 you are supposed to install the bootblock BEFORE you set up the mirror, but how do you set up a RAID 0 with a bootblock?
You cannot boot from a software-implemented RAID 0, because the bootblock has to be read before the operating system and the md drivers are loaded. The documentation quoted by metaroot's error message says it all:
There are three kinds of simple metadevices: concatenated metadevices, striped metadevices, and concatenated striped metadevices
You can use a simple metadevice containing multiple slices for any file system except the following:
Root (/)
/usr
swap
/var
/opt
Any file system accessed during an operating system upgrade or installation
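For context, a two-slice stripe like the d10 in question would have been created with something along these lines (the slice names are assumptions for illustration), and it is exactly this kind of multi-slice simple metadevice that the list above rules out for root:

```shell
# create d10 as 1 stripe of 2 slices (RAID 0) -- cannot hold the root filesystem
metainit d10 1 2 c1t0d0s0 c1t1d0s0
```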
hergp, can you help me understand why a RAID 1 works as opposed to a RAID 0?
---------- Post updated at 04:06 PM ---------- Previous update was at 03:27 PM ----------
Can anyone provide any insight?
I understand that you cannot use metaroot to edit /etc/system and /etc/vfstab if your stripe spans more than one slice,
but doesn't a 2-way or 3-way RAID 1 setup also span more than one slice?
I only have enough drives to test a 1-way RAID 1, which works.
---------- Post updated at 04:30 PM ---------- Previous update was at 04:06 PM ----------
I've done some more reading and gave some more thought to how metaroot works in a RAID 1 (mirror) setup.
When setting up a mirror, you create the mirror with a single one-slice submirror and run metaroot BEFORE attaching the second submirror.
When setting up a striped metadevice (RAID 0), you HAVE to build the metadevice across at least 2 slices, which prevents metaroot from working (it only works on a metadevice built from a single slice).
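The mirror-root sequence I'm describing can be sketched like this (the slice names c0t0d0s0 and c0t1d0s0 are assumptions; adapt them to your layout):

```shell
# submirror from the existing, bootable root slice (-f forces use of a mounted slice)
metainit -f d11 1 1 c0t0d0s0
# submirror on the second disk
metainit d12 1 1 c0t1d0s0
# create the mirror with only the first submirror attached
metainit d10 -m d11
# metaroot updates /etc/vfstab and /etc/system to use d10 for root
metaroot d10
# flush the filesystem, reboot, then attach the second side
lockfs -fa
# (reboot here)
metattach d10 d12
```

Note that metaroot is run while the mirror still consists of a single slice, which is exactly why the same trick cannot work for a stripe.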
In a RAID 1 (no matter how many disks are in it), every subdisk (disk slice) is identical to the others and to the RAID set itself. Therefore every subdisk in a RAID 1 has the full information of the filesystem built on top of the RAID set. As long as you do not update the filesystem (mount it read/write), you can use a subdisk instead of the RAID set without messing things up.
What happens when the system boots (from a ufs root filesystem) is this:
The OBP reads the bootblock from one of the disks (which disk is configured in the OBP); the bootblock then loads a secondary boot loader called ufsboot, which loads the kernel and essential drivers. Later, once the kernel has enough information to understand mirrored disks, it switches from the plain subdisk to the mirrored logical volume for the root filesystem.
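This is also why, with a mirrored root, you install the bootblock on the raw slice underneath each submirror, so the OBP can boot from either disk. A sketch, assuming the same illustrative slice names as above:

```shell
# install the bootblock on each submirror's underlying raw slice
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
```

At the OBP prompt you can then point boot-device at either disk (or define an alias for each) and the system will come up from that side of the mirror.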
When the root filesystem is scattered over more than one disk, none of this is possible (at least as long as the OBP has no built-in means of understanding logical volumes).
I hope this helps to shed some light on the boot process with logical volumes.