Setting up bootblock on RAID 0 SVM

Hi All!

I'm running Solaris 10 on SPARC and using Solaris Volume Manager (SVM).

Let me give a bit of background before asking my question:

I have created a RAID 0 (stripe) across 2 disks. The OS runs on a third disk, and I have now performed a ufsdump / ufsrestore from that third disk onto the RAID 0 metadevice (/dev/md/dsk/d10).

From what I understand, ufsdump / ufsrestore does NOT carry over the bootblock. That's fine and dandy on a single drive, but when I try to install the bootblock on the RAID 0 I get the following:

# /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/md/dsk/d10
/dev/md/dsk/d10: Not a character device

I know I still need to run metaroot but that is for mounting the disk AFTER I boot the system (is this the correct logic?)

If anyone could point me in the right direction it would be greatly appreciated; I searched the forums and Google but I can't seem to find anything for a RAID 0.

I noticed that with a RAID 1 you are supposed to set up the bootblock BEFORE you set up the mirror, but how do you set up a RAID 0 with a bootblock?

Thanks so much everyone!

You need to install the bootblock on the character (raw) device, not the block device.

And you need to run metaroot before booting from the metadevice; it places the entries for the root metadevice in the /etc/vfstab and /etc/system files.
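For reference, after a successful metaroot the two files typically end up with entries along these lines (a sketch; the d10 name is from this thread, the other fields are illustrative):

```
# /etc/vfstab -- root line rewritten by metaroot
/dev/md/dsk/d10   /dev/md/rdsk/d10   /   ufs   1   no   -

# /etc/system -- line appended by metaroot
rootdev:/pseudo/md@0:0,10,blk
```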

You need to run it on the /dev/md/rdsk/d10 device.
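In other words, the same installboot command should work once it points at the raw metadevice (a sketch for Solaris/SPARC, so untested here):

```sh
# install the UFS bootblock on the raw (character) metadevice
/usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/md/rdsk/d10
```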

Thanks a bunch!
That was a simple fix :slight_smile: I should've thought to try that ><

I was hoping I could get assistance with the metaroot cmd

When I run metaroot, I get the following error

# metaroot -n d10
metaroot: Stripe d10 has more than 1 slice

Here is some info on my RAID 0 setup

Here's the partition table of one of the disks

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm     413 - 14075       66.30GB    (13663/0/0) 139034688
  1       swap    wu       0 -   412        2.00GB    (413/0/0)     4202688
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm   14076 - 14086       54.66MB    (11/0/0)       111936
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0

# metastat
d10: Concat/Stripe
    Size: 278069376 blocks (132 GB)
    Stripe 0: (interlace: 32 blocks)
        Device     Start Block  Dbase   Reloc
        c0t0d0s0          0     No      Yes
        c0t2d0s0          0     No      Yes

Device Relocation Information:
Device   Reloc  Device ID
c0t0d0   Yes    id1,sd@SFUJITSU_MAW3073NCSUN72G_000633B0G4UE____DAN0P680G4UE
c0t2d0   Yes    id1,sd@SFUJITSU_MAW3073NCSUN72G_000626B0F330____DAN0P660F330

# prtvtoc /dev/md/dsk/d10
* /dev/md/dsk/d10 partition map
*
* Dimensions:
*     512 bytes/sector
*     424 sectors/track
*      24 tracks/cylinder
*   10176 sectors/cylinder
*   27327 cylinders
*   27327 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*           0     10176     10175
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      0    00      10176 278069376 278079551

Any ideas?

You cannot boot from a software-implemented RAID 0, because the bootblock has to be read before the operating system and the md drivers are loaded. metaroot's error message says it all:

Stripe d10 has more than 1 slice

Thanks hergp

I found this URL on sun docs for anyone out there having the same issue

2. Metadevices (Solstice DiskSuite 4.2.1 Reference Guide) - Sun Microsystems

There are three kinds of simple metadevices: concatenated metadevices, striped metadevices, and concatenated striped metadevices

You can use a simple metadevice containing multiple slices for any file system except the following:

  • Root (/)
  • /usr
  • swap
  • /var
  • /opt
  • Any file system accessed during an operating system upgrade or installation

hergp can you help me understand why a RAID 1 works as opposed to a RAID 0?


Can anyone provide any insight?

I understand that you cannot use metaroot to edit the /etc/system and /etc/vfstab if your stripe spans more than 1 slice

but doesn't a 2-way or 3-way RAID 1 setup span more than 1 slice?

I only have enough drives to test a one-way RAID 1 (a single submirror), which works.


I've done some more reading and I gave some more thought to how metaroot works in a RAID 1 setup (mirror)

When setting up a mirror, you run metaroot on a metadevice built from a single slice BEFORE attaching the second submirror.

When setting up a striped metadevice (RAID 0), you HAVE to build the metadevice across at least 2 slices, which then prevents metaroot from working (as it will only work on a single-slice metadevice).
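For reference, the mirrored-root procedure I'm describing looks roughly like this (a sketch; the d11/d12/d10 names and the slice names are examples, not from a tested system):

```sh
# one-slice concats to serve as submirrors
metainit -f d11 1 1 c0t0d0s0
metainit d12 1 1 c0t2d0s0

# create the mirror with only the first submirror attached
metainit d10 -m d11

# rewrite /etc/vfstab and /etc/system for the root metadevice
metaroot d10

# reboot, then attach the second submirror to start the sync
metattach d10 d12
```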

Does this theory make sense?

Keepcase,

in a RAID 1 (no matter how many disks you have in it), every subdisk (disk slice) is identical to the others and to the RAID set itself. Therefore every subdisk in a RAID 1 contains the full information of the file system built on top of the RAID set. As long as you do not update the file system (mount it read/write), you can use a subdisk instead of the RAID set without messing things up.

What happens when the system boots (from a ufs root filesystem) is this:
The OBP reads the bootblock from one of the disks (which disk is configured in the OBP); the bootblock then loads a secondary boot loader called ufsboot, which loads the kernel and essential drivers. Later, when the kernel has enough information to understand mirrored disks, it switches from the subdisk to the mirrored logical volume for the root file system.

When the root file system is scattered over more than one disk, none of this is possible (at least as long as the OBP has no built-in means of understanding logical volumes).
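This is also why a mirrored root is usually made bootable from either half: install the bootblock on both underlying slices and give the OBP an alias for each. A sketch (the device paths below are illustrative, not from this system):

```
ok nvalias rootdisk   /pci@1f,4000/scsi@3/disk@0,0:a
ok nvalias rootmirror /pci@1f,4000/scsi@3/disk@2,0:a
ok setenv boot-device rootdisk rootmirror
```

With that, if the primary half fails, the OBP falls back to booting from the other submirror.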

I hope this helps to shed some light on the boot process with logical volumes.

Thank you hergp!

That makes a lot of sense :slight_smile: