SPARC T4-1/Solaris 11/Add 2 new HDDs in RAID 0 configuration

Hi,

A couple of sentences of background: I'm a software developer whose task was to create server software for our customer. The software is ready for deployment and the customer has a new SPARC T4-1, but somehow it also became my task to set up the server. I have managed to get the server up and running (Solaris was pre-installed), and the software works on it.
The server is a SPARC T4-1 running Solaris 11. There are 2 HDDs in the system now (in the HDD0 and HDD4 slots), and we would like to add another two in a RAID 0 (striped) configuration for better performance.
I'm fairly proficient in Linux, so a Unix environment doesn't feel out of place to me, but configuring and administering Solaris is all new to me (this is my first time touching one).

Can someone help me with some concrete step-by-step instructions on how to:

  • Choose which slots I should populate with the new HDDs?
  • Configure, format, and mount them in a RAID 0 configuration?

I'm sure that in proficient hands this would be a walk in the park, but unfortunately I'm not even sure where to start. If this were a clean system, I could happily set it up by trial and error, but I don't want to mess up the customer's server, as it's now working as hoped.
Thank you for any help!

Current configuration:

$ prtconf -l |more
...
ORCL,SPARC-T4-1 location: /dev/chassis//SYS/HDD0/disk
scsi_vhci, instance #0 location: /dev/chassis//SYS/HDD0/disk
disk, instance #4 location: /dev/chassis//SYS/HDD0/disk
disk, instance #6 location: /dev/chassis//SYS/HDD4/disk

$ diskinfo
D:devchassis-path            c:occupant-compdev
/dev/chassis//SYS/HDD0/disk  c0t5000CCA0259D2B78d0
/dev/chassis//SYS/HDD4/disk  c0t5000CCA0259C678Cd0

$ cfgadm -al
Ap_Id                    Type       Receptacle  Occupant      Condition
c2                       scsi-sas   connected   configured    unknown
c2::dsk/c2t6d0           CD-ROM     connected   configured    unknown
c4                       scsi-sas   connected   configured    unknown
c4::w5000cca0259d2b79,0  disk-path  connected   configured    unknown
c5                       scsi-sas   connected   unconfigured  unknown
c6                       scsi-sas   connected   configured    unknown
c6::w5000cca0259c678d,0  disk-path  connected   configured    unknown
c7                       scsi-sas   connected   unconfigured  unknown
-- USB stuff from here... --

You obviously know Linux and RAID so I won't go into huge detail. Just post back any questions.

  1. With Solaris running, enter "format" and write down which volumes you can see (e.g. c0t0d0, etc.). These are the disks already configured. Shut down the system.
  2. I don't think it matters which slots you put the new drives into.
  3. Configure a new 2-disk RAID 0 volume, following the Oracle documentation (rough FCode sketch after this list):

Configuring Hardware RAID - SPARC T4-1 Server HTML Document Collection

  4. Boot up Solaris again. Run the format command and you will see a new disk volume (compared to step 1 above). This is the one to format, newfs, and mount. Don't touch the originals or you will wreck your existing O/S.
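For step 3, the FCode session from the linked document looks roughly like this at the OBP ok prompt. The controller path and the two target numbers below are purely illustrative; use the path from probe-scsi-all / your own configuration, and pass create-raid0-volume the targets that show-children reports for the NEW disks only (volume creation wipes its member disks):

ok select /pci@400/pci@2/pci@0/pci@4/scsi@0
ok show-children
ok 2 3 create-raid0-volume
ok show-volumes
ok unselect-dev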

Hope that helps.

Hi hicksd8,

Thanks for your reply!

So, I have been trying to gather some information; please correct me if I'm wrong about the following:

  1. I need to log back in to ILOM somehow. How can I prevent Solaris from booting? I mean, if I shut down Solaris with the shutdown command, I can't log in to the system, right? Or will ILOM run if I don't press the power button to turn the system on? (I haven't tried that.) See the ILOM sketch after this list.

  2. From ILOM, I need to get to the FCode utility to select the proper SCSI controller, the one to which my new HDDs are connected.

  3. With show-children, I should now see the new HDDs, and then I need to select them (say I have a (old), b (old), and new c & d): c d create-raid0-volume.
    Then show-volumes for verification, and unselect-dev to release the controller.

  4. Now I can boot Solaris again.

  5. After bootup, the format utility should show one more disk (I had 2 disks before)... or how should format see the drive?
    When should I run cfgadm -c to configure my new disks?

  6. After this, just do as you suggest: format, make a filesystem, and mount.
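For step 1, my plan would be something like the following from the ILOM CLI (untested on my part; the bootmode property is what the docs seem to suggest for stopping at the ok prompt):

-> stop /SYS
-> set /HOST/bootmode script="setenv auto-boot? false"
-> start /SYS
-> start /HOST/console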

I hope you can follow my thinking here and that I'm not writing too cryptically...
Thanks!

Why so hard, and not just a ZFS mirror? It's one command to create a new filesystem with the 2 disks, and there is no need for a reboot...
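Something like this (pool name illustrative; drop the mirror keyword if you really want a plain stripe instead of redundancy):

zpool create data mirror <disk1> <disk2>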

Yes, that's a great idea, DukeNuke2. Forget what I said; that should be easier. Do you know whether your O/S is on UFS or ZFS? It doesn't matter if you don't mind mixing them up. It will give you a result.


Hmm, does that mean a software RAID in that case? Wouldn't that hurt performance somewhat?
I need to check whether the current disk is ZFS, or what... I'm not sure. Why is that important in this case? Sorry for such elementary questions.

Yes, I suppose you could call that a software RAID; however, being on modern hardware, it's still pretty fast. Okay, I take your point that hardware RAID should perform better.

I was making the point that your O/S volumes might be UFS, so creating a ZFS pool on these new drives would be "mixing it". Some purists just like a system to be all one or the other, but if it gives you a quick result, so what.

If, however, you would like me to continue by answering your post #3, I can do that.

They actually can't. Solaris 11 system disks are always on ZFS by design.

Yes, it seems that both my existing volumes are ZFS:

$ df -n
/                   : zfs
/devices            : devfs
/dev                : dev
/system/contract    : ctfs
/proc               : proc
/etc/mnttab         : mntfs
/system/volatile    : tmpfs
/system/object      : objfs
/etc/dfs/sharetab   : sharefs
/dev/fd             : fd
/var                : zfs
/tmp                : tmpfs
/rpool              : zfs
/rpool/export       : zfs
/rpool/export/home  : zfs
/rpool/export/home/username : zfs
/home/username        : lofs

hicksd8, I also think SW RAID would not be that much slower... Is it a much simpler process in that case?
In the meantime, I'll check whether I'm able to log in to ILOM...

They are undoubtedly ZFS, as Solaris 11 doesn't support anything else for its system disks anyway.

Not sure about the T4-1 RAID controller performance, but there is a common misconception that H/W RAID must be faster than S/W RAID. Real-life tests seem to routinely demonstrate the opposite, although with your planned RAID 0, both should be quite fast.
In any case, I would strongly recommend using ZFS for your disks; that would be at least an order of magnitude simpler to set up and maintain than hardware RAID. As DukeNuke2 already stated, a single command line can be enough:

zpool create data <disk1> <disk2>

Note that you won't have redundancy, so no data self-healing is possible in such a configuration.
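Afterwards you can check the pool and its filesystem with (pool name as in the example above):

zpool status data
zfs list data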

Hi Jillarge,

And thanks for your quick reply.
The plan indeed is to have backups and data redundancy on another machine.
This is supposed to be just a scratch disk (the client's dataset is bigger than 100GB) that will be overwritten daily anyway.
I suppose that if there is no great difference between HW and SW RAID, I can choose whichever is simpler for me to configure.

Regarding the disk configuration: there seem to be many instructions around the net saying that I need to use cfgadm -c to do some initial configuration of a new disk... is this still needed?

And about the zpool: so "zpool create name disk1 disk2" would create a pool "name" that I could afterwards format and create a filesystem on?
Sorry for possibly repeating stuff; I just want to be sure of the order of operations I need to perform.

zpool create name disk1 disk2

will create a ZFS pool and a ZFS filesystem and mount it under /name. You will be able to create additional ZFS filesystems using

zfs create name/new_fs_name

which will also be automatically mounted under /name/new_fs_name. If you want to change the mountpoint of any of those filesystems, simply use:

zfs set mountpoint=/other/mountpoint name
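You can verify the resulting layout at any point with:

zfs list -r name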

cfgadm is for hot-plugging devices. Assuming you install them while the server is powered off, if the new disks do not show up at reboot with iostat -En, just run devfsadm -v and they should.
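That is, after the reboot, something like:

iostat -En     (do the new disks show up?)
devfsadm -v    (if not, rebuild the /dev and /devices entries)
format         (the new disks should now be listed)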

You already did that with zpool create ... Should you need more than one file system later, it's better to create a specific one for your scratchpad, as bartus11 already explained.
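For instance (pool and dataset names illustrative):

zfs create data/scratch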


Hi again,

So I inserted the disks into the machine, into the next available slot on each controller (HDD1 and HDD5).

Now, when booting, there is a complaint about "Corrupt label/wrong magic number" for both new disks, but apparently that's normal when disks have no labels.
So, moving forward, I check what format shows:

$ sudo format
Password:
Searching for disks...done

c0t5000CCA025A297C4d0: configured with capacity of 279.38GB
c0t5000CCA025A329E0d0: configured with capacity of 279.38GB


AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA0259D2B78d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>  solaris
          /scsi_vhci/disk@g5000cca0259d2b78
          /dev/chassis//SYS/HDD0/disk
       1. c0t5000CCA025A297C4d0 <HITACHI-H106030SDSUN300G-A2B0 cyl 46873 alt 2 hd 20 sec 625>
          /scsi_vhci/disk@g5000cca025a297c4
          /dev/chassis//SYS/HDD1/disk
       2. c0t5000CCA0259C678Cd0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>  solaris
          /scsi_vhci/disk@g5000cca0259c678c
          /dev/chassis//SYS/HDD4/disk
       3. c0t5000CCA025A329E0d0 <HITACHI-H106030SDSUN300G-A2B0 cyl 46873 alt 2 hd 20 sec 625>
          /scsi_vhci/disk@g5000cca025a329e0
          /dev/chassis//SYS/HDD5/disk

All seems good here, so as per the instructions (I also found this: blog post), I try to create a zpool with the new drives (c0t5000CCA025A297C4d0 and c0t5000CCA025A329E0d0):

$ zpool create workarea c0t5000CCA025A297C4d0 c0t5000CCA025A329E0d0
cannot open 'c0t5000CCA025A297C4d0': no such device in /dev/dsk
must be a full path or shorthand device name

So I go to /dev/dsk to see what's going on, and check what the boot disk (c0t5000CCA0259D2B78d0) looks like:

/dev/dsk$ ls -la | grep c0t5000CCA0259D2B78d0
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s0 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:a
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s1 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:b
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s2 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:c
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s3 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:d
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s4 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:e
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s5 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:f
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s6 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:g
lrwxrwxrwx   1 root     root          48 Aug  4 09:40 c0t5000CCA0259D2B78d0s7 -> ../../devices/scsi_vhci/disk@g5000cca0259d2b78:h

What do those letters a-h mean in the scsi_vhci part? They seem to correspond to the trailing sX suffixes (s0-s7).
So, when creating the zpool, it looks like I need to specify the drive with one of those sX numbers as well... can I just start from 0?
What are they for?

Thanks again for your help!

D'oh, of course you have to be root to do that, so just put "sudo" in front of the zpool command, and everything goes as in the movies :rolleyes:
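In other words, the same command as above with sudo, plus a status check to verify:

$ sudo zpool create workarea c0t5000CCA025A297C4d0 c0t5000CCA025A329E0d0
$ zpool status workarea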