Hardware RAID using three disks

Dear All,

Please find the output of the below commands:

# raidctl -l
Controller: 1
        Volume:c1t0d0
        Disk: 0.0.0
        Disk: 0.1.0
        Disk: 0.3.0
# 
# raidctl -l c1t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c1t0d0                  136.6G  N/A     OPTIMAL  OFF    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD
#

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 273>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t3d0 <SUN600G cyl 64986 alt 2 hd 27 sec 668>
          /pci@0/pci@0/pci@2/scsi@0/sd@3,0
Specify disk (enter its number): ^D

From the above output, I want to know how this RAID 1 volume was created: is it made up of two disks or three? I ask because the "raidctl -l" output shows three disks.

I also want to do patching on this server. Can anyone tell me how to split this RAID 1 volume and carry out the activity?

Rgds
Mj

The RAID1 mirror has 2 disks, c1t0d0 and c1t1d0 (the targets shown as 0.0.0 and 0.1.0 in your raidctl output).

Let me state a few facts just to ensure that you've got the principles here. Post back any questions on these points. It's important that you understand this.

  1. Your box has an integrated RAID controller (but regard it as a separate card).

  2. You interrogate/configure this controller using the raidctl command, which can see all disks, mirrored or not.

  3. If you create a mirror (using raidctl), one of the disks disappears as far as the O/S is concerned. The RAID controller simply presents one disk to the O/S and takes care of the mirror copy to the other disk. Therefore, the other (hidden) disk is NOT visible to the format command (unless you unmirror it).
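
For reference, a two-disk mirror like yours is typically created with a raidctl command along these lines. This is only a sketch with your disk names substituted in; do NOT run it on a live system, since creating a mirror destroys the data on the secondary disk:

# raidctl -c c1t0d0 c1t1d0        (with two disks this defaults to RAID 1)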

Therefore, what you posted tells me that there are 2 disks in the mirror. The OPTIMAL status means that it is healthy.

There is a third disk in the system (c1t3d0) which is not mirrored and is simply passed through to the O/S.

Therefore the format command sees c1t0d0 (the RAID1 mirror; really 2 disks) and the third disk (c1t3d0).

Sorry if you already knew that but just to be sure that you know what you're looking at.

Run

# mount

to see if any slice of c1t3d0 is in use.

You can also select this disk in the format command and print its VTOC to see if it's even sliced (partitioned). Maybe there's nothing on this disk.
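
If you prefer not to step through format interactively, prtvtoc prints the same label information directly (slice 2 conventionally covers the whole disk):

# prtvtoc /dev/rdsk/c1t3d0s2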

Why do you think that you should contemplate breaking the mirror in order to patch? Just take a backup (perhaps to c1t3d0 if it's not in use).

There's plenty of knowledge and help available on this forum. Just post your questions.

Hope that helps.

Sounds to me like anything on c1t3d0 is unprotected from hardware failure though. I'd worry about that more than patching initially.

mount

is not enough.
Also check whether the disk is in use by:

swap -l
zpool status
metastat
metadb

@MadeInGermany... good point. Thanks. From a previous, but related, thread we know the system is ufs not zfs.


Thanks all for your valuable inputs; I shall come back again in case of any doubts.

---------- Post updated 01-17-15 at 12:36 AM ---------- Previous update was 01-16-15 at 05:34 AM ----------

Dear Hicks/All,

  1. When I executed the below command:
# mount

The disk c1t3d0 is not in use. Please find the outputs below:

 # cat /etc/mnttab
/dev/dsk/c1t0d0s0       /       ufs     rw,intr,largefiles,logging,xattr,onerror=panic,dev=800000       1395166360
/devices        /devices        devfs   dev=5680000     1395166347
ctfs    /system/contract        ctfs    dev=56c0001     1395166347
proc    /proc   proc    dev=5700000     1395166347
mnttab  /etc/mnttab     mntfs   dev=5740001     1395166347
swap    /etc/svc/volatile       tmpfs   xattr,dev=5780001       1395166347
objfs   /system/object  objfs   dev=57c0001     1395166347
sharefs /etc/dfs/sharetab       sharefs dev=5800001     1395166347
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1 /platform/sun4v/lib/libc_psr.so.1       lofs    dev=800000      1395166354
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1 /platform/sun4v/lib/sparcv9/libc_psr.so.1       lofs    dev=800000      1395166354
fd      /dev/fd fd      rw,dev=5980001  1395166360
swap    /tmp    tmpfs   xattr,dev=5780002       1395166361
swap    /var/run        tmpfs   xattr,dev=5780003       1395166361
/dev/dsk/c1t0d0s3       /opt    ufs     rw,intr,largefiles,logging,xattr,onerror=panic,dev=800003       1395166366
-hosts  /net    autofs  nosuid,indirect,ignore,nobrowse,dev=5a40001     1395166380
auto_home       /home   autofs  indirect,ignore,nobrowse,dev=5a40002    1395166380

So kindly let me know whether this particular disk c1t3d0 is in use or not.

Because if it is in use, we can take a backup of it and then do the patching.

  2. Also, on another server, I see that SVM is configured:

/dev/md/dsk/d10 /infovista ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=154000a

So in this case, what should we do after splitting the hardware mirror?

Thanks and Regards,
Rj

Please post the outputs of all the commands listed by MadeInGermany in post #4. The disk might be in use as swap space.

Also, if your filesystems are ufs, please see this thread

Backup root disks

and in particular my "EXTRA NOTE" on my post. It's important that you dump the snapshotted device.
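
As a rough sketch of what that looks like for a live root filesystem (the paths below are examples only; note that the fssnap backing store must live on a different filesystem from the one being snapshotted):

# fssnap -F ufs -o bs=/export/snap_bs /
/dev/fssnap/0
# ufsdump 0uf /export/backup/root.dump /dev/rfssnap/0
# fssnap -d /

fssnap prints the snapshot device it created (here /dev/fssnap/0); you then dump the corresponding raw device (/dev/rfssnap/0) rather than the live slice, and delete the snapshot when the dump completes.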

---------- Post updated at 12:30 PM ---------- Previous update was at 12:20 PM ----------

Why do you still contemplate breaking any mirror in order to do patching? You may be thinking that you can keep an original unpatched copy in case something goes wrong but, generally, it is always more difficult to boot the second copy than you think in these circumstances.

Patching doesn't usually go wrong but, if it does, reinitialising a filesystem and restoring from backup (ufsdump in this case) is usually the easiest and quickest route in my experience.
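
For completeness, worst-case recovery from such a ufsdump looks roughly like this, booted single-user from alternate media (cdrom or network); the devices and dump location here are illustrative:

# newfs /dev/rdsk/c1t0d0s0
# mount /dev/dsk/c1t0d0s0 /a
# cd /a
# ufsrestore rf /path/to/root.dump
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0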

Dear Hicks/Germany,

Please see the below output:

# metadb
        flags           first blk       block count
     a m  pc luo        16              8192            /dev/dsk/c1t3d0s7
     a    pc luo        8208            8192            /dev/dsk/c1t3d0s7
root@  #
root@ # metastat
d10: Concat/Stripe
    Size: 1170211752 blocks (558 GB)
    Stripe 0:
        Device     Start Block  Dbase   Reloc
        c1t3d0s0          0     No      Yes

Device Relocation Information:
Device   Reloc  Device ID
c1t3d0   Yes    id1,sd@n5000cca025002fa8
root@ #
root@  # swap -l
swapfile             dev  swaplo blocks   free
/dev/dsk/c1t0d0s1   32,1      16 32768720 32768720
root@  #
root@ # zpool status
no pools available
root@  #

From the above output, the third disk c1t3d0 is used for SVM, and the swap is on c1t0d0.

So kindly describe how to do the patching when we have hardware RAID and also SVM on the same server, or suggest any other way to handle this. As there is no backup available for the above servers, I am going to split the hardware RAID and do the patching. So how do I take care of this third disk under SVM?

Rgds,
Rj

Yes, c1t3d0 slice 0 is used by SVM for d10, but d10 is not used anywhere, so you don't need to do any extra steps for SVM.
I have no experience with this HW RAID; I would do

raidctl -l c1t3d0

to see if it maps to Disk: 0.3.0.
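
You can also double-check that d10 is genuinely unused by searching the mount tables and printing its configuration (a quick illustrative check, not an exhaustive one):

# grep d10 /etc/vfstab /etc/mnttab
# metastat -p d10

metastat -p prints the metadevice configuration in compact md.tab format.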

@jegaraman

So you're playing Russian roulette, eh?

I really hope that it's not a production server!!!!

Dear Hicks/Germany ,

There is one mount point for d10 in SVM; it is as below.

/dev/md/dsk/d10 /dev/md/rdsk/d10 /infovista_coll

So do we have to take a backup of this /infovista_coll? Also, do we have to take any precautions with d10, or do we have to unmount it for patching, or simply do the patching?

Please let me know.