How to destroy hardware RAID on a T5120

Hi,

I have a problem creating hardware RAID on a T5120 with 4 disks. After creating the hardware RAID 1 volumes, I ran raidctl -l c1t0d0 and raidctl -l c1t2d0. The output for volume c1t0d0 lists disks 0.0.0 and 0.1.0, but volume c1t2d0 also lists disks 0.0.0 and 0.1.0 when it should list 0.2.0 and 0.3.0.

So I destroyed both volumes and then issued raidctl; the output shows controller 1 with disks 0.0.0, 0.1.0, 0.2.0, and 0.3.0. I thought the output should be "No RAID volumes found".
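
For reference, the volumes were created and destroyed with commands along these lines (the disk pairings shown are the ones I intended):

# raidctl -c c1t0d0 c1t1d0        ( intended mirror of 0.0.0 and 0.1.0 )
# raidctl -c c1t2d0 c1t3d0        ( intended mirror of 0.2.0 and 0.3.0 )

and to destroy them:

# raidctl -d c1t0d0
# raidctl -d c1t2d0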

Does anyone know how to completely destroy the hardware RAID?

Thanks for your help!

Have you booted from CD/DVD or from an internal disk?

Yes. I booted to single-user mode from DVD, then created the RAID 1 volumes from the four 146GB disks. Here is the output:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@2,0
       3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@3,0

# raidctl -l c1t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c1t0d0                  136.6G  N/A     SYNC     N/A    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD
# raidctl -l c1t2d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c1t2d0                  136.6G  N/A     DEGRADED N/A    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD

So my question is: why is the disk list for volume c1t2d0 the same as for volume c1t0d0? I thought the disk list for c1t2d0 should be 0.2.0 and 0.3.0.

Thanks!

What is the output of "raidctl -l" without any further options?

Here is the output of raidctl -l

# raidctl -l
Controller: 1
Volume:c1t0d0
Volume:c1t2d0
Disk: 0.0.0
Disk: 0.0.0
Disk: 0.0.0
Disk: 0.0.0
......
and this "Disk: 0.0.0" line keeps printing to the screen.

Thanks!

Looks like there is something slightly wrong :D. Try to give the disks a new label with "format -e". Also check the firmware for the machine and the RAID controller; if you are downrev, please update.
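
A rough sketch of the relabel (using c1t0d0 as the example disk) and of the firmware checks:

# format -e c1t0d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y
format> quit

# prtconf -V          ( system firmware / OBP version )
# raidctl -l 1        ( RAID controller type and firmware revision )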

To destroy the volumes, use:
raidctl -d c1t0d0
raidctl -d c1t2d0

Example:
# raidctl -d c0t0d0
RAID Volume 'c0t0d0' deleted
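
If a volume still contains data, raidctl asks for confirmation before deleting; the -f flag forces the delete without the prompt. A sketch, using the volume names from above:

# raidctl -f -d c1t0d0
# raidctl -f -d c1t2d0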

Yes, I have already used the -d option to destroy both RAID volumes, and then tried to re-create them without any luck. I thought that after deleting both volumes the output of raidctl would say "No RAID volumes found", but instead the output looks like:

Controller: 1
(followed by the list of all four disks)

thanks!

Did you ever do a reboot after the delete?
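
A quick sketch of what I mean: a reconfiguration reboot, optionally with a device-link cleanup first (devfsadm -Cv just prunes stale /dev entries):

# devfsadm -Cv
# reboot -- -r

If that does not clear it up, you can also look at the volumes from the ok prompt: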

{0} ok setenv auto-boot? false
auto-boot? =            false
{0} ok setenv fcode-debug? true
fcode-debug? =          true
{0} ok reset-all

T5140, No Keyboard
Copyright 2008 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.28.8, 32544 MB memory available, Serial #71711454.
Ethernet address 0:14:4f:46:3a:de, Host ID: 84463ade.

{0} ok
{0} ok show-disks
a) /pci@400/pci@0/pci@8/scsi@0/disk
b) /pci@400/pci@0/pci@1/pci@0/usb@0,2/storage@2/disk
q) NO SELECTION
Enter Selection, q to quit: q

{0} ok select /pci@400/pci@0/pci@8/scsi@0

{0} ok show-volumes              ( look for the inactive volume )
{0} ok X activate-volume         ( X is the volume number )
{0} ok unselect-dev

When you have completed managing the RAID volumes, you have to set the auto-boot? and fcode-debug? variables back and reset the system:

{1} ok setenv auto-boot? true
auto-boot? =            true
{1} ok setenv fcode-debug? false
fcode-debug? =          false
{1} ok reset-all

I did follow your instructions. When I ran show-volumes, the output said "no volumes to show". The format output still shows the four disks.

The raidctl output is as follows:
Controller: 1
Disk: 0.0.0
Disk: 0.1.0
Disk: 0.2.0
Disk: 0.3.0
Then when I try to create the RAID again, somehow:
RAID volume c1t0d0 contains disks 0.0.0 and 0.1.0
RAID volume c1t2d0 also contains disks 0.0.0 and 0.1.0, when it should contain 0.2.0 and 0.3.0

Anything else I can try?

Thanks for your help!

Did you try a reboot after all?

Can you show the cfgadm -al output?

Yes, the cfgadm -al output looks fine compared to another server.
I just talked to Sun support and they said there is a bug that is fixed in a recent kernel patch.
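
For reference, "looks fine" means the controller and each disk show up as connected and configured, roughly like this (the Ap_Ids are just what I would expect on this box):

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t1d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown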

Thanks very much for your help in this case, Incredible!

What kernel patch did you apply?

-loudawg

Another method from the ok prompt:
Fearthepenguin.net Unconfigure T5240 HW Raid controller from ok prompt
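
Roughly, that method comes down to deleting the volume with the controller's own FCode utility. A sketch, assuming volume number 0 and the T5120 controller path from the format output earlier in this thread:

{0} ok setenv auto-boot? false
{0} ok setenv fcode-debug? true
{0} ok reset-all
{0} ok select /pci@0/pci@0/pci@2/scsi@0
{0} ok show-volumes
{0} ok 0 delete-volume           ( answer yes when asked to confirm )
{0} ok unselect-dev
{0} ok setenv auto-boot? true
{0} ok setenv fcode-debug? false
{0} ok reset-all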