Please help... my HBAs won't configure

Hi,

I have been given the task of configuring new SAN disks on two Solaris 10 boxes. The boxes in question have two dual-channel QLogic FC cards; one port on each goes to an existing HDS SAN and is managed via HDLM. The intention is to connect the two free ports to a NetApp SAN and then migrate the data.
The main problem is that I can't configure the devices with "cfgadm -c configure c4". When I run the command it doesn't error, and there's nothing in dmesg; it just doesn't do anything!? I have tried reconfiguration reboots, reloading the qlc driver, etc.
I really am stumped!
One thing that did cross my mind is that HDLM might somehow be getting in the way?? However, there are already LUNs mounted which aren't under HDLM control.

Thanks in advance

Rich

Are you using the Sun-supplied qlc driver, or the QLogic-supplied qla driver?

What does fcinfo hba-port or luxadm -e port show?

The driver is the Sun qlc driver, which I assume is OK as there is one port on each card already in use. Unfortunately fcinfo isn't installed, but the results from "luxadm -e port" show the following:

/devices/pci@8,600000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/pci@9,600000/SUNW,qlc@1/fp@0,0:devctl CONNECTED
/devices/pci@9,600000/SUNW,qlc@1,1/fp@0,0:devctl CONNECTED
/devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/pci@9,600000/SUNW,qlc@2,1/fp@0,0:devctl CONNECTED

Thanks for your responses so far.

Please provide the output of showrev, cfgadm -al, and cat /etc/release.

showrev:

Hostname: ipcc-ndbs-001
Hostid: 841e75f7
Release: 5.10
Kernel architecture: sun4u
Application architecture: sparc
Hardware provider: Sun_Microsystems
Domain:
Kernel version: SunOS 5.10 Generic_118822-10

cfgadm -al:

Ap_Id Type Receptacle Occupant Condition
PCI0 unknown empty unconfigured unknown
PCI1 unknown empty unconfigured unknown
PCI2 unknown empty unconfigured unknown
PCI3 unknown empty unconfigured unknown
PCI4 pci-pci/hp connected configured ok
PCI5 pci-pci/hp connected configured ok
PCI6 pci-pci/hp connected configured ok
PCI7 mult/hp connected configured ok
PCI8 mult/hp connected configured ok
c0 scsi-bus connected configured ok
c0::dsk/c0t0d0 CD-ROM connected configured ok
c1 fc-private connected configured unknown
c1::21000014c3cf742f disk connected configured unknown
c1::21000014c3cf7493 disk connected configured unknown
c1::21000014c3cf76fb disk connected configured unknown
c1::21000014c3cf7af8 disk connected configured unknown
c1::50800200002273c1 disk connected configured unknown
c2 fc-fabric connected unconfigured unknown
c2::210000e08b89a094 unknown connected unconfigured unknown
c2::500a098887099920 unknown connected unconfigured unknown
c2::500a098897099920 unknown connected unconfigured unknown
c3 fc-fabric connected configured unknown
c3::50060e8004f28a2a disk connected configured unknown
c4 fc-fabric connected unconfigured unknown
c4::500a098587099920 unknown connected unconfigured unknown
c4::500a098597099920 unknown connected unconfigured unknown
c5 fc-fabric connected configured unknown
c5::50060e8004f27c3a disk connected configured unknown
c5::50060e8004528a4a disk connected configured unknown
usb0/1 unknown empty unconfigured ok
usb0/2 unknown empty unconfigured ok
usb0/3 unknown empty unconfigured ok
usb0/4 unknown empty unconfigured ok

cat /etc/release:

    Solaris 10 3/05 HW1 s10s_wos_74L2a SPARC
Copyright 2005 Sun Microsystems, Inc. All Rights Reserved
        Use is subject to license terms.
            Assembled 14 July 2005

Thanks
Rich

Can you please update your kernel patch cluster to the latest first, and report back after that? In the meantime, I want to know what hardware you're using... your Solaris 10 release is kind of weird..

Hi,
I believe these boxes to be V880s, but they're in a data centre 200 miles away! They're also live production boxes running a rather important Oracle 10g cluster, so as you can probably understand, I'm a little hesitant to do anything too drastic which may break what's currently working.
Primarily I'm a Linux admin who's been drafted in to help out with this migration, as seemingly there's no one else in the company with any kind of Unix skills!
I'll endeavour to get the kernel patch cluster up to the latest... please can you give me some confidence that doing so won't break what's already working!?

Many thanks for your time it is very much appreciated!

Rich

a serious opinion:
please go and get professional help! This is not something that should be done remotely via guesswork... your installation is really old (the first Solaris 10 release!) and it looks like there hasn't been much update/patch work.

good luck!
DN2

Is mpathadm installed on the server? If so, start using that to see what your system sees on the storage.

I suspect you're using Solaris native multipathing, and there's a chance your version doesn't recognize your new storage.
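Where mpathadm is present, a quick look at which LUNs the multipath layer has enumerated might go something like this (these are standard mpathadm subcommands; the device path in the last command is purely illustrative, so substitute one from your own list lu output):

```shell
# List the multipathed logical units and their operational path counts
mpathadm list lu

# Show detail (paths, target ports, load-balance policy) for one LUN;
# the device name below is an example only
mpathadm show lu /dev/rdsk/c6t500A098887099920d0s2
```

If the new array's LUNs never appear under mpathadm at all, that points at mpxio not recognizing the array rather than at a fabric or zoning problem.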

DN2,

I think you're right! In theory, all I should have needed to do was:
(I'd be grateful if you could sanity check my logic)

  1. run "cfgadm -c configure c2" and "cfgadm -c configure c4"
  2. add - name="fp" parent="/pci@9,600000/SUNW,qlc@1" port=0 mpxio-disable="no";
    and name="fp" parent="/pci@9,600000/SUNW,qlc@2" port=0 mpxio-disable="no"; to /kernel/drv/fp.conf
  3. reboot -- -r
  4. use format to partition and freehog the disks
  5. Create the character devices for Oracle.
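Steps 1-3 as commands, for sanity checking (the controller names c2/c4 and the qlc device paths are taken from the cfgadm -al and luxadm -e port output earlier in the thread; verify them against your own boxes before touching fp.conf, since per-port entries on the wrong path could enable mpxio on the HDLM-managed HDS ports):

```shell
# 1. Configure the fabric-attached controllers (cfgadm is silent on success)
cfgadm -c configure c2
cfgadm -c configure c4

# 2. Enable MPxIO per-port in fp.conf rather than globally, so the
#    HDS ports already under HDLM control are left alone
cat >> /kernel/drv/fp.conf <<'EOF'
name="fp" parent="/pci@9,600000/SUNW,qlc@1" port=0 mpxio-disable="no";
name="fp" parent="/pci@9,600000/SUNW,qlc@2" port=0 mpxio-disable="no";
EOF

# 3. Reconfiguration reboot so fp re-reads its configuration
reboot -- -r
```

Note that the per-port entries only matter if mpxio is disabled globally (mpxio-disable="yes" at the top of fp.conf); if it is already enabled globally, step 2 is a no-op.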

I'm feeling way out of my comfort zone at the moment and am thinking I should tell management to bring in a Solaris consultant.
If only it was Red Hat... I wouldn't be here :wink:

Cheers
Rich

---------- Post updated at 02:56 PM ---------- Previous update was at 02:54 PM ----------

achenle,

mpathadm isn't installed, I was intending to use native mpxio

Thanks

Rich

mpathadm is in a later Solaris release...

I mentioned mpathadm because I suspect native mpxio isn't recognizing the NetApp array as multipath-capable:

# mpathadm show mpath-support libmpscsi_vhci.so

The mpathadm utility is used to view and administer mpxio. The man page for the latest version is here:

Administering Multipathing Devices Through mpathadm Commands (Solaris Express SAN Configuration and Multipathing Guide) - Sun Microsystems

Solaris native mpxio relies on the vendor and product strings returned from the disk array to work, and it's possible to add entries to the scsi_vhci.conf file to handle additional array types that do not have built-in support. That capability has evolved significantly over the lifetime of mpxio, though, and I'd bet it's rudimentary at best in the initial releases of mpxio and Solaris 10, if it exists at all.
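On Solaris 10 releases of that era, adding an unrecognized array to mpxio was typically done with a device-type-scsi-options-list entry in /kernel/drv/scsi_vhci.conf. A sketch of what such an entry looks like for a symmetric (active/active) array follows; the "NETAPP  LUN" vendor/product string is an example only, and the vendor ID field must be padded with spaces to exactly 8 characters to match what the array returns in its SCSI INQUIRY data:

```
# /kernel/drv/scsi_vhci.conf -- illustrative third-party array entry.
# VID ("NETAPP  ") padded to 8 chars, followed by the PID ("LUN").
device-type-scsi-options-list =
    "NETAPP  LUN", "symmetric-option";
symmetric-option = 0x1000000;
```

A reconfiguration reboot is needed after editing the file, and as noted above, whether the early Solaris 10 scsi_vhci honours these entries at all is exactly the open question here.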

Of course, if you start making modifications to scsi_vhci.conf and adding new arrays for mpxio to use, you're probably well outside the bounds of supported activity.

You should check if later versions of Solaris 10 support the array in question, though. Then you're just an upgrade away from support.