cfgadm - is this an error?

# cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0                 disk         connected    configured   unknown
c1::dsk/c1t2d0                 disk         connected    configured   unknown
c1::dsk/c1t3d0                 disk         connected    configured   unknown
c2                             scsi-bus     connected    unconfigured unknown
c4                             fc-private   connected    configured   unknown
c4::216000c0ff80540a           disk         connected    configured   unknown
c4::216000c0ff90540a           disk         connected    configured   unknown
c5                             fc-private   connected    configured   unknown
c5::266000c0ffe0540a           disk         connected    configured   unusable
c5::266000c0fff0540a           disk         connected    configured   unknown

Note the one listed as unusable.
Why is this?
Do we have a faulty fibre card?

It looks like a faulty port on a dual-port card, or, depending on the card type, a missing or unseated module in that port. The former is more likely.
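
Before swapping hardware it may be worth checking whether the suspect port can still see the loop at all. A rough sketch, using the c5 attachment point from your cfgadm output (fcinfo is only present on more recent Solaris 10 releases):

# luxadm -e dump_map /dev/cfg/c5
# fcinfo hba-port -l

dump_map should list every device visible through that port, and the fcinfo link statistics include error counters that tend to climb when a port, module or cable is on its way out.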

Just confirm whether the device is in use or not. If it has been removed, then unconfigure the LUNs that were removed from the SAN.
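
A minimal sketch of that cleanup, using the Ap_Id from your listing, and only if you have confirmed the LUN really has been removed from the array:

# cfgadm -c unconfigure c5::266000c0ffe0540a
# devfsadm -Cv

(On Solaris 10 the fp plugin should also take -o unusable_FCP_dev together with -c unconfigure to clean out device nodes for LUNs the array no longer presents; devfsadm -Cv just removes any stale /dev links afterwards.)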

We have this set up as an MPxIO system, with about 10 LUNs.

We had syslog messages along the lines of:

"REPORT LUN to D_ID=0x9d lun=0x0 failed: State:Timeout, Reason:Hardware Error. Giving up"

and

"mpxio: [ID 669396 kern.info] /scsi_vhci/ssd@g600c0ff00000000000540a5d5eb56309 (ssd26) multipath status: degraded, path /pci@1d,700000/SUNW,qlc@1/fp@0,0 (fp1) to target address: 266000c0ffe0540a,7 is offline. Load balancing: round-robin"

What happens when you try
luxadm -e forcelip /dev/cfg/c5

No response on the command line, but we do get new entries in /var/adm/messages:

Jun 26 19:14:13 svr_name qlc: [ID 630585 kern.info] NOTICE: Qlogic qlc(1): Loop OFFLINE
Jun 26 19:14:13 svr_name qlc: [ID 630585 kern.info] NOTICE: Qlogic qlc(1): Loop ONLINE

cfgadm status now? Is it still showing as unusable?

What output do you get from:
cfgadm -alo show_SCSI_LUN

cfgadm -alo show_FCP_dev
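
It might also be worth a look from the MPxIO side. A rough sketch, assuming mpathadm is present on this release (it ships with newer Solaris 10 updates):

# mpathadm list lu
# mpathadm show lu <logical-unit from the list above>

list lu shows an operational path count per LUN, and show lu breaks out the state of each individual path, which should match what cfgadm and the mpxio messages are saying.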

Update:

Tested a few other commands...

# luxadm -e port

Found path to 2 HBA ports

/devices/pci@1c,600000/SUNW,qlc@1/fp@0,0:devctl                    CONNECTED
/devices/pci@1d,700000/SUNW,qlc@1/fp@0,0:devctl                    CONNECTED
# 
# luxadm display /dev/rdsk/c3t600C0FF00000000000540A5D5EB56306d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c3t600C0FF00000000000540A5D5EB56306d0s2
  Vendor:               SUN     
  Product ID:           StorEdge 3510   
  Revision:             415G
  Serial Num:           00540A5D5EB5
  Unformatted capacity: 10240.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0xffff
  Device Type:          Disk device
  Path(s):

  /dev/rdsk/c3t600C0FF00000000000540A5D5EB56306d0s2
  /devices/scsi_vhci/ssd@g600c0ff00000000000540a5d5eb56306:c,raw
   Controller           /devices/pci@1c,600000/SUNW,qlc@1/fp@0,0
    Device Address              216000c0ff90540a,6
    Host controller port WWN    210000e08b10c772
    Class                       primary
    State                       ONLINE
   Controller           /devices/pci@1d,700000/SUNW,qlc@1/fp@0,0
    Device Address              266000c0ffe0540a,6
    Host controller port WWN    210000e08b0f24b6
    Class                       primary
    State                       OFFLINE

# 

Checked the fibre cables; all OK.

Getting the HBA card replaced in a few hours.

Update at 10:57 PM:

And now we can only see one card connected. :(

My thought is that we will have to put the new card's WWNs into the disk array's host mappings.
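
For the record, this is roughly how I plan to pull the port WWNs off the replacement card once it is in (a sketch; fcinfo only exists on newer Solaris 10 releases, prtconf works anywhere):

# fcinfo hba-port | grep 'HBA Port WWN'
# prtconf -vp | grep -i port-wwn

Either of those should give the new port WWNs to map on the 3510 in place of the old card's.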

I will look into this after some sleep.