Empty ZFS SAN file system with high read I/O using MPXIO

I am researching the cause of an issue. The SAN file system /export/pools/zd-xxxxxxxxxxx is seeing a high amount of read traffic even though it is empty. It is ZFS with MPxIO. Any ideas? It's really strange considering the file system is empty, and I don't see any errors.

 
     cpu
 us sy wt id
  1  2  0 97
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 lofi1
    0.8    0.0   16.7    0.0  0.0  0.0    0.0    7.2   0   1 c1t2d0
   88.9    0.0  915.0    0.0  0.0  0.4    0.0    4.0   0  15 c1t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0t0d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t204300A0B85637A0d31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t201200A0B85637A0d31
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004F24A6DD8B2d0
    0.2    6.4   12.8  127.6  0.0  1.3    0.0  198.0   0  39 c6t600A0B8000338556000004F04A6DD891d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004EE4A6DD873d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004EC4A6DD855d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004EA4A6DD7C3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E84A6DD75Fd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E64A6DD6E8d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E44A6DD65Ad0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004E24A6DD4FEd0
    0.2    0.0    0.9    0.0  0.0  0.0    0.0    0.2   0   0 c6t600A0B80003385560000047D4A66F733d0
  158.7   13.4 11836.0  527.9  0.0  6.5    0.0   38.0   0  99 c6t600A0B8000338556000004804A66F782d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000004824A66F86Bd0
    0.0    0.2    0.0    7.2  0.0  0.0    0.0   45.1   0   1 c6t600A0B80003385560000060F4C94C963d0
  135.5   15.4 11292.9  692.6  0.0  6.9    0.0   45.4   0  97 c6t600A0B80003385560000060D4C94C947d0
    0.0    0.2    0.0    7.2  0.0  0.1    0.0  306.0   0   6 c6t600A0B80003385560000060B4C94C931d0
    0.6    5.2   38.4  124.0  0.0  1.1    0.0  188.6   0  33 c6t600A0B8000338556000006094C94C91Bd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000006074C94C901d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000338556000006054C94C8E3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B80005637A00000061C4C98AF1Bd0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6t600A0B8000563766000006614C98AF62d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 xxxxx:/export/zones/nfs_servd
 
root@serverXXXX# zpool status zd-xxxxxxxxxxx
  pool: zd-xxxxxxxxxxx
 state: ONLINE
 scrub: none requested
config:
        NAME                                     STATE     READ WRITE CKSUM
        zd-xxxxxxxxxxx                          ONLINE       0     0     0
          c6t600A0B8000338556000004804A66F782d0  ONLINE       0     0     0
          c6t600A0B80003385560000060D4C94C947d0  ONLINE       0     0     0
errors: No known data errors
 
root@serverXXXX# zfs list zd-xxxxxxxxxxx
NAME              USED  AVAIL  REFER  MOUNTPOINT
zd-xxxxxxxxxxx  25.8G  72.2G    18K  /export/pools/zd-xxxxxxxxxxx
 
root@serverXXXX# df -h .
Filesystem             size   used  avail capacity  Mounted on
zd-xxxxxxxxxxx         98G    18K    72G     1%    /export/pools/zd-xxxxxxxxxxx
 
root@serverXXXX# luxadm display /dev/rdsk/c6t600A0B8000338556000004804A66F782d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t600A0B8000338556000004804A66F782d0s2
  Vendor:               SUN
  Product ID:           CSM200_R
  Revision:             0760
  Serial Num:           SG82421613
  Unformatted capacity: 51200.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x3
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c6t600A0B8000338556000004804A66F782d0s2
  /devices/scsi_vhci/ssd@g600a0b8000338556000004804a66f782:c,raw
   Controller           /devices/pci@400/pci@0/pci@d/SUNW,emlxs@0,1/fp@0,0
    Device Address              201200a0b833854a,1
    Host controller port WWN    100xxxxxxxxxxx
    Class                       secondary
    State                       STANDBY
   Controller           /devices/pci@500/pci@0/pci@c/SUNW,emlxs@0/fp@0,0
    Device Address              204300a0b833854a,1
    Host controller port WWN    100xxxxxxxxxxx
    Class                       primary
    State                       ONLINE
root@serverXXXX# luxadm display /dev/rdsk/c6t600A0B80003385560000060D4C94C947d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c6t600A0B80003385560000060D4C94C947d0s2
  Vendor:               SUN
  Product ID:           CSM200_R
  Revision:             0760
  Serial Num:           SG82421610
  Unformatted capacity: 51200.000 MBytes
  Write Cache:          Enabled
  Read Cache:           Enabled
    Minimum prefetch:   0x3
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c6t600A0B80003385560000060D4C94C947d0s2
  /devices/scsi_vhci/ssd@g600a0b80003385560000060d4c94c947:c,raw
   Controller           /devices/pci@400/pci@0/pci@d/SUNW,emlxs@0,1/fp@0,0
    Device Address              201200a0b833854a,10
    Host controller port WWN    100yyyyyyyyyyyyy
    Class                       secondary
    State                       ONLINE
   Controller           /devices/pci@500/pci@0/pci@c/SUNW,emlxs@0/fp@0,0
    Device Address              204300a0b833854a,10
    Host controller port WWN    100xxxxxxxxxxx
    Class                       primary
    State                       STANDBY
 
root@serverXXXX# luxadm -e port
/devices/pci@400/pci@0/pci@d/SUNW,emlxs@0/fp@0,0:devctl            NOT CONNECTED
/devices/pci@400/pci@0/pci@d/SUNW,emlxs@0,1/fp@0,0:devctl          CONNECTED
/devices/pci@500/pci@0/pci@c/SUNW,emlxs@0/fp@0,0:devctl            CONNECTED
/devices/pci@500/pci@0/pci@c/SUNW,emlxs@0,1/fp@0,0:devctl          NOT CONNECTED
 
root@serverXXXX# cd /export/pools/zd-xxxxxxxxxxx
root@serverXXXX# ls -la
total 6
drwxr-xr-x   2 root     root           2 Jan 18 02:26 .
drwxr-xr-x   9 root     root           9 Nov 18  2010 ..
root@serverXXXX#

Try DTrace; there is a "canned" script that will help, assuming Solaris 10:

dtrace -s /usr/demo/dtrace/iotime.d | awk '/ R / && /devname/'

Where devname is the device name you want to match.
You can see whether a process is blasting away at a file. You may want to hack a local copy of the iotime.d script so it also prints the process PID(s); a rough sketch of that follows.
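Something along these lines, as an untested sketch (iotime_pid.d is just a made-up name; the fields come straight from the standard io provider). Keep in mind that iotime.d and friends print the kernel instance name from dev_statname (e.g. ssdNN) rather than the long c6t... name, and that with ZFS the physical reads are often issued by kernel threads, so the PID can show up as 0/sched:

#!/usr/sbin/dtrace -s
/* iotime_pid.d - iotime.d hacked to also show pid/execname per I/O */
#pragma D option quiet

BEGIN
{
        printf("%-6s %-16s %-10s %2s %10s\n",
            "PID", "EXEC", "DEVICE", "RW", "BYTES");
}

io:::start
{
        /* pid/execname are whatever thread is on CPU when the I/O starts */
        printf("%-6d %-16s %-10s %2s %10d\n",
            pid, execname, args[1]->dev_statname,
            args[0]->b_flags & B_READ ? "R" : "W",
            args[0]->b_bcount);
}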

On second thought, try the canned whoio.d script instead; it prints output closer to what you need (usage sketched below).

This assumes the problem is a single process or a bunch of process LWPs.
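A minimal run, assuming the script is in the usual /usr/demo/dtrace location on Solaris 10: start it, let it run for ten seconds or so while the reads are happening, then hit Ctrl-C and it prints the bytes moved summed per device, application name, and PID:

dtrace -s /usr/demo/dtrace/whoio.d

Whatever is hammering the two busy LUNs from your iostat output (...F782d0 and ...C947d0) should stand out. If everything is attributed to sched/PID 0, the reads are probably being issued from kernel context (the ZFS I/O pipeline often does this), and you would have to trace a layer higher, e.g. at the syscall level, to catch the originating process.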