iSCSI poor performance 1.5MB/s on fresh install of AIX 7.1

Hi Everyone,

I have been struggling for a few days with iSCSI and thought I could get some help on the forum...

Fresh install of AIX 7.1 TL4 on a Power 710; the rootvg sits on 3 SAS disks in RAID 0, 32GB of memory.
The LPAR profile uses all of the managed system's resources.
I have connected only 1 physical gigabit port on the AIX box to my iSCSI target server (RHEL6 with tgt iSCSI).
The network seems OK: an FTP transfer between AIX & RHEL6 runs at 125MB/s.
However, the iSCSI transfer has really poor performance: if I do a "dd if=/dev/hdisk1 of=/dev/null" from my AIX box (on the iSCSI LUN), I only get 1.5MB/s!

(Please note that initially I wanted to set up one LPAR as iSCSI target and one LPAR as iSCSI initiator, both using virtual Ethernet adapters, and I got the same performance of 1.5MB/s between the 2 AIX LPARs... Then I decided to investigate further and set up the whole system as an iSCSI initiator against one of my RHEL6 iSCSI targets that I know works well.)

I also tried to set up the AIX as iSCSI target and the RHEL6 as initiator; the performance gets better, 8MB/s, but still not acceptable. I would expect at least 30-40MB/s.

Any idea, guys? Please solve my one-week headache.

This probably has to do with your block size. The transfer is limited by the number of (small, 512-byte) I/Os rather than the megabytes.

What happens when you do

dd if=/dev/hdisk1 of=/dev/null bs=1024k

?

---
And also, what happens when you run two such dd's simultaneously?
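For example, something like this (same device name as in your earlier post, block size just as a starting point):

dd if=/dev/hdisk1 of=/dev/null bs=1024k &
dd if=/dev/hdisk1 of=/dev/null bs=1024k &
wait

If the total throughput roughly doubles, that would point to per-request latency rather than raw bandwidth as the limit.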

Thanks for your help, Scrutinizer.

" dd if=/dev/hdisk1 of=/dev/null bs=1024k " doesn't change anything ( also tried with 256k 512k)

Running two dd commands simultaneously gives the same result:

---
one dd command:

Network    BPS  I-Pkts  O-Pkts    B-In   B-Out
Total    8.34M   5.94K   3.96K   8.05M    302K

two dd commands simultaneously:

Network    BPS  I-Pkts  O-Pkts    B-In   B-Out
Total    8.36M   5.95K   3.97K   8.06M    302K

---

For info, an FTP from the Linux iSCSI target to AIX:

226 Transfer complete.
2310809600 bytes sent in 37.3 secs (61991.35 Kbytes/sec)
ftp>

The other way around gives the same result.

Any other ideas?

---------- Post updated at 07:11 PM ---------- Previous update was at 06:22 PM ----------

For information, if I now set up the AIX as iSCSI target and a Linux as initiator:

-- on the Linux iSCSI initiator:

dd if=/dev/sdb of=/dev/null
^C62697+0 records in
62696+0 records out
32100352 bytes (32 MB) copied, 16.6651 s, 1.9 MB/s

-- topas on the AIX iSCSI target:

Network    BPS  I-Pkts  O-Pkts    B-In   B-Out 
Total    1.66M   606.5   74.00   27.8K   1.63M

Note that my disk access on the AIX is pretty good: with 3 SAS disks in RAID 0 I easily hit 180MB/s, so that is not the issue here...

iSCSI performance can be greatly affected by a number of factors.

One of the major ones is the maximum payload (MTU) configured. Turning on jumbo frames, if both adapters support them, is usually a good start.
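On AIX that would be something along these lines (ent0/en0 are only example device names, and any switch in the path must also allow an MTU of 9000):

# detach the interface so the adapter attribute can be changed
ifconfig en0 detach
# enable jumbo frames on the physical adapter
chdev -l ent0 -a jumbo_frames=yes
# raise the interface MTU and bring the interface back up
chdev -l en0 -a mtu=9000 -a state=up

On the RHEL6 side the equivalent would be something like "ip link set dev eth0 mtu 9000" (eth0 being an example).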

There are other BIOS settings on FC adapters that also affect performance.
What adapters are you using?

Also, there are the usual I/O settings of write-through vs. write-back caching to consider.
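On the Linux tgt side that is, if I remember correctly, the write-cache parameter of the LUN, e.g. (target/LUN ids are placeholders):

# show the current targets and LUN parameters
tgtadm --lld iscsi --mode target --op show
# turn the write cache on (or off) for LUN 1 of target 1
tgtadm --lld iscsi --mode logicalunit --op update --tid 1 --lun 1 --params write-cache=on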

A quick search of the IBM Knowledge Center also yielded relevant documentation on AIX iSCSI configuration and tuning.

- Thanks for your answer. I understand that jumbo frames can increase performance, but even with normal frames I should be able to get at least 20-30MB/s.
- I don't understand the relation to an FC adapter, since iSCSI runs over the network and my disks are local disks (3 SAS disks in RAID 0).
- I will investigate the I/O system settings...

Yes, of course, FC was a typo, I meant network adapters.

I assume you have Gigabit adapters? What are they?

I have 4 gigabit Ethernet adapters, the embedded ones that come with the P710.
I am using only one of them. As I said, I am able to FTP at 125MB/s from RHEL6 to AIX: same network, same card. Just the iSCSI protocol seems to have a bottleneck somewhere...
The CPU is almost 99% idle and I have plenty of RAM.

Is there a network switch in between?

Have you tried a direct crossover cable to eliminate the switch?

I initially tested 2 LPARs on the same P710 using virtual Ethernet adapters and got the same result, 1.5MB/s. There was no switch in between, just the hypervisor:
one AIX 7.1 LPAR as target, the other AIX 7.1 LPAR as initiator.
It seems that in my config the AIX target is limited to 8MB/s and the AIX initiator to 1.5MB/s. That is not acceptable. I must change a parameter somewhere to get acceptable results, but I don't know which one...
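The only obvious candidates I can see are the attributes of the iSCSI protocol device and of the iSCSI hdisk itself (iscsi0/hdisk1 are the names on my box; the values below are only examples, not recommendations):

# list the tunables of the AIX iSCSI software initiator and of the iSCSI disk
lsattr -El iscsi0
lsattr -El hdisk1
# example changes (device must not be in use, otherwise add -P and reboot)
chdev -l iscsi0 -a num_cmd_elems=400
chdev -l hdisk1 -a queue_depth=16 -a max_transfer=0x100000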

I don't know if this can be useful

Thanks for helping. I had already tested your recommendation, but still the same.
I still think it is a limitation on AIX (a magic parameter to change?).
To prove it I have narrowed the issue down further: I installed RHEL 6.5 ppc64 on the P710 and set it up as iSCSI initiator, with the other RHEL 6.5 x86_64 machine (physical, not a VM) as iSCSI target.
I now get a really good iSCSI transfer rate!

[root@localhost ~]# dd if=/dev/sdb of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.90109 s, 108 MB/s
[root@localhost ~]# dd if=/dev/sdb of=/dev/null
^C1633537+0 records in
1633536+0 records out
836370432 bytes (836 MB) copied, 10.6675 s, 78.4 MB/s

[root@localhost ~]#

Any idea? Is there any magic parameter on AIX to change that may help?

I think I got my answer: I was simply doing dd if=/dev/hdiskX ... when I should have done dd if=/dev/rhdiskX ... Using the raw device instead of the block device increased the transfer rate significantly!
On Linux I was using the block device and didn't see such a difference compared to the raw device, which is why it was confusing.
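In other words, the comparison that gave it away looks like this (hdisk1 being the iSCSI LUN; bs and count are arbitrary):

# block device: reads go through the buffered interface in small chunks,
# so the bs= value has little effect (~1.5MB/s in my case)
dd if=/dev/hdisk1 of=/dev/null bs=1024k count=512
# raw (character) device: each read is issued at the requested block size
dd if=/dev/rhdisk1 of=/dev/null bs=1024k count=512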

Thanks for your help guys.
