---------- Post updated at 06:38 AM ---------- Previous update was at 03:05 AM ----------
Maybe the properties of the VHBA and the SCSI disk affect the performance, or vdbench itself causes this problem.
I don't know what the possible reasons are. Can anyone give me some suggestions to resolve this problem? Thanks!!
The result of running vdbench with 1 MB I/O:
Jun 22, 2011    interval  i/o     MB/sec   bytes    read    resp     resp     resp    cpu%     cpu%
                          rate    1024**2  i/o      pct     time     max      stddev  sys+usr  sys
16:06:21.052    31        782.00  782.00   1048576  100.00  149.715  161.195  0.493   4.5      4.3
16:06:22.051    32        781.00  781.00   1048576  100.00  149.697  161.233  0.475   4.5      4.2
16:06:23.051    33        781.00  781.00   1048576  100.00  148.282  154.836  2.734   4.7      4.3
The I/O rate is always very bad, especially with a 512-byte I/O size.
Now I suspect the DMA attributes and the buf structure passed to the scsi_init_pkt() function, but I don't fully understand them.
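Since you suspect the DMA side of scsi_init_pkt(): on Solaris, the limits your HBA driver publishes in its ddi_dma_attr(9S) structure (notably dma_attr_sgllen, dma_attr_maxxfer, and dma_attr_granular) control how the framework maps a buf(9S) for DMA; if sgllen or maxxfer is too small, a large request gets broken into many partial transfers. This is a hypothetical sketch showing which fields matter -- the values are illustrative assumptions, not your driver's actual settings:

```c
/* Hypothetical DMA attributes for a VHBA -- illustrative values only. */
#include <sys/ddi.h>
#include <sys/sunddi.h>

static ddi_dma_attr_t vhba_dma_attr = {
	DMA_ATTR_V0,            /* dma_attr_version */
	0x0000000000000000ull,  /* dma_attr_addr_lo */
	0xFFFFFFFFFFFFFFFFull,  /* dma_attr_addr_hi */
	0x00FFFFFFull,          /* dma_attr_count_max: max bytes per DMA cookie */
	1,                      /* dma_attr_align */
	1,                      /* dma_attr_burstsizes */
	1,                      /* dma_attr_minxfer */
	0x00FFFFFFull,          /* dma_attr_maxxfer: if < 1 MB, large I/O is split */
	0xFFFFFFFFull,          /* dma_attr_seg */
	256,                    /* dma_attr_sgllen: too small forces partial mapping */
	512,                    /* dma_attr_granular: device block size */
	0                       /* dma_attr_flags */
};
```

If the attributes can't cover a whole buf, the target driver ends up using PKT_DMA_PARTIAL and issuing multiple transfers per request, which could produce exactly the kind of throughput loss you are seeing. Checking your attribute structure against scsi_init_pkt(9F)'s partial-DMA behavior would be a good first step.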
When running vdbench with 512-byte I/O, the maximum I/O rate can reach 7k, while the rate through our VHBA is only 1.7k.
That is the problem, but I don't know why.
How are you doing your I/O? Via a file system or direct to the device? If direct, are you using the raw device? (rdsk vs. dsk)
Also, if you use just one thread, how does the VHBA vs. non-VHBA performance look? Maybe your threads are contending for resources in your VHBA code?
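To test both suggestions at once, a minimal vdbench parameter file can pin the run to the raw device with a single thread. This is a sketch; the LUN path is a placeholder you'd replace with your own device:

```
* Hypothetical vdbench parameter file -- raw device, single thread
sd=sd1,lun=/dev/rdsk/c2t0d0s2,threads=1
wd=wd1,sd=sd1,xfersize=512,rdpct=100,seekpct=100
rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1
```

Running the same file against a non-VHBA LUN isolates the VHBA path, and then raising threads= shows whether the gap grows with concurrency (suggesting lock/resource contention in the VHBA) or stays constant (suggesting fixed per-command overhead).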