HW RAID poor I/O performance

Hello all

We just built a storage cluster for our new XenServer farm, using 3ware 9650SE RAID controllers with 8 x 1 TB WD SATA disks in RAID 5 with a 256 KB stripe size.

While running the first performance tests on the local storage server with dd (which simulates the read/write access to the disk roughly the way the iSCSI target will do it later), we see very strange performance values.

Using dd with its default block size (the hardware-reported 512 bytes) directly on the device (/dev/sdb) gives around 44 MB/s write performance.

Using dd with a 1 MB block size (bs=1M) gives around 587 MB/s write performance.
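
For reference, the commands were essentially of this form (the counts shown here are only illustrative, and of course this is run on a scratch device only):

    dd if=/dev/zero of=/dev/sdb bs=512 count=2000000   # dd default 512-byte blocks -> ~44 MB/s
    dd if=/dev/zero of=/dev/sdb bs=1M count=1000       # 1 MB blocks -> ~587 MB/s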

Partition alignment also makes a huge difference: between 28 MB/s and 250 MB/s (at a 512-byte block size).
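
To illustrate the alignment part: one way to start the first partition on a full-stripe boundary would be something like the following (the 1792 KiB offset is just my own calculation of 7 data disks x 256 KB stripe, which may itself be wrong):

    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary 1792KiB 100%   # start the partition on a full-stripe boundary
    parted /dev/sdb align-check optimal 1         # check alignment against what the device reports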

The values are the same across different Linux distros: CentOS, Fedora 13, Ubuntu, SLES.

I know it must have something to do with the stripe size and scheduler settings such as queue_depth and nr_requests, but I can't see the relation between all these settings.
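
Just so it is clear which knobs I mean, these are the kinds of settings we have been poking at (the values are only examples we tried, not recommendations):

    cat /sys/block/sdb/queue/scheduler               # current I/O scheduler
    echo deadline > /sys/block/sdb/queue/scheduler   # e.g. switch to the deadline scheduler
    echo 512 > /sys/block/sdb/queue/nr_requests      # deeper block-layer request queue
    echo 1024 > /sys/block/sdb/queue/read_ahead_kb   # larger readahead
    cat /sys/block/sdb/device/queue_depth            # per-LUN queue depth at the SCSI layer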

Is there an expert who could give me a little help getting this sorted out? It would be very much appreciated, especially since we have been working on this issue for more than two weeks, have read all the available documentation on these topics, and the people from 3ware couldn't help us yet.

Thanks in advance.

Roland Kaeser

Hi Roland. Is there actually a problem here, and if so, what exactly is it? It is normal to get higher throughput with larger block sizes. 44 MB/s sequential write performance with a 512-byte block size directly to the device (no write combining) seems pretty decent to me. Is the amount of data you write much larger than the controller's write cache? What were you expecting?
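
If you want to take the page cache and the controller's write cache out of the picture, you could repeat the comparison with direct I/O and a data set well beyond the cache size, something along these lines (adjust the device and counts to your setup, and again only on a scratch device):

    dd if=/dev/zero of=/dev/sdb bs=1M count=8192 oflag=direct      # ~8 GB with 1 MB blocks, bypassing the page cache
    dd if=/dev/zero of=/dev/sdb bs=512 count=1000000 oflag=direct  # same with 512-byte blocks for comparison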