How to test RAID10 array performance [Debian Wheezy]?

I have created a RAID10 array (near layout) from four 8 GiB virtual hard drives, giving a 16 GiB array (/dev/md0, formatted as ext4). I also have a 16 GiB RAID1 array (/dev/md1, also formatted as ext4).
The purpose of these setups is to compare the read and write performances of each array.

So far, I have used dd to perform the following tests:

dd (sequential write performance):

    dd if=/dev/zero of=/dev/md0 count=512 bs=1024k
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB) copied, 6.60623 s, 81.3 MB/s

    dd if=/dev/zero of=/dev/md1 count=512 bs=1024k
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB) copied, 5.74421 s, 93.5 MB/s

dd (sequential read performance):

    dd if=/dev/md0 of=/dev/null bs=4096k
    4093+1 records in
    4093+1 records out
    17168334848 bytes (17 GB) copied, 168.665 s, 102 MB/s

    dd if=/dev/md1 of=/dev/null bs=4096k
    4093+1 records in
    4093+1 records out
    17170300928 bytes (17 GB) copied, 44.6421 s, 385 MB/s

So I changed the RAID10 array layout to f2, and sequential read performance improved but the write performance decreased compared to n2 (which was the previous layout):

    dd if=/dev/md0 of=/dev/null bs=4096k
    4093+0 records in
    4093+0 records out
    17167286272 bytes (17 GB) copied, 110.424 s, 155 MB/s

    dd if=/dev/zero of=/dev/md0 count=512 bs=1024k
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB) copied, 6.84386 s, 78.4 MB/s
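
For reference, re-creating the array with the far layout would look something like this (a sketch only; the device names are the ones from my setup, and --create wipes the existing array):

    # stop the old array, then recreate it with the f2 (far, 2 copies) layout
    mdadm --stop /dev/md0
    mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=512 \
          --raid-devices=4 --spare-devices=1 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
    mkfs.ext4 /dev/md0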

I was expecting the RAID10 array to have better read performance than RAID1, but that isn't the case according to these tests, even though I have run them several times to rule out outliers. I have also benchmarked the same setups with iozone, with similar results.
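
An iozone run covering the same sequential tests would look something like this (the mount point is just an example, and the file size should exceed RAM so the cache doesn't dominate):

    # sequential write (-i 0) and read (-i 1), 1 GiB file, 1 MiB records,
    # with fsync included in the timing (-e)
    iozone -e -i 0 -i 1 -s 1g -r 1m -f /mnt/md0/iozone.tmp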

I am also aware that other factors may impact performance, such as the hardware used (perhaps virtual hard drives may not provide the best scenario?) and the filesystem.
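
(For completeness, flushing the page cache between runs, as root, is one way to keep cached reads from inflating the numbers, for example:)

    sync                                # flush dirty pages to disk first
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes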

That being said, what would be the best setup for a RAID10 array that will see more reads than writes? (I am also looking for reasons to choose RAID10 over RAID1 beyond the fault tolerance that RAID1 already provides.)

Any tips and ideas will be more than welcome.

Thanks in advance.

Hi Gacanepa,

You don't really provide enough information about the actual makeup of the disks. What backend are you using? That can make a significant difference. How are the devices accessed? There are a shedload more questions on this front.

In general (and it is a generalisation) RAID10 (a stripe of mirrors) should provide better performance than RAID1; the same is true for RAID01 (a mirror of stripes). But it is not really possible to comment further without knowing a bit more about your setup.

Regards

Dave

Dave,

First off, thank you for taking the time to reply to my post. Here are some other details that I hope will be helpful (if there is any other backend information that you need, please let me know):

-Both RAID arrays (1 & 10) were created in the same system.

-uname -a:

Linux debian 3.2.0-4-486 #1 Debian 3.2.60-1+deb7u3 i686 GNU/Linux

-From lshw:

     *-memory
          description: System memory
          physical id: 1
          size: 502MiB
     *-cpu
          product: AMD Athlon(tm) II X2 250 Processor
          vendor: Advanced Micro Devices [AMD]
          physical id: 2
          bus info: cpu@0
          version: 15.6.3
          size: 3GHz
          width: 32 bits

-mdadm --detail /dev/md0:

/dev/md0:
        Version : 1.2
  Creation Time : Wed Sep 24 10:02:18 2014
     Raid Level : raid10
     Array Size : 16764928 (15.99 GiB 17.17 GB)
  Used Dev Size : 8382464 (7.99 GiB 8.58 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Wed Sep 24 10:04:56 2014
          State : active 
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : far=2
     Chunk Size : 512K

           Name : debian:0  (local to host debian)
           UUID : 36b8594f:e14b8caf:df8dedff:99320083
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1

-mdadm --detail /dev/md1:

/dev/md1:
        Version : 1.2
  Creation Time : Wed Sep 24 10:07:57 2014
     Raid Level : raid1
     Array Size : 16767872 (15.99 GiB 17.17 GB)
  Used Dev Size : 16767872 (15.99 GiB 17.17 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Sep 24 10:14:22 2014
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : debian:1  (local to host debian)
           UUID : 98144751:72e778b7:dc92a91c:fffbfb43
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       1       8      113        1      active sync   /dev/sdh1

Hi,

Can you also post the output of the following commands:

pvdisplay
lvdisplay
cat /etc/mdadm/mdadm.conf

Regards

Dave

Dave,
Thanks again!
As for pvdisplay and lvdisplay, I should mention that these RAID arrays are not on LVM (but here is the output of those commands anyway):

root@debian:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               debian
  PV Size               9.76 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2498
  Free PE               0
  Allocated PE          2498
  PV UUID               0VU3Cr-2uLV-68Y8-W9hZ-uwav-APT7-or633a
   
root@debian:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/debian/root
  LV Name                root
  VG Name                debian
  LV UUID                Vuyq6i-Ctxr-nz4f-MIYi-x6fB-Fgif-Gjommv
  LV Write Access        read/write
  LV Creation host, time debian, 2014-08-12 21:27:22 -0300
  LV Status              available
  # open                 1
  LV Size                332.00 MiB
  Current LE             83
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0
   
  --- Logical volume ---
  LV Path                /dev/debian/usr
  LV Name                usr
  VG Name                debian
  LV UUID                VIKvTb-Kl4V-l2Uw-K5Zp-vjjG-BCoP-MFO2d5
  LV Write Access        read/write
  LV Creation host, time debian, 2014-08-12 21:27:23 -0300
  LV Status              available
  # open                 1
  LV Size                3.41 GiB
  Current LE             874
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2
   
  --- Logical volume ---
  LV Path                /dev/debian/var
  LV Name                var
  VG Name                debian
  LV UUID                LPUuFP-JX6Q-ShWJ-otNN-aBvu-DKNg-SxJfcv
  LV Write Access        read/write
  LV Creation host, time debian, 2014-08-12 21:27:23 -0300
  LV Status              available
  # open                 1
  LV Size                1.66 GiB
  Current LE             425
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3
   
  --- Logical volume ---
  LV Path                /dev/debian/swap_1
  LV Name                swap_1
  VG Name                debian
  LV UUID                fLMGxe-ksTR-VNbM-rknH-G6OR-Aq1z-Q2RIwP
  LV Write Access        read/write
  LV Creation host, time debian, 2014-08-12 21:27:23 -0300
  LV Status              available
  # open                 2
  LV Size                560.00 MiB
  Current LE             140
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1
   
  --- Logical volume ---
  LV Path                /dev/debian/tmp
  LV Name                tmp
  VG Name                debian
  LV UUID                BgKzko-pArn-LGq9-gLRG-xYFM-clc1-dJkzyN
  LV Write Access        read/write
  LV Creation host, time debian, 2014-08-12 21:27:23 -0300
  LV Status              available
  # open                 1
  LV Size                300.00 MiB
  Current LE             75
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:4
   
  --- Logical volume ---
  LV Path                /dev/debian/home
  LV Name                home
  VG Name                debian
  LV UUID                viKzZn-oOmC-JI88-itWv-26vw-eVoG-6RWWGL
  LV Write Access        read/write
  LV Creation host, time debian, 2014-08-12 21:27:23 -0300
  LV Status              available
  # open                 1
  LV Size                3.52 GiB
  Current LE             901
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:5
   
root@debian:~#

mdadm.conf (I just realized the definition for /dev/md1 is missing, but I fail to see how that relates to why the RAID10 array, with its current configuration, is not performing as well as it should):

# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=d9a83766:cdec2926:53d85d4a:01f535ca name=debian:0

# This file was auto-generated on Wed, 10 Sep 2014 23:25:15 -0300
# by mkconf 3.2.5-5

Hi,

The missing entry in mdadm.conf is a problem, though probably not the cause of your issue. To eliminate it, you should update the file and re-run the test.

I think you can get the correct information by running:

mdadm --detail /dev/md/1
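
Something along these lines should generate the missing entry (a sketch only; check what it prints before appending, and note the existing md0 line may also be stale if the array was re-created and its UUID changed):

    mdadm --detail --scan     # prints an ARRAY line for each running array
    # copy the /dev/md1 line into /etc/mdadm/mdadm.conf, then rebuild the
    # initramfs so the change is picked up at boot
    update-initramfs -u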

Also, a quick question: how did you create the "virtual drives"?

Regards

Dave

I created the virtual drives via the VirtualBox wizard (select VM --> Settings --> Storage --> SATA --> Add new disk).

Hi,

This will explain things, I think.

But it does give rise to a couple of other questions, the first being how many physical drives there are in the system. If the answer is fewer than five, you will see degradation in read and write performance, as it is likely that behind the hypervisor (in your case VirtualBox) there will be fewer paths than defined devices.

The second question is whether you are in fact running VirtualBox on a physical server or within a virtual one.

Regards

Dave

1) I have only one physical drive in the system.
2) I am running VirtualBox on a physical server (the host).

Hi,

Unfortunately, having only a single drive here is going to skew these figures; to test this properly you'll need multiple physical drives.

There are multiple impacts here, mostly from the hard drive: seek time and latency will be major contributors, as VirtualBox only creates a contiguous file if you tell it to preallocate all the disk space. Otherwise it will scatter the writes all over the disk as the file grows on demand.
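
If you do want the backing file laid out contiguously, you can create a fixed-size (preallocated) VDI up front, something like this (the file name and size are just examples):

    # create an 8 GiB fixed-size image instead of a dynamically allocated one
    VBoxManage createhd --filename raid_disk1.vdi --size 8192 --variant Fixed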

I don't think that this is really a suitable way to evaluate the performance of any RAID as it's all on one physical disk.

Regards

Dave

I agree with you. I only gave it a try because I didn't have any other setup available to test. But in theory (leaving VirtualBox aside, and assuming I had the actual physical drives available), reads should be roughly twice as fast as with RAID1, and writes about as fast. Am I correct?

Hi,

RAID10 will be faster, but I'm not sure it will be twice as fast. Generally, better performance comes from an increase in the number of spindles. There are also benefits to be had from how writes are committed, but there can be a certain amount of trial and error; a great many factors influence the performance of a RAID array, which is why there are a great many books and courses on the subject.

Regards

Dave

That depends on what the bottleneck is in whatever configuration you're testing, and how you're testing it.

Hi.

Here is a report on several disk configurations that was done in 2009 using the benchmark tool bonnie++ (on a 32-bit, 2-CPU Athlon box). The Debian package description for bonnie++ reads:

Description-en: Hard drive benchmark suite.
 It is called Bonnie++ because it was based on the Bonnie program.  This
 program also tests performance with creating large numbers of files.

It can be found in the Debian repositories.
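
If anyone wants to try it, an invocation along these lines is typical (the directory and sizes are just examples; the file size should be well above RAM so the cache doesn't dominate):

    # 2 GiB test file, ~502 MiB of RAM reported to bonnie++;
    # -u is required when invoking it as root
    bonnie++ -d /mnt/md0 -s 2048 -r 502 -u root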

Currently I use RAID10 with 4 SATA disks with LVM on top of the RAID on a virtual-machine server. I have not run the bonnie++ benchmarks on that system, but perhaps I will now that I see that there is some interest.

The only reason I would use RAID10 with fewer than 4 disks is practice (setup and so on), not performance.

Best wishes ... cheers, drl