Hard disk write performance very slow

Dear All,

I have a hard disk on a Solaris system on which the write performance is very slow.

The CPU and RAM utilisation are absolutely fine.

What might be the reason?

Kindly explain.

Rj

Hello

Have you checked the output from an iostat -xnC 1?
Just to see what the device(s) are actually doing in terms of I/O, percentage busy, etc.?

That might give you a pointer as to whether it is the device that is flat out; if not, what is the configuration of your system?
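For what it's worth, a sketch of that invocation (the added -z flag, which hides idle devices, is just my suggestion to cut down the noise):

# iostat -xnCz 1

In the -xn style of output the columns to watch are asvc_t (average active service time per I/O) and %b (percentage of time the device was busy); a device sitting near 100 %b with a high asvc_t during the slow writes is the one that is saturated.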

Hi ....

Please find the output below.

root@IPTOBIS-DB-UAT # iostat -xtc 5 2
                 extended device statistics                    tty         cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  tin tout  us sy wt id
md0       0.0    0.1    2.1    1.3  0.0  0.0   37.4   0   0    0   13   2  0  0 98
md1       0.0    0.0    0.0    0.5  0.0  0.0   30.5   0   0
md3       0.0    0.4    0.5    0.6  0.0  0.0   39.6   0   0
md4       1.0    2.8   54.2   23.5  0.0  0.1   15.6   1   4
md5       0.1   13.6    3.5  108.7  0.0  0.2   11.4   0   9
md6       1.9    0.6  107.4    4.5  0.0  0.0    8.4   0   1
md7       1.2    8.0   57.2   66.7  0.0  0.2   20.3   0  11
md8       0.5    2.1   23.8   19.0  0.0  0.1   20.2   0   5
md9       0.0    0.8    0.5    8.0  0.0  0.0   27.5   0   2
md10      0.0    0.1    1.1    1.3  0.0  0.0   28.6   0   0
md11      0.0    0.0    0.0    0.5  0.0  0.0   34.3   0   0
md13      0.0    0.4    0.2    0.6  0.0  0.0   23.2   0   0
md14      0.5    2.8   27.1   23.5  0.0  0.0   10.4   0   3
md15      0.0   13.6    1.7  108.7  0.0  0.1    8.6   0   7
md16      1.0    0.6   53.8    4.5  0.0  0.0    7.4   0   1
md20      0.0    0.1    1.1    1.3  0.0  0.0   27.9   0   0
md21      0.0    0.0    0.0    0.5  0.0  0.0   33.6   0   0
md23      0.0    0.4    0.2    0.6  0.0  0.0   23.1   0   0
md24      0.5    2.8   27.1   23.5  0.0  0.0   10.5   0   3
md25      0.0   13.6    1.7  108.7  0.0  0.1    8.5   0   7
md26      1.0    0.6   53.6    4.5  0.0  0.0    7.6   0   1
md110     0.0    0.1    2.3    6.3  0.0  0.0   56.6   0   0
md120     0.1    0.1    8.3    5.8  0.0  0.0   91.5   0   1
sd0       1.9   18.4   84.0  139.5  0.0  0.2    9.7   0  11
sd1       1.9   18.4   83.7  139.5  0.0  0.2    9.7   0  11
sd2       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd5       8.2   10.6   59.8   87.0  0.0  0.1    6.7   0   8
sd6       9.2   12.7   67.8  103.5  0.0  0.1    6.6   0   9
sd7      10.2   14.6   75.5  119.4  0.0  0.2    7.2   0  10
sd8       9.2   12.7   67.7  103.3  0.0  0.2    7.3   0   9
nfs1      0.0    0.0    0.0    0.0  0.0  0.0    1.0   0   0
                 extended device statistics                    tty         cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  tin tout  us sy wt id
md0       0.2    6.6    1.6    5.4  0.1  0.3   60.5   9   9    6  422   6  1  0 94
md1      67.0    0.0  536.0    0.0  0.0  1.1   15.8   0  99
md3       0.0    3.6    0.0    2.1  0.0  0.2   54.9   3   4
md4       0.2   51.8    1.6  414.4  0.1  1.0   20.3   5  97
md5       0.2   70.4    1.6  563.2  0.0  1.2   17.2   0  99
md6       0.4    0.4  203.2    3.2  0.0  0.0   47.1   0   2
md7       4.8   33.2  305.6  265.6  0.0  0.6   16.0   0  46
md8       0.0    0.4    0.0    0.3  0.0  0.0   24.7   0   1
md9       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
md10      0.2    6.6    1.6    5.4  0.0  0.3   42.9   0   7
md11     33.4    0.0  267.2    0.0  0.0  0.5   14.2   0  47
md13      0.0    3.6    0.0    2.1  0.0  0.1   34.6   0   4
md14      0.0   51.8    0.0  414.4  0.0  0.7   13.0   0  65
md15      0.2   70.4    1.6  563.2  0.0  0.9   12.5   0  73
md16      0.2    0.4  184.0    3.2  0.0  0.0   27.6   0   1
md20      0.0    6.6    0.0    5.4  0.0  0.2   30.9   0   6
md21     33.6    0.0  268.8    0.0  0.0  0.6   17.5   0  55
md23      0.0    3.6    0.0    2.1  0.0  0.1   31.0   0   3
md24      0.2   51.6    1.6  412.8  0.0  0.7   13.1   0  67
md25      0.0   70.4    0.0  563.2  0.0  0.8   11.6   0  72
md26      0.2    0.4   19.2    3.2  0.0  0.0   33.2   0   1
md110     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
md120     0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd0      35.4  136.6  454.4  990.2  0.0  2.6   15.4   0  96
sd1      34.0  136.4  289.6  988.6  0.0  2.5   14.9   0  98
sd2       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd5      27.0   34.8  216.0  287.1  0.0  0.4    5.9   0  24
sd6      25.6   31.8  203.3  259.4  0.0  0.2    4.4   0  22
sd7      24.6   30.0  195.4  244.7  0.0  0.3    6.1   0  23
sd8      28.2   37.6  222.7  304.4  0.0  0.4    6.2   0  25
nfs1      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0

Can you see anything useful in this output?

Regards
JeganR


Maybe you should post your RAID configuration:

# metastat -p

Hi Duke,

Please see the metastat output below.

root@IPTOBIS-DB-UAT # metastat -c
d6               m   47GB d16 d26
   d16          s   47GB c1t0d0s6
   d26          s   47GB c1t1d0s6
d5               m  5.0GB d15 d25
   d15          s  5.0GB c1t0d0s5
   d25          s  5.0GB c1t1d0s5
d4               m   38GB d14 d24
   d14          s   38GB c1t0d0s4
   d24          s   38GB c1t1d0s4
d3               m   10GB d13 d23
   d13          s   10GB c1t0d0s3
   d23          s   10GB c1t1d0s3
d0               m   20GB d10 d20
   d10          s   20GB c1t0d0s0
   d20          s   20GB c1t1d0s0
d1               m   16GB d11 d21
   d11          s   16GB c1t0d0s1
   d21          s   16GB c1t1d0s1
d120             r   60GB c1t2d0s5 c1t3d0s5 c1t4d0s5 c1t5d0s5
d7               r   99GB c1t2d0s0 c1t3d0s0 c1t4d0s0 c1t5d0s0
d110             r   99GB c1t2d0s4 c1t3d0s4 c1t4d0s4 c1t5d0s4
d8               r   99GB c1t2d0s1 c1t3d0s1 c1t4d0s1 c1t5d0s1
d9               r   49GB c1t2d0s3 c1t3d0s3 c1t4d0s3 c1t5d0s3

OK, and which device are you having problems with?

The issue is with d7, d8 and d9.

If you have parallel reads and/or writes on d7, d8 and d9, they can get slow because they all sit on the same physical disks, and so do d110 and d120. That means you have five metadevices on four physical disks arranged as a RAID 5. I don't know what kind of traffic is on your system, but spreading the load across more devices on different controllers would, or at least could, improve the I/O performance of your filesystems.
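To make that overlap explicit, you could dump just those metadevices in create-command form (the exact interlace value metastat reports will depend on how the volumes were originally built, so I am only describing the expected shape of the output, not quoting it):

# metastat -p d7 d8 d9 d110 d120

Each of them should come back as a RAID 5 (-r) built on the same four slices of c1t2d0, c1t3d0, c1t4d0 and c1t5d0, which is exactly why I/O on any one of these volumes competes with all the others.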

I'm not familiar offhand with the output from "iostat -xtc 5 2" - I usually run something like "iostat -sndzx 2" to see the I/O going to the actual physical LUNs. But assuming that sd5, sd6, sd7 and sd8 correspond to the hard drives behind d7, d8 and d9, they are each sustaining 60 or 70 I/O operations per second and showing about 25% busy in the second set of output from the iostat command.

What kind of drives are they? What kind of IO are you doing? Random? Large streaming reads/writes?
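One rough way to answer that last question from the numbers already posted: in the second sample, sd5 shows 34.8 w/s against 287.1 kw/s, so the average write works out to roughly 287.1 / 34.8, about 8 KB (the reads come out near 216.0 / 27.0, also about 8 KB). That looks much more like small, random I/O, e.g. database blocks, than large streaming transfers, though it is only an estimate from averaged counters rather than a trace of the actual request sizes.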

Hi Duke,

At last I got a mail from Sun saying that we need to increase the interlace size for this RAID 5 volume.

But the RAID 5 volume already exists with a smaller interlace size.

Kindly tell me the steps we need to take before recreating the RAID 5 with the larger interlace size.

Thanks...


According to Sun, I need to increase the interlace size. How can we find out which device has the issue? Please also tell me the precautions we need to take before recreating the RAID 5 and resizing the interlace.
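For what it's worth, a rough sketch of the usual sequence for d7, assuming a UFS filesystem on it, using the slices from the metastat output above, and taking 512k purely as an illustrative interlace value (use whatever figure Sun actually recommended). Rebuilding a RAID 5 metadevice destroys its contents, so take and verify a backup of the data first, then unmount the filesystem and clear the old volume:

# umount /dev/md/dsk/d7
# metaclear d7

Recreate it with the larger interlace via the -i option, wait for the initialisation to complete (metastat d7 shows the state), then build a new filesystem and restore from the backup:

# metainit d7 -r c1t2d0s0 c1t3d0s0 c1t4d0s0 c1t5d0s0 -i 512k
# newfs /dev/md/rdsk/d7

The same would apply to d8 and d9 if they are rebuilt as well; check /etc/vfstab afterwards in case device paths or mount options need updating.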