Hi,
Currently we have a Sun Fire 480R running Solaris 9 and Oracle 9.2.0.8. The server is fibre-attached to a NetApp FAS3070, and two separate 100GB LUNs are presented to it.
The two LUNs are mounted as the file systems data and logs for the Oracle database. We are seeing high I/O on both of these 'disks' during database queries, while a comparable setup at another site does not show the same problem. Below is the output of iostat -xPnce:
It may be difficult to make out, but kr/s is high, and so is asvc_t. The %b column is averaging about 80% all the time, and I have read that a device sustaining more than about 5% busy (together with high service times) can indicate a bottleneck.
                    extended device statistics                ---- errors ----
  r/s   w/s   kr/s  kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
 29.4   1.2  337.6  13.0  0.0  1.0    0.0   34.0   0  72   0   0   0   0 c2t2d1s2
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c2t2d1s7
 32.8   0.8 5993.2   6.4  0.0  1.4    0.0   41.9   0  86   0   0   0   0 c2t2d2s2
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c2t2d2s7
 25.2   0.4  289.6   3.2  0.0  0.9    0.0   36.9   0  67   0   0   0   0 c3t2d1s2
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c3t2d1s7
 30.0   1.0 6707.6   9.8  0.0  1.4    0.1   46.0   0  80   0   0   0   0 c3t2d2s2
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c3t2d2s7
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t0d0s0
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t0d0s1
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t0d0s2
  0.0   0.6    0.0  14.6  0.0  0.0    0.0   10.7   0   0   0   0   0   0 c1t0d0s3
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t0d0s4
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t0d0s7
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t1d0s0
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t1d0s1
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t1d0s2
  0.0   0.6    0.0  14.6  0.0  0.0    0.0   14.3   0   1   0   0   0   0 c1t1d0s3
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t1d0s4
  0.0   0.0    0.0   0.0  0.0  0.0    0.0    0.0   0   0   0   0   0   0 c1t1d0s5
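In case it helps anyone reproducing this: a single iostat snapshot reports averages since boot, so I sample at intervals and discard the first block. The one-liner below is just a rough filter I put together (the 60% cut-off is arbitrary) to pull out the busy devices; with the -xPnce layout above, %b is field 10 and the device name is field 15.

```shell
# Take 3 samples at 5-second intervals (first sample = since-boot averages,
# so ignore its block) and print devices busier than 60%.
# Field positions assume the 'iostat -xPnce' column layout shown above.
iostat -xPnce 5 3 | awk 'NF == 15 && $10+0 > 60 { print $15, $10 "% busy" }'
```

Run against the steady-state samples, this consistently flags c2t2d2s2 and c3t2d2s2 (the data LUN paths) on our box.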
What can I check or change on the server (Unix or Oracle) or on the filer to try to improve performance? Thanks for your help.