RAID 0+1 + Oracle: getting slow data reads

howdy,

Issue: Slow db data access. Oracle 7.3.4
OS: Sol8 +current patches
Hardware:
Model: E3600
CPU: (4) 336 MHz US-II
Memory: 2048 MB
Disk Arrays: (2) D1000 (each on its own I/O card)
Drives: (5 per disk array) Cheetah 9LP
Capacity: 9.1 GB
Speed: 10,000 rpm
Average Read Time: 5.4 ms
FS: UFS
Raid: 0+1

OK, to break the disks down, this is how it is laid out:

# Creates one stripe across 4 disks with a 128k interlace size
metainit d101 1 4 c1t1d0s0 c1t2d0s0 c1t8d0s0 c1t9d0s0 -i 128k

# Creates one stripe across 4 disks with a 128k interlace size
metainit d102 1 4 c2t1d0s0 c2t2d0s0 c2t8d0s0 c2t9d0s0 -i 128k

# Mirrors the newly created striped volumes
metainit d100 -m d101 d102

# Creates a filesystem of 70652925 blocks (the entire volume) on the new volume
mkfs -F ufs /dev/md/rdsk/d100 70652925

# Add to /etc/vfstab
/dev/md/dsk/d100 /dev/md/rdsk/d100 /lv01 ufs 2 yes largefiles,logging
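
(For reference: the DiskSuite docs recommend building the mirror in two steps so the second submirror gets a proper resync; the single metainit line above is only safe when both sides are brand new, as they were here. The two-step version would look roughly like this:)

# Create the mirror with only the first submirror...
metainit d100 -m d101
# ...then attach the second submirror, which triggers a full resync
metattach d100 d102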

There is never any wait time on d101 or d102, but there is wait time on the mirror, which I think is slowing down the db reads/writes. Has this happened to anyone? Any ideas on how I can set up the mirroring a bit more optimally, or any alternatives for disk redundancy with the current hardware? I also think the huge number of faults comes from the r/w to d100; the average number of sys faults is only about 200-300 when the db is turned off. Any help would be appreciated.
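
In case it's relevant: since the metainit above didn't pass any mirror options, d100 should still be on the default round-robin read policy and parallel writes. A sketch of how the read policy could be changed with metaparam (no idea yet whether it would actually help here):

# Show the current mirror options (read/write policy, pass number)
metaparam d100
# Switch reads from the default round-robin to geometric, which splits
# the logical address range between the two submirrors
metaparam -r geometric d100
# Put it back if it makes no difference
metaparam -r roundrobin d100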

VMSTAT (db off)
---------------
procs     memory            page            disk          faults      cpu
 r b w   swap  free  si  so pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 2566232 1532960 0  0  0  0  0  0  0  0  0  0  0  709  233  435  0  0 100
 0 0 0 2566232 1532960 0  0  0  0  0  0  0  9  9  9  0  680  191  328  0  1 99
 0 0 0 2566232 1532960 0  0  0  0  0  0  0  0  0  0  0  722  244  472  0  0 100
 0 0 0 2566232 1532960 0  0  0  0  0  0  0  0  0  0  0  632  198  307  0  0 100
 0 0 0 2566232 1532960 0  0  0  0  0  0  0  0  0  0  0  719  256  448  0  0 100
 0 0 0 2566232 1532960 0  0  0  0  0  0  0  0  0  0  0  636  351  298  0  0 100
 0 0 0 2566328 1532912 0  0  0  0  0  0  0  0  0  0  0  710  449  447  0  0 99
 0 0 0 2566640 1533080 0  0  0  0  0  0  0  0  0  0  0  641  247  297  0  0 100
 0 0 0 2566640 1533080 0  0  0  0  0  0  0  0  0  0  0  702  282  414  0  1 99
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  621  196  277  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  708  233  422  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  626  181  271  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  705  244  437  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  624  191  266  0  1 99
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  709  227  422  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  621  184  273  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0 11 11 11  0  766  249  467  0  1 99
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  626  191  279  0  0 100
 0 0 0 2566848 1533288 0  0  0  0  0  0  0  0  0  0  0  718  261  442  0  0 100


VMSTAT (db on)
------
procs     memory            page            disk          faults      cpu
 r b w   swap  free  si  so pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 2230728 1293048 0  0 1013 0 0  0  0  2  2  4  0 1725 8502 1838 12  4 84

 procs     memory            page            disk          faults      cpu
 r b w   swap  free  si  so pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 0 0 1968392 1002600 0  0 60  0  0  0  0  0  0  0  0 1630 11978 1870 14 4 82
 0 1 0 1968392 1002784 0  0 64  0  0  0  0 17 17 17  0 1610 8593 1924 10  5 84
 0 1 0 1968392 1003176 0  0 44  0  0  0  0  0  0  0  0 1501 7437 1724 25  2 72
 0 1 0 1969616 1004624 0  0 52  0  0  0  0  0  0  0  0 1480 7190 1768 12  3 85
 0 1 0 1969616 1004560 0  0 52  0  0  0  0  0  0  0  0 1477 7879 1747  9  3 88
 0 1 0 1969616 1004440 0  0 40  0  0  0  0  0  0  0  0 1490 10292 1812 13 3 84
 0 0 0 1969368 1004136 0  0 64  0  0  0  0  0  0  0  0 1374 9334 1655 10  4 85
 0 1 0 1969392 1004104 0  0 48  0  0  0  0  0  0  0  0 1467 7607 1795  8  3 89
 0 0 0 1969392 1003848 0  0 60  0  0  0  0  0  0  0  0 1408 7123 1688  8  3 89
 0 0 0 1969600 1003936 0  0 112 0  0  0  0  0  0  0  0 1478 7420 1795 10  3 86
 0 1 0 1969600 1003880 0  0 96  0  0  0  0  0  0  0  0 1384 8641 1583 21  3 76
 0 1 0 1969600 1003808 0  0 88  0  0  0  0  0  0  0  0 1450 11136 1747 12 5 83
 0 0 0 1969600 1003736 0  0 92  0  0  0  0  0  0  0  0 1318 6942 1526  8  2 90
 0 1 0 1969600 1003592 0  0 64  0  0  0  0  0  0  0  0 1521 7547 1864  8  3 89
 0 1 0 1969600 1003496 0  0 96  0  0  0  0  0  0  0  0 1398 7539 1670  9  3 87
 0 1 0 1969856 1003552 0  0 60  4  0  0  0  0  0  0  0 1392 5949 1702  7  3 90
 0 0 0 1970120 1003544 0  0 96  0  0  0  0 12 12 12  0 1439 11012 1663 14 5 81
 0 2 0 1970120 1003424 0  0 76  0  0  0  0  0  0  0  0 1447 7258 1752  8  2 90
 0 1 0 1970120 1003200 0  0 136 0  0  0  0  0  0  0  0 1420 7491 1669  8  4 88
 procs     memory            page            disk          faults      cpu
 r b w   swap  free  si  so pi po fr de sr m1 m1 m1 m2   in   sy   cs us sy id
 0 1 0 1970120 1004048 0  0 1200 0 0  0  0  0  0  0  0 1587 7442 1897  9  2 89
 0 0 0 1970072 1004744 0  0 136 0  0  0  0  0  0  0  0 1532 7453 1798 28  5 67
 0 1 0 1970072 1004560 0  0 132 0  0  0  0  0  0  0  0 1549 12202 1903 16 4 80
 0 2 0 1970704 1004888 0  0 140 0  0  0  0  0  0  0  0 1568 7905 1931  8  2 90
 0 2 0 1970704 1004768 0  0 112 0  0  0  0  0  0  0  0 1433 6897 1746  7  2 92
 0 1 0 1970704 1004712 0  0 120 0  0  0  0  0  0  0  0 1354 6887 1568  8  3 89
 0 1 0 1970704 1004864 0  0 1504 0 0  0  0  0  0  0  0 1428 7389 1649 16  3 81
 0 2 0 1970704 1006592 0  0 272 0  0  0  0  0  0  0  0 1330 9447 1518 13  4 83
 0 2 0 1970384 1006152 0  0 232 0  0  0  0  0  0  0  0 1467 8473 1801 10  3 88
 0 0 0 1970072 1005936 0  0 124 0  0  0  0  0  0  0  0 1395 7050 1648  8  3 89
 0 1 0 1970072 1005720 0  0 108 0  0  0  0  0  0  0  0 1555 7466 1947  8  3 89
 0 0 0 1970072 1005528 0  0 148 0  0  0  0  0  0  0  0 1510 8143 1843  9  3 88
 0 1 0 1970704 1005856 0  0 140 0  0  0  0 12 12 12  0 1608 7829 1982  8  5 87
 0 0 0 1970704 1005744 0  0 212 0  0  0  0  0  0  0  0 1918 12833 2175 17 5 78
 0 0 0 1970664 1005712 0  0 172 0  0  0  0  0  0  0  0 2500 11230 2846 8  5 86
 0 1 0 1970664 1005576 0  0 236 0  0  0  0  0  0  0  0 2474 11241 2787 11 4 85
 0 1 0 1970664 1005472 0  0 176 0  0  0  0  0  0  0  0 2052 9557 2367  9  3 87
 0 0 0 1970456 1005120 0  0 204 0  0  0  0  0  0  0  0 1353 6934 1616  7  4 88
 0 0 0 1970240 1004808 0  0 220 0  0  0  0  0  0  0  0 1500 11362 1804 14 4 82



IOSTAT (-Cxnz)
-------------
 r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   11.5  142.0  632.0  911.7  0.0  1.2    0.0    7.7   0 106 c1
   10.5  142.0  476.0  911.8  0.0  1.2    0.0    7.7   0 106 c2
   15.0  142.0 1108.0  911.7  0.0  1.4    0.0    9.1   0  97 d100
    7.5  142.0  632.0  911.7  0.0  1.2    0.0    7.8   0  86 d101
    7.5  142.0  476.0  911.7  0.0  1.2    0.0    7.7   0  85 d102
    3.0   32.5  220.0  260.0  0.0  0.3    0.0    7.3   0  23 c1t1d0
    2.5   60.0  120.0  278.2  0.0  0.5    0.0    7.9   0  45 c1t2d0
    3.0   29.0  196.0  209.5  0.0  0.2    0.0    7.6   0  21 c1t8d0
    3.0   20.5   96.0  164.0  0.0  0.2    0.0    8.0   0  17 c1t9d0
    2.0   32.5   88.0  260.0  0.0  0.2    0.0    7.1   0  21 c2t1d0
    2.5   60.0  112.0  278.3  0.0  0.5    0.0    8.1   0  47 c2t2d0
    3.5   29.0   56.0  209.5  0.0  0.3    0.0    7.7   0  22 c2t8d0
    2.5   20.5  220.0  164.0  0.0  0.2    0.0    7.5   0  17 c2t9d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   13.5  146.0  752.1  902.3  0.0  1.3    0.0    8.0   0 115 c1
   13.0  142.0  580.1  900.3  0.0  1.2    0.0    7.5   0 106 c2
   16.5  141.0 1332.1  899.8  0.0  1.4    0.2    9.0   3  97 d100
    8.0  141.0  752.1  899.8  0.0  1.2    0.0    7.8   0  86 d101
    8.5  141.0  580.1  899.8  0.0  1.1    0.0    7.5   0  84 d102
    0.0    1.0    0.0    0.5  0.0  0.0    0.0    7.5   0   1 c1t0d0
    4.0   32.0  216.0  241.0  0.0  0.3    0.0    8.1   0  26 c1t1d0
    3.0   25.5  200.0  189.0  0.0  0.3    0.0    8.9   0  23 c1t2d0
    3.0   60.5  204.0  255.8  0.0  0.5    0.0    7.7   0  44 c1t8d0
    3.5   27.0  132.0  216.0  0.0  0.2    0.0    7.8   0  21 c1t9d0
    0.0    1.0    0.0    0.5  0.0  0.0    0.0    6.6   0   1 c2t0d0
    3.0   30.0   56.0  240.0  0.0  0.2    0.0    6.4   0  20 c2t1d0
    4.0   23.5  176.0  188.0  0.0  0.2    0.0    8.3   0  20 c2t2d0
    3.0   60.5  160.0  255.8  0.0  0.5    0.0    8.0   0  46 c2t8d0
    3.0   27.0  188.0  216.0  0.0  0.2    0.0    6.9   0  19 c2t9d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    8.5  147.5  428.0  898.7  0.0  1.2    0.0    7.5   0 107 c1
    6.5  146.0  288.0  901.7  0.0  1.0    0.0    6.9   0  97 c2
    0.0    0.5    0.0    4.0  0.0  0.0    0.0   15.1   0   1 d30
    0.0    0.5    0.0    4.0  0.0  0.0    0.0   10.4   0   1 d31
    0.0    0.5    0.0    4.0  0.0  0.0    0.0   15.1   0   1 d32
   10.5  145.0  716.0  893.7  0.0  1.3    0.1    8.3   2  95 d100
    5.5  145.0  428.0  893.7  0.0  1.1    0.0    7.3   0  85 d101
    5.0  145.5  288.0  897.7  0.0  1.0    0.0    6.9   0  81 d102
    0.0    0.5    0.0    4.0  0.0  0.0    0.0   15.0   0   1 c1t0d0
    2.0   23.0  104.0  176.5  0.0  0.2    0.0    7.6   0  17 c1t1d0
    1.5   35.5  108.0  276.5  0.0  0.3    0.0    7.1   0  25 c1t2d0
    2.0   62.0   60.0  229.7  0.0  0.5    0.0    7.6   0  45 c1t8d0
    3.0   26.5  156.0  212.0  0.0  0.2    0.0    7.5   0  20 c1t9d0
    0.0    0.5    0.0    4.0  0.0  0.0    0.0   10.4   0   1 c2t0d0
    1.5   22.0  116.0  176.0  0.0  0.2    0.0    6.6   0  14 c2t1d0
    1.5   34.5   16.0  276.0  0.0  0.2    0.0    6.4   0  21 c2t2d0
    1.5   62.5   72.0  233.7  0.0  0.4    0.0    6.9   0  42 c2t8d0
    2.0   26.5   84.0  212.0  0.0  0.2    0.0    7.6   0  19 c2t9d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    6.5  140.0  372.0  872.9  0.0  1.1    0.0    7.5   0  99 c1
    7.5  138.0  656.0  864.4  0.0  1.1    0.0    7.4   0  98 c2
   13.5  138.5 1027.9  868.5  0.0  1.4    0.1    8.9   2  97 d100
    6.5  139.0  372.0  872.5  0.0  1.1    0.0    7.4   0  84 d101
    7.0  138.0  656.0  864.5  0.0  1.1    0.0    7.4   0  82 d102
    3.5   26.0  152.0  204.2  0.0  0.2    0.0    6.8   0  18 c1t1d0
    0.5   20.5   64.0  160.2  0.0  0.1    0.0    6.7   0  14 c1t2d0
    1.5   62.5   76.0  260.5  0.0  0.5    0.0    7.6   0  46 c1t8d0
    1.0   31.0   80.0  248.0  0.0  0.3    0.0    8.4   0  22 c1t9d0
    1.5   25.5   72.0  204.0  0.0  0.2    0.0    6.1   0  16 c2t1d0
    2.0   20.0  252.0  160.0  0.0  0.2    0.0    7.2   0  15 c2t2d0
    2.5   61.5  200.0  252.5  0.0  0.5    0.0    7.7   0  45 c2t8d0
    1.5   31.0  132.0  248.0  0.0  0.3    0.0    7.9   0  22 c2t9d0

iostat -xne

Is that a question or a statement?

But to answer your declarative question (heh)...
no errors, ever.

First, you should be working with your Oracle DBA. If you and the DBA don't work together then you are wasting your time.

Now that I'm off the soapbox...

Check out the following info docs at Sunsolve:

Persistent Write contention How to tune (examples with Oracle)
IPC parameters - ora.init examples

Be aware that your version of Oracle isn't supported anymore (except with an extended contract), and you won't be able to use it if you upgrade to Solaris 9 (see the Compatibility matrix).

Another factor may be your choice of DiskSuite over Veritas. Only an opinion, but I've never seen Oracle without Veritas on Sun; DiskSuite just isn't worth it when dealing with Oracle.

Can you post your /etc/system file? There should be changes made for the Oracle processes.
You should also be looking at the Oracle log files for errors; it may be that there are changes on the Oracle side that really need to be made. Also: what exactly is on the one disk that you are showing as having a problem? How many different databases are there (on that disk and in total)? Are the logging files on different drives than the database(s)? Who is running what inside Oracle when you see these problems (sorts inside Oracle will kill a server)?

set shmsys:shminfo_shmmax=2147483648
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmns=830
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=210
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: misc/md_mirror
forceload: drv/sbus
forceload: drv/isp
forceload: drv/sd
rootdev:/pseudo/md@0:0,12,blk
* End MDD root info (do not edit)
* Begin MDD database info (do not edit)
set md:mddb_bootlist1="sd:14:16 sd:14:1050 sd:14:2084 sd:22:16 sd:22:1050"
set md:mddb_bootlist2="sd:22:2084 sd:62:16 sd:62:1050 sd:62:2084 sd:70:16"
set md:mddb_bootlist3="sd:70:1050 sd:70:2084 sd:6:16 sd:6:1050 sd:126:16"
set md:mddb_bootlist4="sd:126:1050 sd:134:16 sd:134:1050 sd:134:2084"
set md:mddb_bootlist5="sd:142:16 sd:142:1050 sd:142:2084 sd:182:16"
set md:mddb_bootlist6="sd:182:1050 sd:182:2084 sd:190:16 sd:190:1050"
set md:mddb_bootlist7="sd:190:2084"
* End MDD database info (do not edit)

Unfortunately, our Oracle DBA is pretty adamant that it is not the db but the system.

The DBA said he is not seeing any errors in his logs.
There is only one db running. All users connect remotely through a software package (i2, which is a transportation software suite).

I can't disagree, but I can't agree either.

I truly wish I could just test different parameters, but alas, it is a production box and we were forced to use the current versions of software. New hardware does not look like a feasible idea in my future, either.

How long has this server been set up this way (current versions of OS and Oracle)?

Have you had sar collecting information on the server before this?
Were there any old statistics that you could refer to? Compare the original numbers to what you are seeing now (whether from sar or the various *stat commands); if you can't compare, you have a harder time showing what is really wrong.
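
If sar has never been set up, one standard way to start collecting for next time (a sketch of the stock Solaris setup; adjust the schedule to taste) is to uncomment the sa1/sa2 entries in the sys crontab:

# crontab -e sys   (as root) -- the default collection entries look roughly like:
0 * * * 0-6 /usr/lib/sa/sa1
20,40 8-17 * * 1-5 /usr/lib/sa/sa1
5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A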

Post the init.ora file (should be able to find it in a directory below $ORACLE_HOME). I'll ask our DBA (who does work with us on problems here) what he thinks of all of this and what else you can look for to see if you have to put your DBA in his/her place.

For you to fix the problem you need to know what you can look at to help solve it (from both the OS and Oracle sides).

Quick Oracle overview

Tuning Oracle, especially: Identify Possible Bottlenecks

You show no errors in /var/adm/messages or metastat?

I'll post the init.ora file in the morning when I get back in to work.

Also, the first post has all the OS and app software versions listed.

No errors at all.

Just like the first post said: no problem with the system when the db is down, only when the db is up. I think it might have something to do with the mirroring.

Please read my comments at the bottom of my first post for my theory.

The following lines are those that the DBA asked me to put
in a box with 4 GB of RAM:

set maxusers=2048
set msgsys:msginfo_msgmax=16384
set msgsys:msginfo_msgmnb=16384
set msgsys:msginfo_msgmni=2200
set msgsys:msginfo_msgtql=2500
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=2500
set semsys:seminfo_semmnu=2500
set semsys:seminfo_semmsl=300
set semsys:seminfo_semopm=100
set semsys:seminfo_semume=2500
set shmsys:shminfo_shmmax=3865470566
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmseg=1024

Note: 32767 is the default for seminfo_semvmx.

Only for information.
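
If it helps, sysdef will show which shared memory and semaphore values the running kernel actually picked up after the reboot:

# Verify the live IPC tunables against what was set in /etc/system
sysdef | grep -i shm
sysdef | grep -i sem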

Regards. Hugo.

Optimus_P, I wasn't asking for the versions - I know you posted them - I was asking how long the server has been set up this way (sorry about the misunderstanding).

You have the mirror and drives set up correctly, as long as any redo or archive logs are not on them (see Databasejournal.com - file layout for SQL, and Eng Auburn EDU - Oracle disk layout; the latter is a good example of how to do it).
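
If the redo or archive logs have ended up on d100, a small dedicated mirror for them is one option. A rough sketch only; the c1t10d0/c2t10d0 targets below are made up, use whatever the spare fifth drive in each D1000 really is:

# One-disk concat on the spare drive in each array (hypothetical targets)
metainit d201 1 1 c1t10d0s0
metainit d202 1 1 c2t10d0s0
# Mirror them, attaching the second side so it resyncs
metainit d200 -m d201
metattach d200 d202
newfs /dev/md/rdsk/d200
# /etc/vfstab entry; then have the DBA relocate the redo logs under /redo
/dev/md/dsk/d200 /dev/md/rdsk/d200 /redo ufs 2 yes logging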

There never is!

And if Oracle isn't set up properly, any of the multitude of variables that an Oracle DBA could change to affect performance (for both good and bad results) could cause this.

You really need to be able to work with the DBA - or learn enough about Oracle to be able to say for certain it isn't on that side.

The machine has been set up and running for about two weeks.

Unfortunately, here at my work they don't understand the importance of testing or a well-planned upgrade strategy.

These are my 4 steps to do over the weekend with this machine, as far as FS tuning goes.

1) Add the following entries to the /etc/system file (to enable cluster sizes larger than 128k)
*
* Allow larger SCSI I/O transfers; parameter is in bytes (I can probably scale this down to 512k)
*
set maxphys = 1048576

*
* Allow larger DiskSuite I/O transfers; parameter is in bytes (I can probably scale this down to 512k)
*
set md_maxphys = 1048576

2) Set the new cluster size to 512k for the FS (/dev/md/dsk/d100)
# RAID 0 striping: cluster size = stripe members x interlace = 4 x 128k = 512k (tunefs -a is in 8k filesystem blocks, so 64)
tunefs -a 64 /dev/md/dsk/d100

3) Set the UFS write throttle
*
* ufs_LW = 1/128th of memory
* ufs_HW = 1/64th of memory
*
set ufs_LW=16777216
set ufs_HW=33554432

4) Disable FS caching: edit /etc/vfstab and add the options nologging,forcedirectio (example line below)
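
The resulting vfstab entry for d100 should then look something like this (keeping largefiles from the original entry):

/dev/md/dsk/d100 /dev/md/rdsk/d100 /lv01 ufs 2 yes largefiles,nologging,forcedirectio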

Here is the init.ora file

db_name = DEFAULT

db_file_multiblock_read_count = 8                                     # SMALL

db_block_buffers = 60                                                 # SMALL

shared_pool_size = 3500000                                            # SMALL

log_checkpoint_interval = 10000

processes = 50                                                        # SMALL

dml_locks = 100                                                       # SMALL

log_buffer = 8192                                                     # SMALL

sequence_cache_entries = 10                                           # SMALL

sequence_cache_hash_buckets = 10                                      # SMALL

max_dump_file_size = 10240      # limit trace file size to 5 Meg each

# Global Naming -- enforce that a dblink has same name as the db it connects to
global_names = TRUE

# Edit and uncomment the following line to provide the suffix that will be
# appended to the db_name parameter (separated with a dot) and stored as the
# global database name when a database is created.  If your site uses
# Internet Domain names for e-mail, then the part of your e-mail address after
# the '@' is a good candidate for this parameter value.

# db_domain = us.acme.com       # global database name is db_name.db_domain

# FOR DEVELOPMENT ONLY, DEFAULT TO SINGLE-PROCESS
# single_process = TRUE

# FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
# vms_sga_use_gblpagfil = TRUE

# FOR BETA RELEASE ONLY.  Enable debugging modes.  Note that these can
# adversely affect performance.  On some non-VMS ports the db_block_cache_*
# debugging modes have a severe effect on performance.

_db_block_cache_protect = true                       # memory protect buffers
event = "10210 trace name context forever, level 2" # data block checking
event = "10211 trace name context forever, level 2" # index block checking
event = "10235 trace name context forever, level 1" # memory heap checking
event = "10049 trace name context forever, level 2" # memory protect cursors

# define two control files by default
control_files = (ora_control1, ora_control2)

Well, after I made my FS changes, the disks got really busy: 95+% all the time. So I think we could definitely use faster disks, or more disks.

Also, Oracle 7 and Solaris 8 have issues, according to the Oracle helpdesk.

The db has been moved to an NT box for the time being.

Can you expand on the Oracle/Solaris issues?

Please define what you want expanded from what has already been stated.

What are the issues that the Oracle helpdesk mentioned?

I have no clue. I didn't talk to them; the Oracle DBA did.

I looked at the init.ora parameters. I don't think that the init.ora is set optimally. It seems that the DBA is bluffing, because these parms are straight out of the box...

He needs to do some performance tuning on his side...
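
To give a feel for what "out of the box" means here: the posted file is basically the stock SMALL sample. Purely as a hypothetical illustration (the real sizes have to be worked out by the DBA against the application and the 2 GB of RAM), a tuned 7.3.4 init.ora would normally raise at least these:

# Illustration only -- values must be sized by the DBA, not copied
db_block_buffers = 20000          # vs. 60 above; 60 buffers is a tiny cache
shared_pool_size = 52428800       # ~50 MB vs. ~3.5 MB above
log_buffer = 524288               # vs. 8 KB above
processes = 200                   # vs. 50
db_file_multiblock_read_count = 16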

Jigar
Oracle DBA

Hi there,

I would definitely try turning off all the traces the DBA has turned on. They will kill performance and should only be used when absolutely needed.
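
Going by the init.ora posted above, that means commenting out the _db_block_cache_protect line and the four debug events (the file's own comments warn they can have a severe effect on performance):

# _db_block_cache_protect = true
# event = "10210 trace name context forever, level 2"
# event = "10211 trace name context forever, level 2"
# event = "10235 trace name context forever, level 1"
# event = "10049 trace name context forever, level 2"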

Do you still need help?

Regards...Michael

Naw, this issue is closed. We ended up using NT for the solution.

I did all I could (as a sysadmin) short of becoming a DBA.

Have you ever tried setting the ASYNC_WRITE=TRUE and
ASYNC_READ=TRUE parameters in your init.ora file?
On Solaris it is supported on both filesystems and raw devices.

Check it out.