Poor server performance

Hi,

I am a newly registered user here on the UNIX forums, and a new system administrator for AIX 6.1.
One of our servers performs poorly every time our application (FINACLE) runs many processes/instances (see the topas snapshot below).

I use nmon or topas to monitor server utilization. The CPU Idle% is high, but the Disk Busy% is constantly high as well (during the worst periods it sits at 100% most of the time). I also noticed that the FILE/TTY Readch and Writech values are constantly high. See the topas snapshot below:

CPU User% Kern% Wait% Idle% Physc Entc
ALL 0.7 0.4 5.5 93.5 0.10 1.7
 
Disk Busy% KBPS TPS KB-Read KB-Writ
Total 100.0 10.8K 219.0 0.0 10.8K
 
FileSystem KBPS TPS KB-Read KB-Writ
Total 6.5K 648.6 3.8K 2.7K
 
Name PID CPU% PgSp Owner
oracle 1311046 0.6 10.6 oracle
vmmd 458766 0.2 1.2 root
aioserve 10682432 0.0 0.4 uatadm2
topas 25755848 0.0 8.9 bankadm
tnslsnr 25493548 0.0 20.4 oracle
oracle 5439982 0.0 14.7 oracle
 
EVENTS/QUEUES                 FILE/TTY
Cswitch        585            Readch    3915.9K
Syscall       2055            Writech   2759.0K
Reads          630            Rawin           0
Writes          90            Ttyout       1628
Forks            1            Igets           0
Execs            0            Namei          99
Runqueue       1.1            Dirblk          0
Waitqueue      0.5

PAGING                        MEMORY
Faults        1639            Real,MB     43776
Steals           0            % Comp         47
PgspIn           0            % Noncomp      51
PgspOut          0            % Client       51
PageIn           0
PageOut        691            PAGING SPACE
Sios           700            Size,MB     12288
                              % Used          1
NFS (calls/sec)               % Free         99
SerV2            0
CliV2            0            WPAR Activ      0
SerV3            0            WPAR Total      0
CliV3            0

Here are our server specs:

System Model: IBM,8205-E6B
Machine Serial Number: 0678F8P
Processor Type: PowerPC_POWER7
Processor Implementation Mode: POWER 7
Processor Version: PV_7_Compat
Number Of Processors: 6
Processor Clock Speed: 3720 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 2 CARD_DB
Memory Size: 43776 MB
Good Memory Size: 43776 MB
Platform Firmware level: AL720_082
Firmware Version: IBM,AL720_082
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: CARDDB
IP Address: 10.10.10.100
Sub Netmask: 255.255.255.0
Gateway: 10.10.10.10
Name Server:
Domain Name:
Paging Space Information
Total Paging Space: 12288MB
Percent Used: 1%
Volume Groups Information
==============================================================================
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 546 458 109..48..83..109..109
hdisk1 active 546 390 29..60..83..109..109
==============================================================================
oravg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk8 active 4228 68 00..00..00..00..68
==============================================================================
 
 

Every time this happens, we try to kill CPU-consuming processes, but the Disk Busy% stays high. If we reboot the server, performance becomes acceptable again, but we can't do that during production. Any suggestions on how to optimize this? Is it our architecture (having only one disk for our data)? Is a bottleneck occurring here? What can we do to optimize our server? Should we make any upgrades, for example increasing physical memory?

Thank you very much. I hope you can help since I am not a UNIX expert.

Killing processes to free resources is not a good idea; you might kill something you still need.

Yes, from the look of it you have a severe bottleneck on your single hdisk. Is this hdisk a physical disk or a LUN from SAN storage?
Do you use asynchronous I/O (AIO), and have you tuned it? Oracle will most probably benefit from it, as well as from additional disks.
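
If you are not sure what that disk actually is, something like this will usually tell you (hdisk8 taken from your oravg listing above):

lsdev -Cc disk          # one line per disk with a short description (SAS, SAN/MPIO, virtual, ...)
lscfg -vl hdisk8        # vital product data for the disk in question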

nmon/topas has a page that displays AIO stats. I think it was Shift+A, though I'm not sure; it's easy enough to try out anyway.

You could post the output of

iostat -A 2 10
# and
vmstat -wt 2 10
# and
lsattr -El aio0

(the first two commands while there is traffic on your box) and please use code tags when posting, thanks.

Also post Oracle's filesystemio_options setting, the Oracle version, and something about your disk layout: are your filesystems set up with minimum or maximum distribution, what blocksize, and so on?
The output of the mount command will help. Definitely mount your Oracle filesystems with the noatime option, and a dedicated dump device (if you have one) with rbrw.
If you don't want to use SETALL in filesystemio_options, you might want to consider mounting the filesystems containing Oracle data and redo logs with cio. Also tell us how many volume groups with how many disks you have, and similar details.
In many cases a hot disk is easily avoided by changing your filesystems from minimum to maximum distribution and reorganizing the volume group, for example as sketched below.
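
(lvname and vgname are placeholders here; lslv shows the current policy, chlv changes it, reorgvg applies it)

lslv <lvname> | grep INTER-POLICY    # minimum or maximum inter-disk allocation
chlv -e x <lvname>                   # set the allocation policy to maximum
reorgvg <vgname> <lvname>            # reorganize the LV according to the new policy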
On top of what zaxxon already asked for, I would also be interested in the vmstat -v and vmstat -s output.
Please gather all data while the system is busy and slow, not during an idle timeframe, or the data won't help.

Thanks Zaxxon & zxmaus,
I wouldn't have known where to begin before this thread was opened.

For iostat -A 2 10, vmstat -wt 2 10, vmstat -v and vmstat -s, I will post a snapshot of these once the issue occurs again.

For lsattr -El aio0, I didn't get anything, so I tried lsattr -El sys0 (I hope that will do).
--> See attachment - lsattr sys0.jpg

For "Do you use asynchronous I/O (AIO) and have it tuned?"
--> I have no idea for this since I am new here and I came here in the middle of the application roll-out to production. I wish I had a clue. No knowledge on the history of the servers here.
However i checked the I/O stat in nmon and here it is:
-->

Total AIO processes=  72 Actually in use=   0  CPU used=   1.1%
         All time peak=  90     Recent peak=   7      Peak=   3.4%

Whether it's a physical disk or a LUN from SAN:
--> I'm not entirely sure whether it's a LUN from a SAN, but here's what I gathered
from prtconf/lsdev:

hdisk8     Available 05-00-00    SAS RAID 10 Disk Array
and from lspv hdisk8:
PHYSICAL VOLUME:    hdisk8                   VOLUME GROUP:     oravg
PV IDENTIFIER:      00f678f86bb5b458 VG IDENTIFIER     00f678f800004c00000001326bb5b750
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            256 megabyte(s)          LOGICAL VOLUMES:  8
TOTAL PPs:          4228 (1082368 megabytes) VG DESCRIPTORS:   2
FREE PPs:           68 (17408 megabytes)     HOT SPARE:        no
USED PPs:           4160 (1064960 megabytes) MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..68
USED DISTRIBUTION:  846..846..845..845..778
MIRROR POOL:        None

For the Oracle version:
--> Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit

For the disk layout/filesystem setup:

--> In summary, we have three hdisks: rootvg resides on two of them, and the applications (oravg) reside on hdisk8.
Below are the details:

lspv
hdisk0          00f678f866fa237c                    rootvg          active
hdisk1          00f678f86b3707b7                    rootvg          active
hdisk8          00f678f86bb5b458                    oravg           active

lsvg -l oravg
oravg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
fslv00              jfs2       210     210     1    open/syncd    /u01
fslv01              jfs2       391     391     1    open/syncd    /bankadm
fslv02              jfs2       200     200     1    open/syncd    /smeadm
fslv03              jfs2       200     200     1    open/syncd    /infosys
fslv04              jfs2       1758    1758    1    open/syncd    /uatadm1
fslv05              jfs2       600     600     1    open/syncd    /uatadm2
fslv06              jfs2       800     800     1    open/syncd    /DB_Backups

There are also several Oracle database instances in oravg; here they are:

/u01/oracle/oracle/dbs
# ls -lrt *.ora
-rwxr-xr-x    1 oracle   oinstall       8385 Sep 12 1998  init.ora
-rw-r--r--    1 oracle   oinstall      12920 May 03 2001  initdw.ora
-rw-r-----    1 oracle   oinstall        922 Sep 16 09:54 initorcl.ora
-rw-r-----    1 oracle   oinstall       3584 Sep 16 10:15 spfileorcl.ora
-rw-rw-r--    1 oracle   oinstall       5149 Oct 17 02:07 initU1SISDB.ora
-rw-rw-r--    1 oracle   oinstall       5176 Oct 17 02:15 initUASISDB.ora
-rw-rw-r--    1 oracle   oinstall       5172 Feb 05 00:40 initBANKDB.ora
-rw-rw-r--    1 oracle   oinstall       5161 Feb 05 00:40 initSMEDB.ora
-rw-rw-r--    1 oracle   oinstall       5172 Feb 05 00:40 initUAT1DB.ora
-rw-rw-r--    1 oracle   oinstall       5174 Feb 05 00:41 initUAT2DB.ora

For filesystemio_options:
--> I have no idea where to locate this. Is it executed, or set in a configuration file?

for "...min or max distribution, blocksize ...
output of mount command will help and definitely mounting your oracle filesystems with noatime option and if you have a dedicated dump device with rbrw..."
--> I am totally alost with the min/max tuning. no idea for this yet.

Again, thanks very much for the help. It's greatly appreciated.

If the attachment is not viewable, here's the lsattr -El sys0 output:

SW_dist_intr    false              Enable SW distribution of interrupts              True
autorestart     true               Automatically REBOOT OS after a crash             True
boottype        disk               N/A                                               False
capacity_inc    0.01               Processor capacity increment                      False
capped          true               Partition is capped                               False
conslogin       enable             System Console Login                              False
cpuguard        enable             CPU Guard                                         True
dedicated       false              Partition is dedicated                            False
enhanced_RBAC   true               Enhanced RBAC Mode                                True
ent_capacity    6.00               Entitled processor capacity                       False
frequency       6400000000         System Bus Frequency                              False
fullcore        false              Enable full CORE dump                             True
fwversion       IBM,AL720_082      Firmware version and revision levels              False
ghostdev        0                  Recreate devices in ODM on system change          True
id_to_partition 0X80000B9662900002 Partition ID                                      False
id_to_system    0X80000B9662900000 System ID                                         False
iostat          false              Continuously maintain DISK I/O history            True
keylock         normal             State of system keylock at boot time              False
log_pg_dealloc  true               Log predictive memory page deallocation events    True
max_capacity    12.00              Maximum potential processor capacity              False
max_logname     9                  Maximum login name length at boot time            True
maxbuf          20                 Maximum number of pages in block I/O BUFFER CACHE True
maxmbuf         0                  Maximum Kbytes of real memory allowed for MBUFS   True
maxpout         8193               HIGH water mark for pending write I/Os per file   True
maxuproc        2048               Maximum number of PROCESSES allowed per user      True
min_capacity    3.00               Minimum potential processor capacity              False
minpout         4096               LOW water mark for pending write I/Os per file    True
modelname       IBM,8205-E6B       Machine name                                      False
ncargs          256                ARG/ENV list size in 4K byte blocks               True
nfs4_acl_compat secure             NFS4 ACL Compatibility Mode                       True
pre430core      false              Use pre-430 style CORE dump                       True
pre520tune      disable            Pre-520 tuning compatibility mode                 True
realmem         44826624           Amount of usable physical memory in Kbytes        False
rtasversion     1                  Open Firmware RTAS version                        False
sed_config      select             Stack Execution Disable (SED) Mode                True
systemid        IBM,020678F8P      Hardware system identifier                        False
variable_weight 0                  Variable processor capacity weight                False

From the data you have provided so far, you have one RAID 10 array of SAS disks (so internal storage), about 1 TB in total, presented to the system as a single disk, for 6 DBs and everything else running on the system except root. This just asks for problems, as all your storage is accessed through one serial path.

Even worse, all your filesystems share the same jfs2 log. If I assume correctly that your filesystems are not mounted with the noatime option, that means every single read (including something as simple as ls) and every single write across 8 different filesystems competes for access to that log, which naturally makes the log the hotspot of the entire system.

I am still waiting for the vmstat outputs, but I bet that your system has only the default filesystem tuning and is running out of buffers most of the time.

Can you post the lvmo -a -v oravg output to confirm, please?

Regarding AIO, don't worry: on AIX 6.1 you find it with the ioo -a | grep aio command, and AIX will turn it on automatically if Oracle or any other application wants to use it.

filesystemio_options is a parameter set within Oracle (ask your DBA) and can be set to none (the default in your Oracle version, I think), async, or setall. The setall option lets Oracle use CIO with asynchronous I/O, but it won't let you access open database files outside the database itself other than with RMAN, which might be a problem if you don't do RMAN backups.
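
For what it's worth, your DBA can check and change it from sqlplus along these lines; setall here is just an example value, and scope=spfile only works if the instance was started with an spfile (otherwise edit the matching init<SID>.ora):

sqlplus / as sysdba
SQL> show parameter filesystemio_options
SQL> alter system set filesystemio_options=setall scope=spfile;

The new value takes effect after the instance is restarted.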

Please run a simple mount on the box so we can see whether you are using any mount options on the filesystems.

So far

  • consider giving each of your oravg filesystems its very own log file
  • consider another storage solution and a different filesystem layout if possible, since 6 DBs in the same filesystem, even if that filesystem has its own log file, are still not such a great idea. If that is not possible, your disk will naturally stay busy, since you only have one.

Regards
zxmaus

for

lvmo -a -v oravg
 
vgname = oravg
pv_pbuf_count = 512
total_vg_pbufs = 512
max_vg_pbufs = 16384
pervg_blocked_io_count = 2848
pv_min_pbuf = 512
max_vg_pbuf_count = 0
global_blocked_io_count = 2848
 
for ioo -a | grep aio
 
 
aio_active = 1
aio_maxreqs = 65536
aio_maxservers = 30
aio_minservers = 3
aio_server_inactivity = 300
posix_aio_active = 0
posix_aio_maxreqs = 65536
posix_aio_maxservers = 30
posix_aio_minservers = 3
posix_aio_server_inactivity = 300

For mount:

node mounted mounted over vfs date options
-------- --------------- --------------- ------ ------------ ---------------
/dev/hd4 / jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd2 /usr jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd9var /var jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd3 /tmp jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd1 /home jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/hd11admin /admin jfs2 Feb 02 07:31 rw,log=/dev/hd8
/proc /proc procfs Feb 02 07:31 rw
/dev/hd10opt /opt jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/livedump /var/adm/ras/livedump jfs2 Feb 02 07:31 rw,log=/dev/hd8
/dev/fslv00 /u01 jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv01 /bankadm jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv02 /smeadm jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv03 /infosys jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv04 /uatadm1 jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv05 /uatadm2 jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv06 /DB_Backups jfs2 Feb 02 07:31 rw,log=/dev/loglv00
/dev/fslv07 /REPORTS jfs2 Feb 02 07:31 rw,log=/dev/hd8

Hi, here it is. The server is going insane again.

for

iostat -A 2 10
 
System configuration: lcpu=24 drives=4 ent=6.00 paths=3 vdisks=0 maxserver=720
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     842.8  0.0    18     0     130             5.7   1.1   84.7      8.6   0.6   10.6
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          90.0     11755.1     873.2         64      6896
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     354.8  0.0    25     0     130             5.7   2.4   90.0      1.8   0.7   12.2
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          77.0     4971.9     364.1         32      5812
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     40.1  0.0    13     0     130             6.5  34.0   59.1      0.4   2.6   42.8
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          39.8     514.4      45.0         24      2592
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     182.0  0.0    41     0     130            15.2  61.4   20.7      2.8   5.2   86.8
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          86.0     3339.4     215.7        120      7992
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     376.2  0.0    46     0     130             8.3   1.5   81.5      8.7   1.0   16.2
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          87.5     5979.9     361.2        124      9080
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     421.0  0.0    16     0     130             8.0   1.3   83.6      7.1   0.9   14.8
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          85.5     7500.7     465.6        316      7416
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     733.6  0.0    16     0     130            10.3   2.0   80.2      7.5   1.2   19.6
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          89.5     12812.5     807.1        372      9216
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     809.0  0.0    14     0     130             9.0   2.5   79.8      8.7   1.1   18.2
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           5.5     113.1      29.6          0        84
hdisk1           5.0     113.1      29.6          0        84
hdisk8          99.0     13659.7     1177.8        516      9632
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     870.7  0.0    30     0     130             8.4   1.9   79.8      9.9   1.0   16.5
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           2.5     118.7      28.0          0        89
hdisk1           2.5     118.7      28.0          0        89
hdisk8         100.0     13466.7     1176.0        316      9784
cd0              0.0       0.0       0.0          0         0
aio: avgc avfc maxgc maxfc maxreqs avg-cpu: % user % sys % idle % iowait physc % entc
     110.7  0.0    25     0     130             9.2   1.9   87.8      1.2   1.1   17.7
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           0.0       0.0       0.0          0         0
hdisk1           0.0       0.0       0.0          0         0
hdisk8          96.0     1717.1     125.7        408      9956
cd0              0.0       0.0       0.0          0         0

for

vmstat -wt 2 10
 
System configuration: lcpu=24 mem=43776MB ent=6.00
 kthr          memory                         page                       faults                 cpu             time
------- --------------------- ------------------------------------ ------------------ ----------------------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa    pc    ec hr mi se
  3   5    5332179     267293     0     0     0     0      0     0   872  67095 12478  6  2 87  5  0.72  12.0 08:17:38
  3   5    5333619     265694     0     0     0     0      0     0   407  36802  8098  6  2 88  4  0.76  12.7 08:17:40
  4   3    5241734     357241     0     0     0     0      0     0   347  24027  4011  5  4 89  2  0.71  11.9 08:17:42
 14   1    5240262     358637     0     0     0     0      0     0    81  12622  1707  7 47 46  1  3.33  55.6 08:17:44
 12   5    5334239     264497     0     0     0     0      0     0   353  58126  8237 13 47 34  5  4.26  71.0 08:17:46
  5   3    5334903     263760     0     0     0     0      0     0   869  56980 14657  7  2 83  8  0.83  13.8 08:17:48
  3   2    5335191     263413     0     0     0     0      0     0   661  51832 12989  5  1 84 10  0.58   9.7 08:17:50
  3   2    5335417     263121     0     0     0     0      0     0     0      0     0  4  1 86  9  0.51   8.5 08:17:52
  2   2    5335725     262748     0     0     0     0      0     0   170  14532  3256  5  1 92  1  0.65  10.8 08:17:54
  1   2    5335968     262442     0     0     0     0      0     0   574  39562  9331  4  2 84 10  0.55   9.2 08:17:56

for

vmstat -v
 

             11206656 memory pages
             10828768 lruable pages
               248616 free pages
                    3 memory pools
              1343188 pinned pages
                 80.0 maxpin percentage
                  3.0 minperm percentage
                 90.0 maxperm percentage
                 51.4 numperm percentage
              5573666 file pages
                  0.0 compressed percentage
                    0 compressed pages
                 51.4 numclient percentage
                 90.0 maxclient percentage
              5573666 client pages
                    0 remote pageouts scheduled
                    0 pending disk I/Os blocked with no pbuf
                    0 paging space I/Os blocked with no psbuf
                 2484 filesystem I/Os blocked with no fsbuf
                    0 client filesystem I/Os blocked with no fsbuf
                17400 external pager filesystem I/Os blocked with no fsbuf
                 48.0 percentage of memory used for computational pages

for

vmstat -s
 

           8287091950 total address trans. faults
            145986799 page ins
            252876407 page outs
                    0 paging space page ins
                    0 paging space page outs
                    0 total reclaims
           6351948830 zero filled pages faults
            372268119 executable filled pages faults
            308073096 pages examined by clock
                    0 revolutions of the clock hand
            174117315 pages freed by the clock
              9608977 backtracks
              2540378 free frame waits
                    0 extend XPT waits
             11749011 pending I/O waits
            398863545 start I/Os
             59679605 iodones
           9815133595 cpu context switches
             81257292 device interrupts
            823962209 software interrupts
            445828327 decrementer interrupts
                63094 mpc-sent interrupts
                63094 mpc-receive interrupts
               287741 phantom interrupts
                    0 traps
          12314453621 syscalls

I would suggest that you add more buffers to oravg to avoid the blocked I/Os:

lvmo -v oravg -o pv_pbuf_count=1024
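
Afterwards, re-check with the command you already ran; if pervg_blocked_io_count stops climbing under load, the buffers are sufficient:

lvmo -a -v oravg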

I would suggest as well that you mount your (user/DB) filesystems with the noatime option.
Sample:

chfs -a options=noatime /u01

Your backup filesystem should additionally be mounted with rbrw, as you are unlikely to need the I/O cached once the backup has been taken.
Sample:

chfs -a options=rbrw,noatime /DB_Backups 

... and that you give each of your filesystems its own jfs2 log.
Sample:

mklv -y loglv06 -t jfs2log oravg 1                        # create a new jfs2 log LV in oravg
logform -V jfs2 /dev/loglv06                              # format it (answer yes to the destroy prompt)
chfs -a dev=/dev/fslv06 -a log=/dev/loglv06 /DB_Backups   # point /DB_Backups at the new log

The filesystem changes require the filesystems to be remounted, or alternatively the system to be rebooted.

Hi,

Thanks for the suggestions. However, are these commands safe to execute? Could there be problems with the application processes if I apply these changes?

You should do it during a green zone, since you need to unmount the filesystems, which won't be possible while your applications are running.
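
A minimal change window for a single filesystem could look like this (a sketch using /u01 as an example; make sure nothing holds files open on it first):

fuser -cu /u01                   # show processes still using the filesystem
umount /u01
chfs -a options=noatime /u01
mount /u01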

One more note: these suggestions might help improve performance, but they cannot achieve miracles. Your box really needs a hardware fix, as your I/O subsystem is not capable of handling the I/O load on your box. We can maybe make things a little less bad, but this is curing symptoms, not fixing the problem.

Regards
zxmaus