Performance issue

Hi,

We have 2 LPARs on a p6 blade. One of the LPARs has 3 CPU cores and 5 GB of memory and runs Sybase as the database. An EOD (end-of-day) process takes 25 min. to complete.
Now we have an LPAR on a P7 server with an entitled CPU capacity of 2 and 16 GB of memory, also running Sybase. The EOD process that takes 25 min on the p6 takes 50 min on the P7 to complete. I have disabled SMT and checked: it drops to 40 min, but that is still higher than on the p6.
Can anybody help me?

Regards,

VJM

Are the databases running on the same storage? You might want to collect nmon data on both systems during the process and compare them to determine the bottleneck.
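For example, something along these lines on both LPARs while the EOD job runs (the 30-second interval and 120 snapshots are just illustrative, size them so the capture covers the whole run):

nmon -f -t -s 30 -c 120

That drops a <hostname>_<date>_<time>.nmon file into the current directory, which you can load into the nmon analyser and compare CPU, disk and adapter figures between the two boxes.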

Also check out the recommended tunable settings for AIX in the Sybase documentation.
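To just see what the relevant tunables are currently set to before changing anything, something like this should do (the -F is needed on 6.1 to display restricted tunables; on 5.3 plain vmo -a is enough and AIO is configured via smitty aio there - treat the exact option list as an assumption and check it against your AIX level):

vmo -a -F | egrep "minperm%|maxperm%|maxclient%"
ioo -a | grep aio

Then compare against what the Sybase/IBM docs recommend for your ASE and AIX versions.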

For more help you need to provide more information.

As he says.

What's different between the old p6 environment and the p7? Which modes are you running, i.e. dedicated or shared LPARs, and if shared, capped or uncapped?
Different OS versions?
As funksen said, different disk/storage layout?
Absolutely the same EOD process? I am unsure - I guess End Of Day process? What type of process is that, what is involved on the old hardware and what on the new? Same environment? If SQL is involved, are the queries the same as before? Is the database indexed etc. like the old one, to avoid table scans and the like?
Has the old environment been tuned already? vmo? ioo? AIO? ...

Show us some "vmstat -Iwt 1 20" output from both servers.

How many engines and how many virtual CPUs do you have on your nodes?
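In case it helps to answer this: the engine count can be read from the ASE configuration and the virtual CPU count from the LPAR itself, roughly like this (SERVERNAME and the sa login are just placeholders):

isql -Usa -SSERVERNAME
1> sp_configure "max online engines"
2> go

lparstat -i | grep -i "virtual cpus"

sp_configure shows the configured and run values for the engines; lparstat -i lists the online/maximum virtual CPUs and the entitlement of the partition.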

Hi,

There are 2 engines in Sybase and 3 vCPUs in the LPAR. The output of vmstat and iostat is below.

vmstat -Iwt 1 20

System configuration: lcpu=3 mem=16384MB ent=3.00

   kthr            memory                         page                       faults                 cpu             time
----------- --------------------- ------------------------------------ ------------------ ----------------------- --------
  r   b   p        avm        fre    fi    fo    pi    po    fr     sr    in     sy    cs us sy id wa    pc    ec hr mi se
  1   1   0     775841     836806     0   218     0     0     0      0   224  18877  1474  1  2 50 47  0.09   3.2 12:21:04
  1   1   0     776003     836644     0   235     0     0     0      0   232  19767  1606  1  2 51 46  0.10   3.3 12:21:05
  1   1   0     776120     836527     0   173     0     0     0      0   177  14744  1187  1  2 51 47  0.07   2.5 12:21:06
  2   0   0     776249     836398     0   166     0     0     0      0   176  14625  1188  1  2 50 47  0.08   2.7 12:21:07
  1   1   0     776400     836247     0   218     0     0     0      0   214  18040  1479  1  2 51 46  0.09   3.1 12:21:08
  3   0   0     776560     836087     0   235     0     0     0      0   233  19726  1516  1  2 51 46  0.10   3.3 12:21:09
  1   1   0     776713     835934     0   222     0     0     0      0   218  18318  1468  1  2 50 47  0.09   3.1 12:21:10
  1   1   0     776834     835813     0    41     0     0     0      0    43   3779   303  1  2 90  7  0.08   2.8 12:21:11
  1   1   0     776995     835652     0   191     0     0     0      0   187  12386  1384  1  1 51 46  0.08   2.7 12:21:12
  1   1   0     777178     835469     0   221     0     0     0      0   218  15519  1579  1  2 52 45  0.09   3.1 12:21:13
  1   1   0     777311     835336     0   179     0     0     0      0   176  13182  1212  1  1 50 47  0.08   2.5 12:21:14
  1   1   0     777450     835197     0   182     0     0     0      0   184  13990  1278  1  2 50 47  0.08   2.7 12:21:15
  1   1   0     777624     835023     0   227     0     0     0      0   225  17389  1565  1  2 50 47  0.10   3.2 12:21:16
  1   1   0     777775     834872     0    67     0     0     0      0    68   5471   467  1  2 85 12  0.10   3.2 12:21:17
  1   1   0     777908     834739     0   180     0     0     0      0   179  12791  1239  1  1 51 47  0.08   2.5 12:21:18
  1   1   0     778081     834566     0   212     0     0     0      0   215  14603  1528  1  2 51 46  0.09   3.0 12:21:19
  2   0   0     778236     834411     0   200     0     0     0      0   198  14232  1373  1  2 52 46  0.08   2.8 12:21:20
  1   1   0     778367     834280     0    85     0     0     0      0    91   6775   616  1  2 84 14  0.09   2.9 12:21:21
  2   0   0     778514     834133     0   190     0     0     0      0   200  13161  1382  1  2 50 47  0.08   2.7 12:21:22
  1   1   0     778685     833962     0   222     0     0     0      0   218  15568  1481  1  2 50 46  0.10   3.2 12:21:23
=================================================================================================================================
iostat -Dl -T

System configuration: lcpu=3 drives=5 paths=50 vdisks=2

Disks:                     xfers                                read                                write                                  queue                    time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                 %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                 act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
hdisk0           0.0 100.2K  11.9 435.4   99.7K   0.0   4.2    0.4  126.6     0    0  11.9   1.1    0.6   99.8     0    0   1.0    0.0  965.6    0.0   0.0   0.3  12:28:31
hdisk1           0.0  80.6K  10.9 490.8   80.1K   0.1   1.5    0.4   82.8     0    0  10.9   1.1    0.5   99.9     0    0   0.2    0.0  107.5    0.0   0.0   0.1  12:28:31
hdisk5           0.0   4.1K   0.8 437.5    3.6K   0.0   2.4    0.4   59.1     0    0   0.8   1.1    0.6   84.4     0    0   0.0    0.0    2.8    0.0   0.0   0.0  12:28:31
hdisk2           0.0 141.7K   1.4   1.1K 140.6K   0.0   3.0    0.1   19.7     0    0   1.4   5.7    0.3  276.1     0    0 189.4    0.0    2.1S   0.0   0.0   0.6  12:28:31
hdisk3           0.0 106.8K   1.1 829.2  106.0K   0.0   3.0    0.1   22.4     0    0   1.1   6.0    0.2   74.7     0    0 154.7    0.0    1.8S   0.0   0.0   0.4  12:28:31

Regards
vjm

---------- Post updated at 12:35 PM ---------- Previous update was at 12:31 PM ----------

Hi,

And the P7 is using XIV while the old p6 is using DS8k with SVC. The other difference is that the P7 is running AIX 6100-06-06-1140 and the p6 is running AIX 5300-07-01-0748.

Regards,

VJM

What is the CPU speed of your Power6 and your Power7 frame?

I ran into problems when migrating from a P595 Power6 at 5 GHz to a P770 Power7 at 3 GHz.

Power7 offers more cores, but performance per core is lower than on Power6.
Multithreaded software really gets a speedup, but single-threaded applications suffer because of the lower per-core performance.

What is your microcode level on the Power7? Recently there was an update that fixed memory performance issues.

Hi,

P7 = 3.5 GHz and P6 = 4 GHz

P7 = Platform Firmware level: AL730_066

Please check the vmstat output below - is it a CPU bottleneck?

vmstat -Iwt 1 20

System configuration: lcpu=3 mem=16384MB ent=3.00

   kthr            memory                         page                       faults                 cpu             time
----------- --------------------- ------------------------------------ ------------------ ----------------------- --------
  r   b   p        avm        fre    fi    fo    pi    po    fr     sr    in     sy    cs us sy id wa    pc    ec hr mi se
  0   2   0     991825     620753     0   309     0     0     0      0   321  11013  1892  1  6 53 40  0.22   7.3 13:17:05
  0   2   0     991827     620751     0   307     0     0     0      0   297   9111  1892  5  6 52 37  0.32  10.6 13:17:06
  1   2   0     991827     620751     0   342     0     0     0      0   349   8433 10162  5  8 55 33  0.37  12.4 13:17:07
  0   3   0     991842     620736     0   292     0     0     0      0   298   3243  1644  0 11 54 35  0.34  11.3 13:17:08
  3   4   0     991863     620715     0   328     0     0     0      0   330   3111  2005  7  8 55 30  0.48  16.0 13:17:09
  3   0   0     991865     620713     0    99     0     0     0      0   100    692   509 12  7 77  4  0.58  19.3 13:17:10
  1   5   0     991866     620712     0   416     0     0     0      0   418   4586  2201  0 13 45 41  0.41  13.6 13:17:11
  0   3   0     991866     620712     0   411     0     0     0      0   410   3701  1962  0 12 48 39  0.37  12.3 13:17:12
  0   3   0     991866     620712     0   406     0     0     0      0   407   3440  1791  0 12 52 36  0.37  12.2 13:17:13
  0   3   0     991866     620712     0   430     0     0     0      0   426   3552  2245  0  9 52 38  0.29   9.8 13:17:14
  1   3   0     991866     620712     0   402     0     0     0      0   403   3772  2117  0 10 53 37  0.30  10.1 13:17:15
  1   2   0     991866     620712     0   423     0     0     0      0   421   3384  2043  0 11 57 32  0.34  11.2 13:17:16
  0   5   0     991866     620711     0   408     0     0     0      0   411   5384  2038  0 10 53 36  0.31  10.4 13:17:17
 11   3   0     991866     620711     0   391     0     0     0      0   395   3662  2167  0 10 52 38  0.31  10.3 13:17:18
  0   6   0     991866     620711     0   412     0     0     0      0   413   4032  2066  0 11 52 38  0.33  10.9 13:17:19
  0   3   0     991866     620711     0   404     0     0     0      0   407   3277  1967  0 10 53 38  0.30   9.9 13:17:20
  2   5   0     991866     620711     0   431     0     0     0      0   446   4450  2007  0 12 53 34  0.38  12.7 13:17:21
  1   4   0     991866     620711     0   103     0     0     0      0   106    936   545  0  9 85  6  0.28   9.3 13:17:22
  0   4   0     991867     620710     0   417     0     0     0      0   426   5193  2241  1 11 51 38  0.35  11.6 13:17:23
  0   5   0     991867     620710     0   436     0     0     0      0   431   3480  2242  0  9 54 36  0.29   9.6 13:17:24

---------- Post updated at 01:29 PM ---------- Previous update was at 01:26 PM ----------

We are running a Sybase database and an end-of-day process.
How do I find out if the application is single- or multi-threaded?

Somehow you ignored my questions more or less... anyway, good luck.

Check the column "Mthrd" for Y or N:

svmon -P | grep -p Pid
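For the ASE engine itself that would be something like this (the PID is obviously just a placeholder - take it from the ps output):

ps -ef | grep dataserver
svmon -P 123456 | head

The per-process header that svmon prints contains the Mthrd column; Y means the process is multi-threaded, N means it is not.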

Your IO subsystem seems to be causing your issues. Did you set up your logical volumes (filesystems, raw devices) with maximum or minimum distribution? How big are your disks? What is the output of vmstat -v and vmstat -s? What is the queue depth on your disks and, if this is VIO storage, on the VIO servers?
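For reference, those can be checked with something like the following (hdisk0, datavg and the LV name are placeholders for your actual devices):

lsattr -El hdisk0 -a queue_depth
lsvg -l datavg
lslv sybdatalv | grep INTER-POLICY

lslv prints INTER-POLICY: minimum or maximum, which is the distribution being asked about; lsattr shows the per-disk queue depth.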

Hi,

What do you mean by logical volumes (filesystems, raw devices) with maximum or minimum distribution? The disks are 100 GB x 2. queue_depth is 40 and this is XIV storage.

#vmstat -v
              4194304 memory pages
              3986502 lruable pages
               574924 free pages
                    1 memory pools
               413979 pinned pages
                 95.0 maxpin percentage
                  3.0 minperm percentage
                 90.0 maxperm percentage
                 64.2 numperm percentage
              2561112 file pages
                  0.0 compressed percentage
                    0 compressed pages
                 64.2 numclient percentage
                 90.0 maxclient percentage
              2561112 client pages
                    0 remote pageouts scheduled
                    0 pending disk I/Os blocked with no pbuf
                    0 paging space I/Os blocked with no psbuf
                 2228 filesystem I/Os blocked with no fsbuf
                    8 client filesystem I/Os blocked with no fsbuf
                38788 external pager filesystem I/Os blocked with no fsbuf
                 25.2 percentage of memory used for computational pages
#vmstat -s
              2911826 total address trans. faults
               857620 page ins
              6318673 page outs
                    0 paging space page ins
                    0 paging space page outs
                    0 total reclaims
              1720594 zero filled pages faults
                35814 executable filled pages faults
                    0 pages examined by clock
                    0 revolutions of the clock hand
                    0 pages freed by the clock
               248183 backtracks
                    0 free frame waits
                    0 extend XPT waits
               110954 pending I/O waits
              7175994 start I/Os
              2749907 iodones
             22266453 cpu context switches
              2734773 device interrupts
               289681 software interrupts
              2108993 decrementer interrupts
                  371 mpc-sent interrupts
                  371 mpc-receive interrupts
                35496 phantom interrupts
                    0 traps
             92113814 syscalls

Regards,

VJM

I mean the inter-policy, which you usually define when creating your logical volumes / raw devices. If it's set to minimum - and you have only a few huge disks - you talk to your data in a serial fashion, which is usually a rather bad idea for databases, the only exceptions being Sybase IQ and Oracle ASM, which handle the data distribution internally.

You still did not answer whether you run Sybase on filesystems or raw devices - in general, and specifically for tempdb, as this is the most heavily used part of your Sybase DB. From the amount of numperm you are using, you are spending most of your memory on file caching, so I would guess it's filesystems - or you otherwise have plenty of non-raw IO that is buffered even though it probably doesn't have to be. You might want to consider moving your tempdbs onto a RAM disk, after having fixed whatever is hogging so much non-computational memory.
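Just as a rough sketch of the RAM-disk idea on AIX 6.1 (the size and mount point are made up, and log=NULL needs a reasonably recent AIX level - otherwise use a normal JFS2 log):

mkramdisk 4G                     # prints the device created, e.g. /dev/rramdisk0
mkfs -V jfs2 /dev/ramdisk0
mkdir /sybtempdb
mount -V jfs2 -o log=NULL /dev/ramdisk0 /sybtempdb

The tempdb devices would then be recreated on that filesystem. Keep in mind a RAM disk does not survive a reboot, so the ASE start script has to rebuild it every time.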

You might also want to consider setting j2_dynamicBufferPreallocation=128 or 256. And ... what is the network size setting in your Sybase, and which Sybase version do you actually run? Is it the same between p6 and p7 - or did you maybe upgrade from 12 to 15, in which case any stored procedures you have can cause your issues.
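If you want to try the buffer preallocation change, the ioo syntax would be along these lines (-p applies it now and makes it persistent across reboots; the value is just the one suggested above):

ioo -a | grep j2_dynamicBufferPreallocation
ioo -p -o j2_dynamicBufferPreallocation=128

The network side can be checked from isql with sp_configure "default network packet size" and sp_configure "max network packet size".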

A few more questions:
Are the disks assigned directly (FC adapters) or through VIO?
Are those 2 disks in a mirror?
What is the PP and block size on your VGs/filesystems?

But I agree with zxmaus: it does not look like a memory or CPU issue, so the only thing left is storage.
Maybe you just got unlucky and got disks from a pool shared with other heavily used systems.
Have you tried comparing DB IO stats, especially storage IO response times?
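On the Sybase side, a sampling run of sp_sysmon during the EOD window would show the per-device IO picture, e.g. from isql (the 10-minute interval is just an example):

1> sp_sysmon "00:10:00"
2> go

The "Disk I/O Management" section of that report can then be compared between the DS8k and the XIV setup.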

Hi,

The inter-policy is minimum. The Sybase DB and tempdb are on filesystems.
The Sybase version is 12.5.2 on both the live and the new environment.
On the P7 the disks are from XIV, directly attached to the server and not through VIO. On the old P6 the disks are from DS8k via SVC.

To zaxxon
The P7 LPAR has dedicated processors, same as the P6; the only difference is that the P6 is 4 GHz and the P7 is 3.5 GHz. The EOD process is absolutely the same. As per the DBA the database is indexed. The old environment has been tuned and is on 5.3. As per the IBM documentation, AIX 6.1 is already tuned.

Yesterday I disabled multithreading and the time taken dropped to 37 min. But it is still much more than live, which is 25 min.

As zxmaus has suggested, I need to test with a raw disk for tempdb. I will check and revert.

I am posting the latest iostat and vmstat.

#iostat -Dl -T

System configuration: lcpu=3 drives=3 paths=48 vdisks=2

Disks:                     xfers                                read                                write                                  queue                    time
-------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
                 %tm    bps   tps  bread  bwrtn   rps    avg    min    max time fail   wps    avg    min    max time fail    avg    min    max   avg   avg  serv
                 act                                    serv   serv   serv outs              serv   serv   serv outs        time   time   time  wqsz  sqsz qfull
hdisk1           0.1 186.9K  22.0  24.3K 162.6K   0.5   3.6    0.4   51.0     0    0  21.5   1.0    0.6   99.5     0    0   0.1    0.0  173.7    0.0   0.0   0.1  12:08:58
hdisk0           0.1 210.9K  24.8   1.1K 209.9K   0.2   4.7    0.4   44.4     0    0  24.6   1.2    0.6  116.2     0    0   1.1    0.0   31.1S   0.0   0.0   0.6  12:08:58
hdisk5           0.0   4.9K   0.8   2.1K   2.7K   0.2   2.0    0.4   60.6     0    0   0.6   1.1    0.6   80.0     0    0   0.0    0.0    3.2    0.0   0.0   0.0  12:08:58
vmstat -Iwt 1 20

System configuration: lcpu=3 mem=16384MB ent=3.00

   kthr            memory                         page                       faults                 cpu             time
----------- --------------------- ------------------------------------ ------------------ ----------------------- --------
  r   b   p        avm        fre    fi    fo    pi    po    fr     sr    in     sy    cs us sy id wa    pc    ec hr mi se
  0   1   0    1028538     585864     0   801     0     0     0      0   790  59329  5278  5  6 40 50  0.33  11.1 12:09:44
  0   1   0    1028541     585861     0   799     0     0     0      0   786  59183  5321  5  6 39 50  0.32  10.7 12:09:45
  0   1   0    1028542     585860     0   820     0     0     0      0   808  61256  5455  5  6 36 54  0.33  10.9 12:09:46
  0   1   0    1028544     585858     0   798     0     0     0      0   789  61245  5285  4  6 39 51  0.32  10.8 12:09:47
  0   1   0    1028545     585857     0   802     0     0     0      0   793  60842  5307  5  6 41 48  0.34  11.2 12:09:48
  0   1   0    1028546     585856     0   672     0     0     0      0   668  50270  4503  4  5 40 51  0.28   9.2 12:09:49
  1   0   0    1028547     585855     0   694     0     0     0      0   689  53163  4611  4  5 40 51  0.28   9.5 12:09:50
  1   1   0    1028549     585853     0   777     0     0     0      0   772  60547  5146  4  6 37 52  0.33  10.9 12:09:51
  1   0   0    1028550     585852     0   759     0     0     0      0   750  56675  4983  4  5 40 50  0.31  10.2 12:09:52
  0   1   0    1028552     585850     0   777     0     0     0      0   763  58379  4973  4  6 37 53  0.32  10.5 12:09:53
  1   1   0    1028553     585849     0   712     0     0     0      0   705  53933  4822  4  5 36 54  0.29   9.8 12:09:54
  0   1   0    1028554     585848     0   729     0     0     0      0   719  55416  4809  4  5 39 52  0.30  10.0 12:09:55
  0   1   0    1028556     585846     0   849     0     0     0      0   835  61534  5513  5  6 40 49  0.34  11.4 12:09:56
  1   1   0    1028557     585845     0   905     0     0     0      0   893  68578  5922  5  7 38 50  0.37  12.3 12:09:57
  1   1   0    1028561     585841     0   735     0     0     0      0   730  58778  5250  8  6 40 47  0.42  13.8 12:09:58
  1   1   0    1028562     585840     0   939     0     0     0      0   921  12491  6995  4  5 40 52  0.28   9.3 12:09:59
  2   0   0    1028562     585840     0     2     0     0     0      0    21 176114 60259 19 25 57  0  1.42  47.4 12:10:00
  2   0   0    1028562     585840     0     1     0     0     0      0     7 238561 86684 12 34 54  0  1.54  51.4 12:10:01
  3   0   0    1028563     585839     0   204     0     0     0      0   205 214269 70577 10 29 51  9  1.32  43.9 12:10:02
  1   1   0    1028563     585838     1   385     0     0     0      0   295   4683  2530 23  2 56 19  0.75  25.1 12:10:03

Regards,

VJM

---------- Post updated at 12:40 PM ---------- Previous update was at 12:14 PM ----------

The below are the sybase settings.

[Named Cache:abwslive_data_cache]
	cache size = 750M
	cache status = mixed cache
	cache replacement policy = DEFAULT
	local cache partition number = DEFAULT
[16K I/O Buffer Pool]
	pool size = 100.0000M
	wash size = DEFAULT
	local async prefetch limit = DEFAULT
[4K I/O Buffer Pool]
	pool size = 360.0000M
	wash size = DEFAULT
	local async prefetch limit = DEFAULT
[Named Cache:default data cache]
	cache size = 800M
	cache status = default data cache
	cache replacement policy = DEFAULT
	local cache partition number = DEFAULT
[16K I/O Buffer Pool]
	pool size = 100.0000M
	wash size = DEFAULT
	local async prefetch limit = DEFAULT
[Meta-Data Caches]
	number of open databases = DEFAULT
	number of open objects = 4000
	open object spinlock ratio = DEFAULT
	number of open indexes = 700
	open index hash spinlock ratio = DEFAULT
	open index spinlock ratio = DEFAULT
	partition groups = DEFAULT
	partition spinlock ratio = DEFAULT
[Disk I/O]
	disk i/o structures = 600
	number of large i/o buffers = DEFAULT
	page utilization percent = DEFAULT
	number of devices = 30
	disable disk mirroring = DEFAULT
	allow sql server async i/o = DEFAULT
[SQL Server Administration]
	procedure cache size = 107520
        runnable process search count = 100
        number of aux scan descriptors = 1000
[User Environment]
	number of user connections = 500
	stack size = DEFAULT
	stack guard size = DEFAULT
	permission cache entries = 40
	user log cache size = 2560

Rest is default including network.

Hi,

I have resolved this issue by moving my Sybase DB to DS8k. Anyway, thanks to all of you.

Regards,

VJM

Have you been able to get some relevant data from the database, like DB IO response time?
Some time ago we were deciding what to choose, Storwize or XIV.
I am really curious what your storage guy / IBM support has to say about why XIV is slower.

Hi,

I am neither a storage guy nor a DBA. While searching for this issue I read somewhere that XIV is a bit slower, and my live setup is on DS8K while this new one is on XIV. So I took a LUN from my storage guy and did the testing. On XIV my database disk was at 100% while writing. So that's it.

Regards,

VJM

It would be interesting if you could run ndisk64 with similar parameters against both disks: developerWorks: Wikis - Systems - nstress
It can give you and us some numbers, showing the difference in IOPS/throughput between the two technologies...
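Something along these lines against a test file on each storage box would do - create the file first, then run a random mixed read/write load with a few processes (flags are quoted from memory of the nstress README, so please verify them against the usage output of the ndisk64 version you download; file name, size, block size and runtime are arbitrary):

ndisk64 -C -f /testfs/ndisk.tmp -s 2G
ndisk64 -R -f /testfs/ndisk.tmp -r 50 -b 4k -t 120 -M 8

and then compare the IOPS/throughput figures reported at the end of each run.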