type of pages being paged in/out

Hello people

I see that our system is paging and paging space usage is growing; at the same time, I see that usage of Comp memory grows and Client memory decreases.

# lsps -s
Total Paging Space   Percent Used
      65792MB              33%

MEMORY
Real,MB   81920
% Comp     69
% Noncomp  30
% Client   30

Can I assume that when the balance between Comp and Noncomp memory usage changes and paging space usage increases as well, it means that Client (Noncomp) pages were paged out to paging space?

The main question is: how can I identify what type of pages (computational or client) were paged out or paged in? Where can I see such statistics in AIX?

Thank you for the responses.

The "Overview of AIX page replacement" documentation covers monitoring; perhaps you can compare totals in real time to see what sort of activity is happening, if there is no ready-built tool.
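If there is no ready-built tool, a minimal sampler in that spirit (a sketch, not from the thread): `numperm%` from `vmstat -v` is the non-computational share of real memory, so a rising paging-space %Used while `numperm%` holds steady or falls points at computational page-outs.

```shell
#!/bin/sh
# Extract the numperm percentage from `vmstat -v` output on stdin.
numperm_pct() {
    awk '/numperm percentage/ { print $1 }'
}

# Extract the "Percent Used" figure from `lsps -s` output on stdin.
psused_pct() {
    awk 'NR == 2 { sub(/%$/, "", $2); print $2 }'
}

# On a real AIX box: 30 samples, 10 s apart (lsps exists only on AIX).
if command -v lsps >/dev/null 2>&1; then
    i=0
    while [ "$i" -lt 30 ]; do
        printf '%s numperm=%s%% ps_used=%s%%\n' \
            "$(date '+%T')" "$(vmstat -v | numperm_pct)" "$(lsps -s | psused_pct)"
        sleep 10
        i=$((i + 1))
    done
fi
```

Logging both numbers side by side over a busy period should make the correlation the original question asks about visible.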

There are values for computational and non-computational memory in svmon output, but it shows only what is in real memory, not what is in paging space...
I'm more interested in the page-out rate of computational and non-computational pages.

If I see that non-comp memory decreased, then it was paged out to a file system and not to the paging space itself, if I'm not mistaken, but this is still page-out activity. Then why is paging space usage growing? Is it filled with computational pages only?

Can we assume this is an untuned box running AIX 5.3? If so, set lru_file_repage to 0 and the paging will decrease dramatically. There are a bunch of threads here on what else you can do ... If your box is properly tuned, then most likely you are paging non-comp, but to me it doesn't look like it.

Regards
zxmaus

It's AIX 6.1; lru_file_repage is set to 0, minperm% = 5, maxperm% = 95, maxclient% = 95.

So it's instructed to page out only non-computational pages.

IBM suggested increasing the minfree and maxfree settings so that lrud is more active in freeing pages.
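For reference, such a change would look like this (the values are illustrative only; minfree and maxfree apply per memory pool, and the documented guideline is to keep maxfree - minfree at least as large as j2_maxPageReadAhead):

```shell
# Illustrative values only -- size these for your own workload and
# memory-pool count.  -p makes the change persistent across reboots.
vmo -p -o minfree=1280 -o maxfree=1536
```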

zoom,

can you show us your vmstat -Iwt 2 10 output from a really busy time on your system?

Your box should not page at all as far as I can see; rather, it should scan and free pages when your free list gets very small. I had a similar problem just a few weeks ago with one of my AIX 6.1 boxes (and IBM could not see any problem at first). Like you, I had about 25 GB of non-comp memory, which is more than sufficient for most systems, but my box was scanning and freeing itself to death. After being escalated to 3rd level because I insisted I had a problem, they provided me an efix that solved the problem immediately and permanently, as this strange 'paging even though there is sufficient memory + scanning/freeing excessively' is a known bug. So my question would be: how high is your scan-to-free ratio right now?
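One rough way to get that ratio from saved `vmstat -wt` output (a sketch; fr and sr sit in columns 8 and 9 of the `-wt` layout, one column further right under `-Iwt`):

```shell
#!/bin/sh
# Average scan-to-free ratio (sr/fr) over all vmstat sample lines on stdin.
# A ratio well above 1 means lrud scans many pages for each page it frees.
sr_fr() {
    awk '$1 ~ /^[0-9]+$/ && NF >= 20 { fr += $8; sr += $9 }
         END { if (fr) printf "sr/fr = %.2f\n", sr / fr }'
}

# usage on AIX:  vmstat -wt 1 50 | sr_fr
```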

Maybe it is similar on your box? They gave me PTF U837435

Regards
zxmaus

We have 80 GB of real memory. In case I did not mention it, we run an Oracle DB on the system, and as far as I know it is usually hungry for computational memory.

I have saved vmstat output from 2 days ago, when the system was paging, but without the -I flag.

Any suggestions on the stats below?

# vmstat -wt 1 50

System configuration: lcpu=12 mem=81920MB ent=4.15

 kthr   memory                 page                       faults                 cpu             time
-------  ------------------------------------ ------------------ ----------------------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa    pc    ec hr mi se
 50  32   19586241      10654     0   899     0 30407  42956     0 12452  84930 24211 86 14  0  0  5.99 144.3 13:56:42
 57  24   19574642      11469     0   395     0 18630  25800     0 10805  87318 20980 87 13  0  0  5.99 144.4 13:56:43
  0   0   19573830      11694     0   323     0 22416  72915     1  9943  81807 23153 87 13  0  0  6.00 144.5 13:56:44
 55  31   19580318      10492     0   890     0 29191  41214     0 10861 103327 21363 87 13  0  0  5.98 144.1 13:56:45
 53  22   19589125       9669     0   493  3472 32245  51115     0 11359  85225 21000 87 13  0  0  5.94 143.1 13:56:46
 39  45   19579715      10950     0   585  4776  8347  83604     0  8291  96653 13998 90 10  0  0  6.01 144.8 13:56:47
 43  34   19576204      11290     0  1059     0 22258  61472     0 11956 161918 20279 86 14  0  0  5.99 144.4 13:56:48
 55  23   19582398      10613     0  1224     0 30898  40402     0 12282 108149 19988 85 15  0  0  5.99 144.3 13:56:49
 43  39   19596937       9665     0   525  6646 31487  70471     0  8198 116227 15955 86 14  0  0  5.99 144.2 13:56:50
 56  36   19602851      11116     0   547  9091 21804  92776     0  8565 122928 14486 87 13  0  0  5.99 144.4 13:56:51
 55  33   19612355      11776     0  1205  2401 32602  67571     0  9969 101132 19536 87 13  0  0  5.99 144.4 13:56:52
 59  27   19612958       9951     0  1342  3391 23245  28008     0 11897 120336 22279 87 13  0  0  6.01 144.7 13:56:53
 64  19   19612442      11375     0   632  8739 23997  54422     0 12854 126162 23925 84 16  0  0  5.99 144.3 13:56:54
 53  35   19628881      10715     0   901  6874 24036 117416     0  6716 115341 15583 86 14  0  0  5.98 144.0 13:56:55
  0   0   19617723      15333     0  1039   214  7497   7527     0  7946 332319 71947 90 10  0  0  6.01 144.8 13:56:56
100  16   19628065      10639     0   963     0 32977  39197     0 10642 317848 123714 79 21  0  0  6.00 144.5 13:56:57
  0   0   19604326      27589     0  1016  2942 14517  17680     0  9357 119917 16855 60 40  0  0  6.01 144.7 13:56:58
 59  46   19601496      11339     0   976     0  6554  26965     0 11104 129211 20731 87 13  0  0  6.00 144.5 13:56:59
 65  31   19600216      10691     0   845     0 21576  64180     1 11287 106020 21053 87 13  0  0  5.97 144.0 13:57:00
 67  21   19599421      10664     0  1439     0 28527  35218     0 12221  91819 22345 86 14  0  0  5.96 143.6 13:57:01
 kthr   memory                 page                       faults                 cpu             time
-------  ------------------------------------ ------------------ ----------------------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa    pc    ec hr mi se
 58  21   19599235      10927     0   968     0 22100  22986     0 10640 154316 19789 87 13  0  0  5.99 144.4 13:57:02
 60  17   19612455      10859     0  1352     0 31795  73250     0  9676 105401 20180 86 14  0  0  5.94 143.1 13:57:03
 67  18   19597492      13340     0  1467  1576  8890  28669     1 11034  98093 17871 88 12  0  0  5.98 144.1 13:57:04
 54  23   19601329      10550     0   746     0 21498  23955     0 10921 107948 19137 88 12  0  0  5.99 144.3 13:57:05
 66  27   19589092      12604     0   921     0  8567  10177     0 10294 127234 17988 89 11  0  0  5.99 144.4 13:57:06
 55  19   19590418      12040     0  1471     0 21869  22702     0 10084  93156 17766 90 10  0  0  5.99 144.3 13:57:07
 61  17   19589893      10505     0   518     0 19931  69584     0  8191 115953 22017 89 11  0  0  6.00 144.5 13:57:08
 66  23   19581600      12367     0   372     0 14610  27604     1  8092 133452 35738 89 11  0  0  5.99 144.4 13:57:09
  0   0   19576270      14603     0   357     0 24366  28005     0  8279  92462 17792 89 11  0  0  6.00 144.5 13:57:10
 53  21   19574321      10916     0   245     0 11368  11616     0  8170  98819 14636 91  9  0  0  5.99 144.4 13:57:11
 79  87   19580973      10435     0   155     0 24973  43934     0  7599  80480 19025 90 10  0  0  5.99 144.3 13:57:12
 56  30   19585494      10905     0   902     0 26829 103304     1 10790  98629 24273 86 14  0  0  6.00 144.5 13:57:13
 56  23   19589050      10514     0   486     0 22266  55616     0  9210 105709 20334 89 11  0  0  5.99 144.4 13:57:14
 50 101   19587932      12694     0   353     0 23691  52835     0  9576  91472 22026 88 12  0  0  5.98 144.1 13:57:15
 52  32   19592414      11794     0  1049  4005 24242  83626     0 10101 101352 18770 88 12  0  0  6.00 144.5 13:57:16
 59  16   19607402      10611     0  1636  1455 35453  71842     1 11907  93990 21436 86 14  0  0  6.00 144.6 13:57:17
 54  19   19601814      11293     0  1644     0 16944  17551     0 10624 107926 21901 88 12  0  0  5.99 144.4 13:57:18
 55  22   19617178       9763     0   527  6028 38959  52664     0 12361 163764 30047 85 15  0  0  6.02 144.9 13:57:19
 59  20   19624308      12069     0  1059 11011 29785  58740     0  9919 121105 20893 85 15  0  0  5.96 143.7 13:57:20
 61  15   19606622      10739     0  1999     0 13636  40117     1 10881 157683 20670 86 14  0  0  6.00 144.6 13:57:21
 kthr   memory                 page                       faults                 cpu             time
-------  ------------------------------------ ------------------ ----------------------- --------
  r   b        avm        fre    re    pi    po    fr     sr    cy    in     sy    cs us sy id wa    pc    ec hr mi se
 50  20   19614515      13718     0   963     0 35873  36624     0 11771 138131 20688 84 16  0  0  5.99 144.3 13:57:22
 50  19   19611292      11685     0  1044     0 16801  17505     0 11865 101876 19603 89 11  0  0  5.98 144.2 13:57:23
 64  19   19608099      10734     0  1499     0 22949  68062     0 10512  93798 19521 87 13  0  0  6.00 144.6 13:57:24
  0   0   19618775      10522     0  1399     0 32650  37033     1 10516 105945 21526 85 15  0  0  5.99 144.3 13:57:25
 53  17   19610636      14427     0  1152     0 15200  15761     0  8560 116186 16393 86 14  0  0  6.00 144.6 13:57:26
 54  19   19613128      11520     0   959  1892 12575  14683     0  7575  76820 12665 91  9  0  0  5.99 144.4 13:57:27
 49  16   19620876      11118     0   482  7708 26408  72068     0 10772 163651 19375 82 18  0  0  6.00 144.5 13:57:28
 54  19   19611742      13191     0   831   141 15885  21959     1 10009 125374 16798 83 17  0  0  5.98 144.1 13:57:29
 53  13   19604376      13235     0   748     0 15730  24210     0  8667 100917 16720 83 17  0  0  6.00 144.7 13:57:30
 51  19   19596551      14312     0   613     0 12341  12694     0  9243  98722 15107 86 14  0  0  5.98 144.1 13:57:31
# svmon -G
               size       inuse        free         pin     virtual   mmode
memory     20971520    20895377       10607     4564324    19496715     Ded
pg space   16842752     5667782

               work        pers        clnt       other
pin         3952074           0           0      612250
in use     14447994           5     6447378

PageSize   PoolSize       inuse        pgsp         pin     virtual
s    4 KB         -    10584481     5480166      811284     9154139
m   64 KB         -      644431       11726      234565      646411
# vmstat -v
             20971520 memory pages
             20286310 lruable pages
                10525 free pages
                    4 memory pools
              4631080 pinned pages
                 80.0 maxpin percentage
                  5.0 minperm percentage
                 95.0 maxperm percentage
                 31.3 numperm percentage
              6357498 file pages
                  0.0 compressed percentage
                    0 compressed pages
                 31.3 numclient percentage
                 95.0 maxclient percentage
              6357493 client pages
                    0 remote pageouts scheduled
              1167074 pending disk I/Os blocked with no pbuf
               600159 paging space I/Os blocked with no psbuf
                 2484 filesystem I/Os blocked with no fsbuf
                  443 client filesystem I/Os blocked with no fsbuf
            249209795 external pager filesystem I/Os blocked with no fsbuf
# vmo -a
     ame_cpus_per_pool = 8
       ame_maxfree_mem = 0
   ame_min_ucpool_size = 0
       ame_minfree_mem = 0
       ams_loan_policy = n/a
   force_relalias_lite = 0
     kernel_heap_psize = 65536
          lgpg_regions = 0
             lgpg_size = 0
       low_ps_handling = 1
               maxfree = 1088
               maxperm = 19271992
                maxpin = 16912783
               maxpin% = 80
         memory_frames = 20971520
         memplace_data = 2
  memplace_mapped_file = 2
memplace_shm_anonymous = 2
    memplace_shm_named = 2
        memplace_stack = 2
         memplace_text = 2
memplace_unmapped_file = 2
               minfree = 960
               minperm = 1014314
              minperm% = 5
             nokilluid = 0
               npskill = 131584
               npswarn = 526336
             numpsblks = 16842752
       pinnable_frames = 16306688
   relalias_percentage = 0
                 scrub = 0
              v_pinshm = 0
      vmm_default_pspa = 0
    wlm_memlimit_nonpg = 1

Hi,

From this output, I would only recommend adding 20 GB of memory, and your paging will stop. From your stats you are using close to 100% of memory as computational - no wonder your box is paging - and yes, this will for sure be DB content as well ... and if your DBAs are doing rman backups on top, it gets really bad.

What you could try is mounting your Oracle filesystems with the noatime option and switching Oracle to setall - this will give you more free memory in your free list and may reduce your memory footprint.
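As a sketch of the noatime part (the path is a placeholder; test on a non-production LPAR first; "setall" refers to Oracle's filesystemio_options init parameter, which the DBAs would change):

```shell
# Stop atime updates on every read of an Oracle data filesystem.
# /oradata is a placeholder -- substitute your real mount points.
chfs -a options=noatime /oradata

# Remount so the new option takes effect.
umount /oradata
mount /oradata
```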

Regards
zxmaus

zxmaus, thanks for your suggestions.

How did you calculate that the system needs 20 GB?

I found in one IBM book that memory needs are virtual + pers + clnt from the svmon output.

So the deficit was 19496715 + 5 + 6447378 - 20971520 = 4972578 pages; at 4 KB per page that is 19890312 KB ≈ 19 GB.

Are these calculations correct?
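The arithmetic itself checks out (a quick re-computation; all svmon -G figures are 4 KB pages):

```shell
awk 'BEGIN {
    virtual = 19496715; pers = 5; clnt = 6447378   # from svmon -G "in use" row
    real    = 20971520                             # memory "size" in pages
    deficit = virtual + pers + clnt - real         # in 4 KB pages
    printf "deficit = %d pages = %d KB = %.1f GB\n",
           deficit, deficit * 4, deficit * 4 / 1048576
}'
# prints: deficit = 4972578 pages = 19890312 KB = 19.0 GB
```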

# svmon -G
               size       inuse        free         pin     virtual   mmode
memory     20971520    20895377       10607     4564324    19496715     Ded
pg space   16842752     5667782

               work        pers        clnt       other
pin         3952074           0           0      612250
in use     14447994           5     6447378

PageSize   PoolSize       inuse        pgsp         pin     virtual
s    4 KB         -    10584481     5480166      811284     9154139
m   64 KB         -      644431       11726      234565      646411

I don't understand yet what "virtual" represents in svmon. The value 19496715 is greater than the paging space and lower than real memory. Can you explain this?
I can show you some visual graphs captured by nmon over the last 3 days. We are interested in the time from 08:00 to 17:00.

This is from last Tuesday, when the system was paging heavily.

This is from Wednesday; the system paged out only until 12:00, and right after that the share of computational memory increased to 80%.

This is from Thursday, when the system almost did not page from 08:00 till 17:00, and as you can see, all this time computational memory stayed at the level of 75%-80%.

It seems to me that the system is comfortable when computational memory consumes 80% of total memory, which is 80 GB. 20% is used by the system, so application needs are 60% of 80 GB, which is 48 GB.

I have a few hundred Oracle boxes - in my experience the systems are most comfortable when comp (the avm value in vmstat x 4 KB) doesn't exceed 80%, as this leaves enough memory for all the Oracle forked processes, IO buffering, batch processing and so on.
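Applying that avm-times-4-KB conversion to the numbers in this thread (a quick check, not from the original posts):

```shell
awk 'BEGIN {
    avm     = 19496715            # virtual from svmon -G / ~avm from vmstat
    real_mb = 81920               # real memory in MB
    comp_gb = avm * 4 / 1048576   # 4 KB pages -> KB -> GB
    printf "comp ~ %.1f GB of %d GB real = %.0f%%\n",
           comp_gb, real_mb / 1024, comp_gb * 1024 * 100 / real_mb
}'
# prints: comp ~ 74.4 GB of 80 GB real = 93%
```

So this box is already well past the 80% comfort level described above.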
When my memory utilization exceeds those 80%, the system starts scanning/freeing memory, which uses CPU and slows down the DB while it waits for enough freed-up memory to continue processing - which obviously is bad. The higher the scan-to-free ratio - that is, the more pages that need to be scanned to free up the memory actually needed for the given workload - the slower the system gets and the more CPU is used. So I make sure I always have plenty of memory; particularly for Oracle, the need for non-comp memory is very real, since it is usually a filesystem-based DB, and not finding file cache when needed slows down the DB too, as no IO can happen ...
Please note - during rman backups you will still see some scan and free, as this puts - at least in my environments - a large amount of additional load onto the systems. So my 80% applies to busy times, but not while rman runs. Nmon is pretty helpful for finding out what is good for your system and when your busy times are.

Virtual memory, btw, is physical memory + paging space, counted in 4 KB pages. Virtual memory in use is how much of this you are actively using - ideally visibly less than you physically have :slight_smile:

Regards
zxmaus

Isn't this almost 25 GB of FS data in physical memory while having >20 GB in the paging space?
Reducing (forcing down) FS caching would be one way to go, in my opinion; why does an Oracle box need to cache 25 GB of FS data?

Oh, many reasons:

  • because Oracle is a filesystem + process based DB
  • because each Oracle forked process needs memory - I assume this server has thousands of processes
  • because every batch / backup / DB load requires lots of file caching on top of what is in the SGA - each IO will use 1 page (4 KB) of memory

One thing that can drive paging up unnecessarily is static linking of apps, something you can change for apps developed locally. All apps using dynamically linked libs share the same pages, not copies, and since those pages are more frequently referenced, they stay in RAM.

We used to get a lot of RDBMS throughput out of a small platform by designing batch processes to process N records at a time and then commit. We also found that over-use of updatable cursors increased processing time. The way Oracle works, a long select can end up owning many pages as other processes update or delete those rows. So it helps the whole system to do things in small batches, and even in select-only programs, a commit may release pages tied up by update-capable cursors. If you think about it, even processing 128 records per commit gives you 99+% of any economy of scale over one at a time. Any locks are released sooner, so interactive work can get access. As batches get smaller, working-set pages are in RAM or CPU cache more often, and finished pages can roll out and not soon return. Smaller batches are also less likely to overwhelm caching and buffering in the disk subsystems, which would slow I/O to media speed. Also, the system tuning does not change on more active days, just the batch run time.
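A back-of-the-envelope model of that economy of scale (the costs C and R are purely illustrative): if a commit costs C and each row costs R, the per-row cost at batch size N is R + C/N, so by N = 128 you have already captured over 99% of the possible saving.

```shell
awk 'BEGIN {
    C = 1.0; R = 0.01                    # illustrative per-commit / per-row cost
    n = split("1 16 128 4096", N, " ")
    for (i = 1; i <= n; i++) {
        per_row = R + C / N[i]
        # share of the maximum possible saving vs committing every row
        saving = (R + C - per_row) / C * 100
        printf "batch %-5d per-row cost %.4f  %.2f%% of max saving\n",
               N[i], per_row, saving
    }
}'
```

With these numbers, batch size 128 yields 99.22% of the maximum saving, which matches the 99+% claim above.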

Interactive row sets tend to be small, but batch can bring a lot of pages, an unpredictable number, into play at once.


Thanks for your advice, guys.

I don't have direct access to the Oracle DB and don't know what it's doing. The DB is managed by our client.

At this moment the system is not paging out, but there are some issues anyway. I see many waits in the queue and some blocked I/O operations.
See below.

# vmstat -Iwt 1 20

System configuration: lcpu=12 mem=81920MB ent=4.15

   kthr     memory                 page                       faults                 cpu             time
-----------  ------------------------------------ ------------------ ----------------------- --------
  r   b   p        avm        fre    fi    fo    pi    po    fr     sr    in     sy    cs us sy id wa    pc    ec hr mi se
 20   8   0   19073983      15268  9364  1896     0     0  8521  24873  3872  49377 18149 79 21  0  1  5.42 130.5 11:17:19
  7  16   0   19065828      12586 10785  1191     5     0  1129   2890  3847  37785 15754 91  9  0  1  4.91 118.4 11:17:20
  5  11   0   19066437      11007  5477   147    31     0  4516  10454  3337  39842 13525 92  7  0  1  4.70 113.3 11:17:21
  4  15   0   19066152      10584  7502  2537    35     0  9303  19316  4053  23820 20048 89 10  0  1  4.67 112.5 11:17:22
  7  13   0   19069319      10611  9314  1618     1     0 13935  35082  3913  30277 23417 87 11  0  1  4.73 113.9 11:17:23
  7   9   0   19066310      14833  9568    79    82     0 10966  19981  4539  21673 20493 88 10  0  1  4.51 108.6 11:17:24
  5  11   0   19051653      20545 10865  2252   154     0  4197   5532  5330  26891 21889 86 13  0  1  4.60 110.9 11:17:25
  7  12   0   19052092      11591 11976  1283   628     0  8389  11622  6363  47647 33071 86 13  0  1  5.00 120.4 11:17:26
  6  11   0   19050240      10796 16977    97     0     0 14361  42973  5563  34192 32122 81 17  0  2  4.62 111.3 11:17:27
  8  11   0   19049775      10704 14122  2296     0     0 16544  18781  4727  17908 28365 85 14  0  1  4.71 113.4 11:17:28
  3  16   0   19049791      10758 16861  1399     0     0 17468  20331  5714  23736 29753 81 17  0  2  4.43 106.7 11:17:29
  6   9   0   19049410      10842 13262    28     0     0 13524  15228  5683  27415 25719 75 16  2  7  3.84  92.5 11:17:30
  7  18   0   19049421      10776  3159  2324     0     0  4760   5388  1846  18286  9474 77  7  3 13  3.53  85.1 11:17:31
  5  19   0   19049422      10637  3774   634     1     0  4220   4854  3565  26489 11654 80  7  6  7  3.66  88.1 11:17:32
  0   0   0   19049423      10692  3165     8     0     0  3228   3634  2831  19911  8719 79  6  6  9  3.58  86.3 11:17:33
  7  14   0   19049692      10613 12350   971     0     0 13574  15363  4841  32162 25576 85 14  0  1  4.29 103.3 11:17:34
 10  10   0   19049628      10804  9855  2179     0     0 11898  13431  3764  12747 22990 83 14  0  2  4.10  98.8 11:17:35
  6  16   0   19048700      11440  7508   152     0     0  7440   8292  4032  35291 25988 86 12  0  2  4.44 107.0 11:17:36
  6  18   0   19054224      10914  9375  3100     0     0 17225  20303  4621  31231 31954 87 12  0  1  5.10 122.9 11:17:37
 14  11   0   19058459      10781 14799  9437     0     0 28163  32285  5057  25853 31991 88 11  0  1  5.89 141.9 11:17:38

# vmstat -v
             20971520 memory pages
             20286310 lruable pages
                32162 free pages
                    4 memory pools
              4698583 pinned pages
                 80.0 maxpin percentage
                  5.0 minperm percentage
                 95.0 maxperm percentage
                 18.3 numperm percentage
              3714940 file pages
                  0.0 compressed percentage
                    0 compressed pages
                 18.3 numclient percentage
                 95.0 maxclient percentage
              3714935 client pages
                    0 remote pageouts scheduled
              1242466 pending disk I/Os blocked with no pbuf
               820283 paging space I/Os blocked with no psbuf
                 2484 filesystem I/Os blocked with no fsbuf
                  443 client filesystem I/Os blocked with no fsbuf
            257814289 external pager filesystem I/Os blocked with no fsbuf

And the number of "external pager filesystem I/Os blocked with no fsbuf" is growing every minute by 20-30 blocked operations.

# while true; do date; vmstat -v | grep external; sleep 10; done
Wed  9 Mar 11:19:24 2011
            257815036 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:19:34 2011
            257815036 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:19:44 2011
            257815036 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:19:54 2011
            257815036 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:20:04 2011
            257815036 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:20:14 2011
            257815045 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:20:24 2011
            257815058 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:20:34 2011
            257815058 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:20:44 2011
            257815087 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:20:54 2011
            257815087 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:21:04 2011
            257815087 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:21:14 2011
            257815087 external pager filesystem I/Os blocked with no fsbuf
Wed  9 Mar 11:21:24 2011
            257815087 external pager filesystem I/Os blocked with no fsbuf
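A variant of the loop above that prints the per-interval increase instead of the raw counter (a sketch; the counter only grows until reboot, so the delta is what matters):

```shell
#!/bin/sh
# Pull the fsbuf-starvation counter out of `vmstat -v` output on stdin.
fsbuf_blocked() {
    awk '/external pager filesystem I\/Os blocked/ { print $1 }'
}

# On AIX: sample every 10 s and print how much the counter grew.
if [ "$(uname 2>/dev/null)" = AIX ]; then
    prev=$(vmstat -v | fsbuf_blocked)
    i=0
    while [ "$i" -lt 30 ]; do
        sleep 10
        now=$(vmstat -v | fsbuf_blocked)
        printf '%s +%d blocked I/Os\n' "$(date '+%T')" "$((now - prev))"
        prev=$now
        i=$((i + 1))
    done
fi
```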

I'm going to increase the value of j2_dynamicBufferPreallocation, which is currently 16.
Can you suggest how to determine what I should set j2_dynamicBufferPreallocation to?

That largely depends on your workload - on our systems it is usually 128 or 256.

You should try to find out which volume group needs all the filesystem buffers that you don't have - you can then add buffers via the lvmo command to just that volume group ...
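For reference, the per-VG inspection and change might look like this (the VG name and value are placeholders; note that lvmo adjusts LVM pbufs per volume group, while j2_dynamicBufferPreallocation itself is a JFS2 tunable changed through ioo):

```shell
# Show current pbuf settings and the blocked-I/O count for one volume
# group (datavg is a placeholder name).
lvmo -a -v datavg

# Raise the per-PV pbuf count for just that volume group.
lvmo -v datavg -o pv_pbuf_count=1024

# The JFS2 dynamic fsbuf preallocation is an ioo tunable (-p = persistent).
ioo -p -o j2_dynamicBufferPreallocation=128
```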

Regards
zxmaus