Regarding AIX memory utilization calculation

Hello,

I am working on a small program that calculates the memory usage of AIX servers. I am using the svmon -G command to get the memory usage. For example, consider the following output.

$ svmon -G
               size       inuse        free         pin     virtual   mmode
memory      1957888      670177     1287711      411476      512709     Ded
pg space     131072        2929

               work        pers        clnt       other
pin          350964           0           0       60512
in use       512709           0      157468

PageSize   PoolSize       inuse        pgsp         pin     virtual
s    4 KB         -      269713        2929       85684      112245
m   64 KB         -       25029           0       20362       25029
$ 

Am using "in use" value at the 6-th line number as memory usage. It gives 26 % as a memory usage. Also am using /bin/ps -eo comm,pmem,args to get the all the process memory usage to confirm with previous calculation.But value calculated by both methods does not match.

Please advise.

Use the code tags for code to preserve spaces.

In a VM system, grace time and similar VM stats are a better measure of paging activity, which indicates how intensely the free RAM pool is used.

Maybe you should also use the "k" option to take kernel processes into account?

/bin/ps -keo comm,pmem,args

As I posted earlier here - a better "svmon -G" command looks something like this:

# svmon -G -O unit=auto
Unit: auto
--------------------------------------------------------------------------------------
               size       inuse        free         pin     virtual  available   mmode
memory      512.00M     501.79M       10.2M     246.11M     662.28M      3.49M     Ded
pg space      1.50G     196.83M

               work        pers        clnt       other
pin         219.99M          0K          0K       26.1M
in use      484.54M          0K       17.3M

If you have additional questions about svmon and/or analyzing AIX memory, please ask them there.

Hope this helps with your initial question!

I noticed the amount of pinned memory. Have you taken shared memory segments into consideration? Use ipcs to analyse these.
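
For orientation, the segments that ipcs -m lists are created along these lines (a minimal sketch with made-up sizes, not taken from any particular product):

/*
 * Minimal sketch of a System V shared memory segment -- the kind of
 * thing "ipcs -m" will list. Size and lifetime are made-up example values.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int id = shmget(IPC_PRIVATE, 64 * 1024, IPC_CREAT | 0600);  /* 64 KB segment */
    char *addr;

    if (id == -1) {
        perror("shmget");
        return 1;
    }
    addr = shmat(id, NULL, 0);                  /* attach it to this process */
    if (addr == (char *)-1) {
        perror("shmat");
        return 1;
    }
    sprintf(addr, "hello from pid %ld", (long)getpid());
    sleep(30);                                  /* time to run "ipcs -m" in another window */
    shmdt(addr);                                /* detach ... */
    shmctl(id, IPC_RMID, NULL);                 /* ... and remove the segment again */
    return 0;
}

A real application (a database, say) would use a fixed key via ftok() and keep the segment around for other processes to attach; that is what you would then see in the ipcs output.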

I hope this helps.

bakunin

Yes, good old shared memory is one of the traditional UNIX IPC mechanisms, alongside semaphores and message queues. I am not sure why these segments have to be pinned. Pinning is usually for peripheral/I/O support or the paranoid.

You can do similar things without shared memory or pinning by using mmap() and files. An area of a file can be mmap()'d by all related processes, no root required. The content is durable across reboots, too!
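
A minimal sketch of that idea (the file name and size are made-up example values; the sleep() is just a crude stand-in for real synchronization): a file is mapped MAP_SHARED, the parent writes into the mapping, the child sees the same bytes, and the data ends up in the file afterwards.

/* Share data between related processes via an mmap()'d file --
 * no shared memory segment, no pinning, no root. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    const char *path = "/tmp/mmap_demo";        /* example file */
    size_t len = 4096;
    int fd = open(path, O_RDWR | O_CREAT, 0600);

    if (fd == -1 || ftruncate(fd, len) == -1)
        return 1;

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    if (fork() == 0) {                          /* child: wait a moment, then read */
        sleep(1);
        printf("child sees: %s\n", p);
        _exit(0);
    }
    strcpy(p, "hello via mmap");                /* parent: write into the mapping */
    wait(NULL);

    munmap(p, len);
    close(fd);                                  /* the data now lives in the file */
    return 0;
}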

Ahem, "pinned" is not "shared". It is just that most of the standard applications which use pinned memory (foremost Oracle DB) also use shared memory segments to a large extent. Therefore i took an "educated guess" that maybe the presence of pinned memory hints at such a software being used. Why they do it that way? Ask Oracle! I just administrate machines running their product.

I hope this helps.

bakunin

Yes, pinned means out of the pool of dynamically mapped pages. They can be shared or not depending on what they are! Oracle is paranoid, or doing a lot of I/O. If you use it you will not lose it!

I guess I need to write a program to find out definitively which system calls are being used to share memory segments on AIX.
IIRC (if I recall correctly), shmap()/mmap() pins a file into memory, and misuse by applications can have a tremendous impact on memory management. Way back when (AIX 4.1.4 and 4.1.5, if I recall), shmap()/mmap() were null operations on AIX; for POSIX they only need(ed) to be defined (a "standards guy" might verify this), i.e., there was/is no required implementation behavior. And why a null operation? Because AIX was already caching files in memory by default.
Note: currently (since a patch in AIX 4.2 or 4.3) the expected shmap/mmap behavior is implemented in AIX, so that has also been true since "way back".

Again, on AIX the preferred tool for memory analysis is svmon.

I see many systems have a malloc option that is mmap()-based (it uses no swap, though sometimes temp files are on /tmp, and so is swap!). Later Solaris releases use mmap() on FILE* input for flat files -- no buffer. I'd like to see C/C++ APIs for no-root semaphores and better queues (like MQ or TIB) built on mmap(). I suppose JMS is there already.
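
Process-shared POSIX semaphores in an mmap()'d region come fairly close to that already, where they are supported -- a rough sketch, assuming MAP_ANONYMOUS and pshared semaphores are available on the platform:

/*
 * A process-shared POSIX semaphore living in an mmap()'d region:
 * no root, no SysV IPC. pshared = 1 makes it usable across fork().
 */
#include <stdio.h>
#include <semaphore.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    /* the semaphore must live in memory both processes can see */
    sem_t *sem = mmap(NULL, sizeof(*sem), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sem == MAP_FAILED)
        return 1;
    sem_init(sem, 1, 0);                        /* pshared = 1, initial value 0 */

    if (fork() == 0) {                          /* child: do some "work", then signal */
        printf("child: done, posting\n");
        sem_post(sem);
        _exit(0);
    }
    sem_wait(sem);                              /* parent: block until the child posts */
    printf("parent: got the post\n");
    wait(NULL);

    sem_destroy(sem);
    munmap(sem, sizeof(*sem));
    return 0;
}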

It seems like mmap() is too little studied. It is the door to an additional dimension in application resources -- control of your own VM. With it, you can exploit whatever RAM is available in a small program. Now if I could just reposition pages within files without a read-write through the data area. I suppose the disk defragmenter will just do the read-write later! That's IT life! You think you have an interrupt, but down in the kernel someone else is polling.