Intimate Shared Memory (ISM)

Dear all,

Could anyone help explain ISM to me? I use SunOS 5.10 and have three questions:

  1. Is it true that the ISM value is set to TRUE by default in both the global zone and containers on Solaris?

  2. Is it true that if ISM is TRUE, then swapping to disk (virtual memory) won't work (the vmstat command will show 0 in the po column)?

  3. What are the effects if I disable ISM (set it to FALSE)?

Thanks.

What boolean "ISM value" are you referring to?

Hi,

I refer to the USE_ISM parameter. I once read that this value defaults to TRUE if it is not mentioned in /etc/system. Thanks.

There is no such parameter in Solaris, so setting USE_ISM in /etc/system would have no effect.
You might be confusing this with an Oracle parameter (init.ora).

Oops, my mistake.

I mean the _SHM_USE_ISM parameter in /etc/system for Solaris, and the USE_ISM parameter in initSID.ora for Oracle.

You're right, I have Oracle 10g on a Solaris machine. I didn't find either parameter in those files, so I assume the ISM feature is ON.

What I know is that ISM doesn't support paging to disk. And I read that Oracle 9i supports DISM, which does support paging to disk. So I'm wondering how to check whether my Oracle uses the DISM feature or not. Do you know how to check this?
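One way people typically look at this is via pmap on one of the Oracle processes: on Solaris, ISM segments are tagged differently from DISM segments in the pmap output. The sketch below is runnable plain shell; the sample line and shmid value are made up to stand in for real `pmap -xs <pid>` output, so treat the tag names as an assumption to verify on your own box:

```shell
# Hypothetical check for ISM vs DISM. On Solaris you would capture
#   pmap -xs <pid-of-an-oracle-process>
# and look at the SGA segment, which is tagged "[ ism shmid=... ]"
# or "[ dism shmid=... ]". The sample line below stands in for real
# pmap output so the logic can be shown without a Solaris box.
pmap_line='0000000380000000 4194304 4194304 - 4194304 4M rwxsR [ ism shmid=0x5 ]'
case "$pmap_line" in
  *'[ dism '*) mode=DISM ;;
  *'[ ism '*)  mode=ISM ;;
  *)           mode=unknown ;;
esac
echo "SGA shared memory mode: $mode"
```

The case branches test the DISM tag first, since any line containing "dism" also contains "ism" as a substring.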

If my Oracle is not in dynamic mode, then since I have limited physical RAM, I'm afraid this will cause the system to hang when simultaneous updates are executed. Your advice would be helpful. Thanks.

If I recall correctly from my VOS days, ISM and DISM usage by Oracle is controlled by the value of the SGA_MAX_SIZE tunable in init.ora. If it is unset, then you will use ISM. If set, then you will use DISM.

ISM, if I remember right, causes the pages of the Oracle SGA to be locked in memory. Other pages are still pageable.

If you set SGA_MAX_SIZE larger than your initial SGA, then DISM is enabled and ISM is disabled. DISM can cause some unexpected performance issues...
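If that rule of thumb is right, the choice shows up directly in the parameter file. A hypothetical init.ora fragment (the 4G/8G values are made up for illustration, not recommendations):

```
# Per the rule of thumb in this thread:
#   SGA_MAX_SIZE unset              -> Oracle requests ISM
#   SGA_MAX_SIZE > initial SGA size -> Oracle requests DISM
sga_target=4G
sga_max_size=8G   # larger than the running SGA, so DISM would be used
```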

Could you help explain a bit why DISM can cause performance issues?

I thought that with DISM it is possible for pages to be swapped out to disk, so when I run out of physical RAM I can still use the virtual memory on the hard disk?

Your guidance would be helpful.

Thanks.

I'm not sure how to interpret what you mean about paging (I assume you really meant paging and not swapping... they have different meanings on a Solaris system).

With DISM, deciding which pages in the SGA are pageable is managed by Oracle. So things can get interesting if there is pressure on the page cache (physical memory) from things not handled by Oracle. It may make decisions that cause poor performance. So, yes, you will use the swap areas (if that's what you mean by VM from the hard disk) more than you'd like.

You should really only use DISM if you have a real need to change the SGA on the fly. That's my opinion and I'm sticking to it... unless someone comes up with a better one :slight_smile:

Thanks for your explanation.

I conclude that with DISM, swapping to VM (disk) still functions. I'm asking this question because swap in my container doesn't work. With the command vmstat 5, the po column is 0 even when memory is fully in use. I thought DISM/ISM was the cause of this anomaly. I guess I have to find another cause then...

According to the documentation, these statistics are only reported at the zone level when a processor set is bound to that zone.

Hi,

And there is no solution for this so far? Do we need to upgrade the physical RAM?

Could you send the document to my email, uk.maniac1@yahoo.co.uk? Thanks.

A solution to what problem ?
vmstat not reporting paging activity with zone granularity?

The documentation is on your host:

man vmstat

What metric do you use to justify that your memory is "full"? If your memory were full from a Solaris point of view, you would first start paging, then swapping, and performance would go bad. You would also probably see a non-zero b count in vmstat, among other things.
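To make those metrics concrete, here is a small sketch (plain shell, runnable anywhere) that pulls the b and po columns out of saved vmstat output. The sample text and its values are made up to stand in for what `vmstat 5` would print on a real host:

```shell
# Sample vmstat output (illustrative values, not from a real host).
vmstat_out=' kthr      memory            page
 r b w   swap  free  re  mf pi po fr de sr
 0 0 0 812345 10234  12  30  0  0  0  0  0'
# Column 2 is b (blocked kernel threads), column 9 is po (KB paged out).
summary=$(printf '%s\n' "$vmstat_out" | awk 'NR==3 { print "b=" $2 " po=" $9 }')
echo "$summary"
```

A b count and po rate that both stay at 0 under load would suggest the kernel is not under the memory pressure you expect, which fits the capped-zone explanation discussed below.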

To jlliagre:
I meant a solution for why swap doesn't work in the Solaris zone.

To jp2542a: My application in the zone reported that malloc failed due to being out of memory; that's the obvious metric I used. I used vmstat to check the resource stats and found that my zone doesn't use the swap area allocated on disk.

I guess my problem is clear: I'm running out of memory and swapping to disk doesn't work. I tried posting and searching for a solution, but I got back-and-forth explanations instead.

Instead of vmstat, you should run "prstat -Z" in the global zone to figure out how memory is used by your zones. There is no dedicated swap for zones; virtual memory is global.
Have you capped the physical, swap, or locked memory for that zone?
Which Solaris update are you using?

Ok, I'll use prstat -Z the next time the response time is slow, since this doesn't occur every day.

I already cap physical, locked, and swap memory.

I'm not sure that we use any Solaris update; currently the OS version is SunOS 5.10.

I don't recall if I ever mentioned this, but swap -s in my global zone works fine; the value of pages swapped in/out is not 0. Meanwhile, in the container (where the out-of-memory occurs), the pages swapped in/out value from swap -s is 0.

Thanks.

Ah! Malloc failed. That means you either don't have enough swap to back the allocation request (a configuration issue) or you ran out of virtual address space (most likely a programming issue).

If you are capping memory, it is to be expected that you run out of it faster than in the global zone.
Can you tell us more about the caps you set?

Here are the capped-memory settings in my zone:

capped-memory:
        physical: 8G
        [swap: 16G]
        [locked: 8G]

I set the limit to unlimited.

Thanks.
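If raising the swap cap turns out to be the fix, a hypothetical zonecfg session for a zone named myzone might look like the following (the 24G value is illustrative only, not a recommendation):

```
# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> set swap=24G
zonecfg:myzone:capped-memory> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```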

What are your kernel/project settings within your zones?

What's the output of:

projects -l

and

zonecfg -z [zonename] info 
# pooladm

Sounds to me like you need to raise a few of your project settings.

SBK