Your system looks OK to me.
Please update your kernel patches (especially the Recommended cluster) to the latest for Solaris 10. There are many bug fixes in 118833-24. No worries.
:p And don't forget: as a rule of thumb, the swap device should be configured at double the size of your physical memory (16384 MB).
The V440 is a stable server. Yours should keep going.
According to top I am using 7 GB of RAM. The moment I run something else besides the current applications, page-outs increase significantly, from almost 0 to 4000+.
That is causing the new processes to take more time.
That is a very old rule of thumb which I no longer agree with. Memory is so cheap and plentiful these days that you should hardly ever require swap; if you do, it is a sign of problems. I generally don't configure more than 4 GB of swap, even on a 64+ GB system.
In fact, if you have a lot of physical memory, of course I'm not asking you to give 128 GB to swap. But you will need at least 16 GB of swap for a 64 or 128 GB configuration, minimally.
This is the architecture of Solaris. You will need additional swap for your paging and for the space your crash dump needs (which might be huge). You wouldn't want your system to fail again after a system panic, would you?
Incredible - of course your dump could be rather large, but I run M9000s and have never seen a crash dump greater than a few gigabytes, tops. We are talking about 1-4 TB of memory and 64 SPARC64 VI processors (128 cores).
For your information, the M series is a collaboration between Fujitsu and Sun, and their architectures do differ, though you may find an M9000 works much like a 6900 system.
If it's going to be a "Sun" mid-range or high-end server, what I say still makes sense.
Yes, I agree with this. The old "two pounds of swap for every pound of memory" rule dates from the old days, when memory was expensive and it was cheaper to use disk for swap.
If you have 7 GB of RAM, something is really wrong if you need 14 GB of swap.
4GB of swap is more than enough, good post Annihilannic. Thanks.
There is a lot of information in this thread which is not valid. Almost everything Incredible has stated here is incorrect, both in terms of system configuration and the reasons given for choosing a swap size.
A few points worth noting:
You don't need to allocate any swap space to deal with crash dumps, and have not since Solaris 8.
Solaris will never go into a panic-reboot cycle as a result of not having savecore space; it will simply not save a crash dump if there is no space.
Twice memory as swap is no longer a good choice unless you really can't afford to upgrade.
If you have a lot of page-out activity you do not have enough memory; it's as simple as that.
You do not need a minimum of 16 GB of swap for 64 or 128 GB of memory, but you may need more swap if you have applications using ISM (Intimate Shared Memory) or DISM (Dynamic Intimate Shared Memory), such as Sybase or Oracle databases.
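To illustrate why DISM changes the sizing math (a sketch with invented numbers, not figures from this thread): DISM segments are backed by swap reservations, so swap has to cover the DISM segment on top of whatever baseline you would otherwise configure.

```shell
# Hypothetical sizing sketch: DISM segments reserve backing store from swap,
# so minimum swap grows with the size of the DISM segment.
dism_gb=16       # assumed size of a DISM-enabled Oracle SGA
base_swap_gb=4   # assumed baseline swap for everything else
min_swap_gb=$((dism_gb + base_swap_gb))
echo "minimum swap with DISM: ${min_swap_gb} GB"
```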
There is no reason to treat an M-Series differently from any other (SPARC) Solaris box.
In summary, you can get by to some extent without enough memory by adding swap, but it will hurt performance. Ideally you should have enough memory to run all your applications in memory; the general rule of thumb nowadays is about 30% of memory for swap, but there are more detailed recommendations in the Solaris documentation.
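The gap between the two rules of thumb is easy to see with a quick calculation (just a sketch; the 64 GB figure is an example, not anyone's actual system):

```shell
# Compare the old "twice RAM" rule with the newer ~30%-of-RAM guideline.
ram_gb=64
old_rule_gb=$((ram_gb * 2))         # old rule: swap = 2 x RAM
new_rule_gb=$((ram_gb * 30 / 100))  # newer guideline: ~30% of RAM
echo "RAM ${ram_gb} GB: old rule ${old_rule_gb} GB swap, newer guideline ~${new_rule_gb} GB swap"
```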
Jim Laurent at Sun wrote a blog on this topic about a year ago, which you could look up.
Glad you asked about ZFS, I was about to do the same.
The OP has 8 GB of RAM; 1.5 GB is used by processes while 6.5 GB is used by the kernel. This per se isn't a problem: unused RAM is wasted RAM anyway.
One major change ZFS introduced compared to UFS is that the page cache is no longer used to cache file contents; kernel memory is used instead. This dramatically increases the kernel RAM usage metric but has no real consequence, as this cache memory is still effectively free memory from the system's viewpoint.
On the other hand, if the OP isn't using ZFS, then there is a problem to investigate further. It is particularly a problem because the kernel already seems to be larger than the swap space and might grow even bigger before a potential panic. If the dumpadm intricacies are unknown to the system administrator, a crash dump might be truncated and then be partially or, worst case, wholly unusable.
That is the reason I was picky about that point in previous postings.
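For anyone following along, the current dump configuration can be inspected with dumpadm(1M); on Solaris 10 the output looks roughly like the fragment below (the device path and hostname are placeholders, not values from the OP's system):

```
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/dsk/c1t0d0s1 (swap)
Savecore directory: /var/crash/hostname
  Savecore enabled: yes
```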
Yes, indeed that was the reason for asking. The behavior you describe is what is expected from the ARC cache. However, it is worth noting there have been a number of bugs related to the cache memory not being freed quickly enough, so it doesn't always work the way it should, resulting in unnecessary paging. I have actually run into this issue, though I believe I have only seen it on x86 machines. It is possible, and advisable, to limit the ZFS cache for this reason if there is a possibility that it will interfere with other workloads.
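For reference, the usual way to cap the ARC is the zfs_arc_max tunable in /etc/system (the value is in bytes; the 4 GB figure below is just an example I picked, so size it for your own workload):

```
* /etc/system fragment: cap the ZFS ARC at 4 GB (0x100000000 bytes).
* A reboot is required for the setting to take effect.
set zfs:zfs_arc_max = 0x100000000
```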