Unable to find 8 GB of memory

I have one box with 16 GB of RAM, and top and vmstat show 8712M free. I'm unable to find which process is eating up the rest of the memory; the system is not running anything at the moment.

Try the top or prstat -a commands to check which process is using memory.

I already said I used top. The prstat -a output follows; can you tell me which one is the culprit?

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 14227 root       11M 9880K sleep   59    0   0:00:59 0.1% ldmd/13
 14270 root     3944K 3656K cpu23   59    0   0:00:00 0.0% prstat/1
 13842 root     6656K 5192K sleep   59    0   0:00:00 0.0% sshd/1
 13893 root     1760K 1480K sleep   59    0   0:00:00 0.0% sh/1
 13913 root     2768K 1968K sleep   59    0   0:00:00 0.0% telnet/1
  1252 root       11M 8944K sleep   59    0   0:00:39 0.0% avagent.bin/8
  1200 root       14M   11M sleep   59    0   0:03:15 0.0% naviagent/2
   890 root     2736K 1872K sleep   59    0   0:00:03 0.0% vntsd/8
   894 root     4656K 1536K sleep   59    0   0:00:00 0.0% sshd/1
   856 root     2632K 1472K sleep   59    0   0:00:00 0.0% in.rarpd/3
   926 root     3936K 2224K sleep   59    0   0:00:00 0.0% rpc.metad/1
   896 root       10M 3968K sleep   59    0   0:00:21 0.0% snmpd/1
 14202 root       19M   17M sleep   59    0   0:00:04 0.0% fmd/26
   697 root     1768K  216K sleep   59    0   0:00:00 0.0% efdaemon/1
   853 root     1720K  880K sleep   59    0   0:00:02 0.0% utmpd/1
   817 root     2568K 1024K sleep   59    0   0:00:00 0.0% sac/1
   877 root     1760K 1472K sleep   59    0   0:00:00 0.0% sh/1
  9127 root     3496K 2704K sleep   59    0   0:00:00 0.0% bash/1
   645 root     3112K 1048K sleep  100    -   0:01:44 0.0% xntpd/1
   916 root     3976K  776K sleep   59    0   0:00:00 0.0% mdmonitord/1
   402 root       11M 8872K sleep   59    0   0:01:42 0.0% nscd/30
   849 root       12M 7248K sleep   59    0   0:00:15 0.0% inetd/4
   841 root     2968K 1016K sleep   59    0   0:00:00 0.0% ttymon/1
   425 daemon   5216K 3192K sleep   59    0   0:00:08 0.0% kcfd/5
   716 root     3328K 1208K sleep   59    0   0:00:00 0.0% cron/1
   435 root     8616K 6680K sleep   59    0   0:03:10 0.0% picld/53
   415 root     6376K 1928K sleep   59    0   0:00:00 0.0% syseventd/15
   811 daemon   3528K 2672K sleep   59    0   0:00:00 0.0% rpcbind/2
 13916 root     6656K 5192K sleep   59    0   0:00:00 0.0% sshd/1
   815 root     2808K 1536K sleep   59    0   0:00:00 0.0% rpc.bootparamd/1
   365 root     8184K 4976K sleep   59    0   0:00:06 0.0% devfsadm/12
   843 root     4200K 2656K sleep   59    0   0:00:01 0.0% syslogd/9
     9 root       12M 8824K sleep   59    0   0:01:06 0.0% svc.configd/15
     7 root       20M   17M sleep   59    0   0:00:23 0.0% svc.startd/13
   396 root     2976K 2000K sleep   59    0   0:00:00 0.0% drd/3
     1 root     3016K 1120K sleep   59    0   0:00:04 0.0% init/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    46 root      103M  128M   0.8%   0:13:57 0.1%
     6 daemon     36M   38M   0.2%   0:01:10 0.0%



What does this say:

kstat -n system_pages | grep "\<p"

?

bash-3.00# kstat -n system_pages | grep "\<p"
name:   system_pages                    class:    pages
        pagesfree                       1084562
        pageslocked                     925088
        pagestotal                      2028340
        physmem                         2058787
        pp_kernel                       1029524
bash-3.00#

But what does it mean?

It means you have 15.70 GB visible to the OS, of which 8.27 GB are free and 7.85 GB are used by the kernel.
That might just be some harmless cache if you are using ZFS, or pages locked by some (even non-running) applications using shared memory.

ldmd is running. Do you have LDoms configured on this system?

Yes, I'm using ZFS. Can you please tell me how you calculated the memory? I was not able to work that out, but the stats you gave match what top reports.

The same thing is happening with my LDoms too: they start with 3.5 GB free memory and end up with 1.5 GB free, though nothing is running in them.

If you are running ZFS and LDoms, what is the problem you are trying to fix? Having 8 GB free isn't a problem per se.

About the calculation: the numbers reported by kstat are in pages, and pages are 8 KB on SPARC.
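To spell out the arithmetic, here is a quick sketch using the kstat page counts posted above: multiply by the 8 KB page size, then divide by 1024 twice to get GB.

```shell
# Convert the kstat page counts above to GB (8 KB pages on SPARC).
awk 'BEGIN {
  kb = 8                                                      # SPARC page size in KB
  printf "physmem:   %.2f GB\n", 2058787 * kb / 1024 / 1024   # total RAM visible to the OS
  printf "pagesfree: %.2f GB\n", 1084562 * kb / 1024 / 1024   # free pages
  printf "pp_kernel: %.2f GB\n", 1029524 * kb / 1024 / 1024   # pages held by the kernel
}'
```

That works out to roughly 15.7 GB visible, 8.27 GB free, and 7.85 GB in the kernel, matching the figures quoted above.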

In the SunOS/Solaris environment, main memory/physical memory should really be called the page cache. Think of it as a cache of storage rather than program/data memory. To Solaris, any empty page is wasted faster-than-disk storage. Over time, the amount of free memory will settle at some value bigger than lotsfree (google it). If half your memory is free, then you don't have any pressure on the page cache. So you are chasing numbers rather than fixing an issue.

Thanks for the info, Jlliagre, it's quite helpful. Just to add, we can find how much memory ZFS is using with the following command:

kstat zfs:0:arcstats:size, which gives above 5 GB on my server. But isn't there any way to tune ZFS not to use that much memory, or to put its cache somewhere else?
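As a side note, kstat reports arcstats:size in bytes. A quick way to turn it into GB (sketched here with a sample 5 GB value piped in, since kstat itself only exists on Solaris; on a live box you would pipe `kstat -p zfs:0:arcstats:size` instead):

```shell
# Sample value standing in for: kstat -p zfs:0:arcstats:size
echo "zfs:0:arcstats:size 5368709120" |
  awk '{ printf "ARC size: %.1f GB\n", $2 / 1024 / 1024 / 1024 }'
```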

Yes, but what's the point of tuning for poorer performance when you have plenty of RAM available?
This ZFS memory is actually available should demand arrive from applications.
Unlike the ZFS one, the UFS file cache is reported as free memory, so it doesn't trigger this kind of question.

Like what?

You can cap the memory ZFS uses. You can also buy some SSD disks and use them as cache. But why would you do this if you have 8 GB of free memory?

If you are using ZFS, that is where your memory goes. By design, ZFS caches data in memory; this is called the ARC cache. It will release memory based on system needs, but it is going to take as much as is available for its cache. It is recommended not to modify this and to let ZFS manage it for you; however, you can adjust it by modifying your /etc/system file to limit the amount of memory that ZFS can reserve for its ARC. If you want to see how much ZFS is using, go to this site, get arcstat.pl, and run it on your system: Monitoring ZFS Statistic - Roman Ivanov

The output will look like this:

# ./arcstat.pl
    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
14:20:55  343M   36M     10   34M   10    2M   10    1M    0    10G   10G
14:20:56    11     0      0     0    0     0    0     0    0    10G   10G
14:20:57     3     0      0     0    0     0    0     0    0    10G   10G
14:20:58     6     0      0     0    0     0    0     0    0    10G   10G
14:20:59    26     0      0     0    0     0    0     0    0    10G   10G
14:21:00     6     0      0     0    0     0    0     0    0    10G   10G

This shows me that ZFS on my system is caching 10 GB of physical memory. I would be willing to bet you will find your 8 GB there.
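For completeness, the /etc/system adjustment mentioned above uses the zfs_arc_max tunable and takes effect after a reboot. A sketch only; the 2 GB cap below is an example value, not a recommendation:

```
* Example only: cap the ZFS ARC at 2 GB (0x80000000 bytes)
set zfs:zfs_arc_max = 0x80000000
```

As noted, on a box with plenty of free RAM it is usually better to leave this alone and let ZFS shrink the ARC on demand.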

Thanks, everyone. I know now where my memory is. :smiley: