I have a Linux system which triggers an alarm if RAM usage goes above 70%. When I look at the output of the top command I can't find any process using much memory. Can anyone tell me what could be the reason for the high memory utilization? Below is the free command output:
The used number in free is the sum of memory used by applications, buffers, and caches, and not only that of the applications. Linux will always try to use as much memory as possible to cache files to reduce disk access (which is a good thing).
Thanks for your response. My query is: who is eating up the memory? I need to get memory usage down so that I stop seeing critical alarms, which are triggered now as memory is 80% used.
Yes, 80% of your memory is used (but not reserved). But really only about 13% is reserved by applications, with the other 67% being used as cache and buffers to avoid time-consuming disk reads. That is, those 67% are used to keep libraries and files in a fast-access location (RAM is much, much faster than any disk, even SSDs), and that space is automatically re-assigned should any application require it.
So in order to get the amount of memory reserved by applications ("used"), you'll have to subtract the buffers and cached numbers from the used number.
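A minimal sketch of that arithmetic, reading the standard fields straight from /proc/meminfo (values are in kB; the exact accounting of Buffers and Cached varies a little between kernel versions):

```shell
#!/bin/sh
# Pull the relevant fields (all in kB) from /proc/meminfo
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
buffers=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)

# Memory actually reserved by applications: used minus buffers and cache
app_used=$((total - free - buffers - cached))
echo "Application-used memory: $((app_used / 1024)) MB of $((total / 1024)) MB"
```

On most systems this "application-used" figure will be far below the raw used number that free reports.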
I do understand about the cache and buffers, as you said. But that's how it's designed here... :( Will cleaning up files solve my problem temporarily? Say my root filesystem is at 52% disk usage; is removing unwanted files a solution?
No. As Pludi described so eloquently where I failed, your Linux system will always cache, but you cannot class this as unavailable memory.
You should class anything that is cache as free, so your total amount of free memory is free mem + cached mem.
It is not a design flaw; it is a monitoring flaw. You don't want to clear the cache down: as pludi said, cached memory gets overwritten by applications when necessary, just as free memory would be, but keeping things in cache is obviously faster than having to go to disk each time.
No. How much of the disk is used has nothing to do with how much RAM is used. Files are only cached once they are read (and not automatically the whole disk), so it only contains files that have been used. And I doubt you'd be willing to remove files that are required.
Adapt the script/program that's sending out the alert, that's the best advice I can give you.
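To illustrate, a sketch of what an adapted check could look like (the 70% threshold is taken from the thread above; everything else is an assumption, not the poster's actual script). It alerts on memory that applications genuinely cannot obtain, using the kernel's MemAvailable estimate (Linux 3.14+), which already discounts reclaimable cache:

```shell
#!/bin/sh
# Alert when truly unavailable memory exceeds a threshold (70% here, as in
# the alarm described above). MemAvailable is the kernel's estimate of how
# much memory applications could obtain without swapping; because it
# excludes reclaimable cache, a cache-heavy but healthy box stays "OK".
THRESHOLD=70

total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
used_pct=$(( (total - avail) * 100 / total ))

if [ "$used_pct" -gt "$THRESHOLD" ]; then
    echo "CRITICAL: memory ${used_pct}% used (excluding reclaimable cache)"
else
    echo "OK: memory ${used_pct}% used"
fi
```

On older kernels without MemAvailable, the same idea works by treating free + buffers + cached as available, as described earlier in the thread.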
If you really need to keep the cached value low, you can:
a) make the application on that box use direct I/O (recommended if it's a database or another application that does its own caching), or
b) tune your system to use less filesystem cache, which will most likely degrade that system's performance. Have a look at the sysctl utility for this, and be sure you know what you're doing before using this approach.
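For reference, the knob usually discussed for option b) is vm.vfs_cache_pressure; a sketch of inspecting it (and, as root, changing it), with the caveat from above that raising it trades cache hits for reclaim:

```shell
# The kernel default for vm.vfs_cache_pressure is 100; higher values make
# the kernel reclaim dentry/inode caches more aggressively. Reading the
# current value from /proc is equivalent to `sysctl vm.vfs_cache_pressure`:
cat /proc/sys/vm/vfs_cache_pressure

# As root you could raise it, e.g.:
#   sysctl -w vm.vfs_cache_pressure=200
# and persist the change across reboots via /etc/sysctl.conf:
#   vm.vfs_cache_pressure = 200
```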
Edit: I second pludi's and Tommyk's advice: adapting your monitoring script is the best solution to your problem.
Everywhere this is counted as used-up memory; the system understands it as 86% used, and I assume my alarm is also triggered because of this. I have to find some way to fix it. Thanks a lot for your explanation.
%memused is worked out from %memfree of the total RAM. Although the outputs show cached memory as used, cached is by no means the same as used. You cannot rely on this value for monitoring as long as the system is caching, which it always will be.
If you reboot your machine it will clear the cached memory, but you can't reboot every 5 minutes to keep it low, as the cache builds up again.
I have a server with 32GB RAM; it is only using 2GB, with 29GB cached and 1GB shown as free. However, if an application requires more memory it will use cached memory before it goes into swap.
I think you need to think again about this: it is not a problem that needs fixing; this is your box running at its optimal performance. Your alarm is triggered by a figure that is not representative of what you're actually looking for.