Interpreting Linux's free command output

I have two questions about Linux's free command. Below I have provided output from my home laptop (Fedora 26), which has 16 GB of physical RAM, and from a production server (RHEL 7.4), which has 24 GB of RAM.

Question 1. What exactly does the buff/cache column in free's output mean? buff/cache is only 1 GB on my home laptop, but it is 18 GB on the production server below.

Question 2. To know how much RAM is really available to the system, can I trust the 'available' column rather than the 'free' column?
On my home laptop, the 'free' column shows 13 GB and 'available' shows 14 GB.
But on my production server, while the 'free' column shows just 2 GB, the 'available' column shows 9 GB.

My Home Laptop with 16 GB RAM (Fedora 26)

[sysadmin@keithspc ~]$ cat /etc/redhat-release
Fedora release 26 (Twenty Six)
[sysadmin@keithspc ~]$
[sysadmin@keithspc ~]$ uname -a
Linux keithspc 4.12.8-300.fc26.x86_64 #1 SMP Thu Aug 17 15:30:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[sysadmin@keithspc ~]$
[sysadmin@keithspc ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        854M         13G        385M        1.0G         14G
Swap:          7.8G          0B        7.8G
[sysadmin@keithspc ~]$
[sysadmin@keithspc ~]$ free -m
              total        used        free      shared  buff/cache   available
Mem:          15939         854       14073         385        1011       14362
Swap:          8034           0        8034
[sysadmin@keithspc ~]$ 


A production server (VM) with 24 GB RAM (RHEL 7.4)

[root@manhprod187 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
[root@manhprod187 ~]#
[root@manhprod187 ~]# uname -a
Linux manhprod187 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 13 10:46:25 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@manhprod187 ~]#
[root@manhprod187 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            23G        3.3G        2.0G         10G         18G        9.0G
Swap:          2.0G        1.4G        674M
[root@manhprod187 ~]#
[root@manhprod187 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          23948        3471        2002       11006       18473        9194
Swap:          2063        1389         674
[root@manhprod187 ~]#

Hi,

Taking your questions in turn:

  1. Buffers and cache are essentially types of memory that, while they are in use, could be freed up if the system needed memory for some other purpose. They mostly hold data that either is waiting to be written to disk or has recently been read from disk, in order to speed up file I/O. So if a process ever genuinely needs more memory than is sitting absolutely unused (which is the total in the 'free' column), more can be obtained by flushing some of the buffers and/or cache.

  2. The 'free' column shows you the memory on the system which is genuinely 100% unused - not in use by any process, and not part of the buffers, cache or shared memory pool. The 'available' column consists of the 'free' memory plus whatever memory from the other categories (mainly the buffers and cache) could easily be freed up if required. So yes, you can treat 'available' as a trustworthy figure for the amount of memory that applications could use if they needed to; the sketch after this list shows where both columns come from.
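For what it's worth, free builds those columns from /proc/meminfo, so you can cross-check them yourself. A read-only sketch, assuming a recent procps free and a kernel that reports MemAvailable (both systems above evidently do, since an 'available' column is shown):

# free derives its columns roughly as follows:
#   buff/cache ~= Buffers + Cached + SReclaimable
#   available  ~= MemAvailable (the kernel's own estimate of memory that new
#                 applications could claim without pushing the system into swap)
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SReclaimable|Shmem):' /proc/meminfo
free -m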

Hope this helps.


Thank you, drysdalk.

So, when a new process is spawned or an existing process needs extra memory, will that memory be allocated from the 'free' (100% unused) pool or from the memory currently used for buffers and cache?

Hello,

I would imagine that under normal circumstances the kernel would choose to allocate memory from the free pool first, before flushing out buffers, cache or anything else. It all depends on precisely what kind of memory the process wants to allocate and for what purpose, but in general, memory which is totally unused would be used first; then, when free memory fell below a certain threshold, the system would become more aggressive about freeing up cached memory for use by running processes.

In either case, so long as memory is available, a process will be able to use it, so where exactly it comes from is normally not something you need to worry about. If a process needs memory and it is available by one means or another, the system will allocate it.
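The "certain threshold" mentioned above is not magic, by the way: the kernel derives its reclaim watermarks from the vm.min_free_kbytes tunable. A read-only sketch of where to look (don't change this value casually):

# The kernel begins reclaiming cache (via kswapd in the background, then
# synchronously under heavier pressure) once free memory falls below
# watermarks derived from this tunable:
sysctl vm.min_free_kbytes
cat /proc/sys/vm/min_free_kbytes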


To be clear, buffers certainly count as "free" memory when needed; the kernel is just careful about which memory it uses first.

Your server has much more cache because it is doing much more work. Anything that uses the disk tends to transform free memory into cache, just in case it needs to access that same spot of disk again.
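You can watch that transformation happen. A sketch, where /path/to/some/large.file is just a placeholder for any big file you can read:

free -m                                              # note 'free' and 'buff/cache' before
dd if=/path/to/some/large.file of=/dev/null bs=1M    # read the file once, discarding the data
free -m                                              # 'buff/cache' grows by roughly the file size,
                                                     # 'free' shrinks, 'available' barely moves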

In principle: yes. You can change this behaviour to some extent by setting kernel tuning parameters, but for most purposes the default behaviour works well enough that this is not necessary. Here, for instance, is a document describing some of the tuning possibilities:

http://docs.gluster.org/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/
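To give a feel for what such documents talk about, here is a read-only sketch of a few of the usual vm tunables; the values on your systems are site-specific and nothing here is a recommendation:

# How willing the kernel is to swap out process memory versus dropping cache
sysctl vm.swappiness
# How aggressively the kernel reclaims the dentry/inode caches
sysctl vm.vfs_cache_pressure
# How much dirty (not-yet-written) data may accumulate before writeback kicks in
sysctl vm.dirty_background_ratio vm.dirty_ratio
# A change would look like this -- but heed the warning below before touching anything:
# sysctl -w vm.vfs_cache_pressure=50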

For a very short introduction to what performance tuning is about, you might also read this introduction I once wrote.

Before you try that or anything else in this regard: be aware that the results can be drastic! Experiment a lot (on a test system, of course), but treat it like open-heart surgery: it can do a great deal of good if you know what you are doing, but it can have catastrophic results when you don't. Don't be shy, but be careful.

I hope this helps.

bakunin


Let's say we have two machines with a simplified memory status:

10 GB RAM, 5 GB buffers & cache, 5 GB completely free.
10 GB RAM, 1 GB buffers & cache (let's say you enforce that limit), 9 GB completely free.

A 64-bit program requests 8 GB of RAM.
How much of a latency difference is there between the two scenarios when the request is made?

Can someone with a kernel programmer's perspective say whether memory fragmentation is an issue when dealing with sizes this large?
How much time is lost traversing all those structures, deciding what to return, and so on?

Thank you!

Regards
Peasant.

Good question.
kernel.org developers claim that freeing buffers and cache is fast, but under some workloads this is not true.
The problem is not memory fragmentation; rather, concurrent I/O that keeps refilling the cache and buffers significantly slows down the clearing.
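If you want a feel for this yourself, reclaim can be forced and timed by hand. A sketch for a test system only (it needs root and throws away useful cache, so never do this casually on production, in the spirit of bakunin's warning above):

sync                                             # flush dirty pages to disk first
time sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # drop page cache, dentries and inodes
free -m                                          # buff/cache should now be much smaller
# Repeat while something is hammering the disk and the timing difference shows up.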

No difference. The kernel won't stop everything and deliver 8 GB the moment you ask; it just notes that you are owed that much, and assigns the memory to you piecemeal as you actually start using it.
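One quick way to see the distinction between "owed" and "actually assigned" memory, using the current shell as an arbitrary example process (the numbers will of course vary):

# VSZ is the address space the process has asked for; RSS is what is actually
# resident in RAM right now.  VSZ can be far larger than RSS.
ps -o pid,vsz,rss,comm -p $$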

The difference is the speed.
Clearing the 5 GB of buffers & cache takes a second on an idle system; with heavy concurrent I/O it can take a minute.
So in the second scenario, filling the 8 GB of RAM happens faster (or much faster).

There is a difference between requesting memory and actually filling (using) it. Linux serves RAM requests immediately without checking whether the memory is really available - "unlimited overcommitment". For example, a Java application that claims 25 GB of RAM will start, and will crash later when it actually fills that memory with data.

In contrast, Solaris 10 has "limited overcommitment": it only commits up to, say, twice the available RAM, so a Java application that claims 25 GB of RAM will only start when more than 13 GB is available.
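On the Linux side, the overcommit policy described above is tunable. A read-only sketch of where to look (mode 0 is the default heuristic; mode 2 switches to stricter, more Solaris-like accounting against CommitLimit):

# 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict accounting
sysctl vm.overcommit_memory vm.overcommit_ratio
# What has been promised to processes versus what strict accounting would allow:
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo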