Is echo 1 > /proc/sys/vm/drop_caches a good idea?

Hi folks.
I work with several production servers, and on some of them I have seen the kernel cache using most of the memory.

See this pic:

Do you think running the following is a smart choice? Remember, these are production servers, and it is extremely important that this does not cause any further issues.

sync; echo 1 > /proc/sys/vm/drop_caches
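For reference, and as far as I understand the kernel documentation, the value written selects what gets dropped, and the sync beforehand flushes dirty pages so nothing unwritten is lost:

sync
echo 1 > /proc/sys/vm/drop_caches   # drop the page cache only
echo 2 > /proc/sys/vm/drop_caches   # drop dentries and inodes
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes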

What is the value of (your) /proc/sys/vm/drop_caches now?

Hi Neo, thanks for the quick reply. I'm home now and won't be back at work for another 15 hours, but I will let you know :)


By the way, swappiness is set to 60, if that helps, and this box is a database server (MySQL).
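In case it matters, this is how I checked it (assuming sysctl is available on the box, which it is here):

# cat /proc/sys/vm/swappiness
60
# sysctl vm.swappiness
vm.swappiness = 60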


What if drop_caches were set to "0" instead?

Hi Eric,

Actually, it really does not matter much, frankly speaking. I've experimented with dropping caches in Linux so many times on our production web server (LAMP).

Linux does such a great job at using all available memory, and reclaiming it when needed, so it is better to let Linux manage those things.

When you drop the cache (or caches), you will see the CPU load go up (sometimes way up) because the cache is gone. Available RAM goes up, but that does not help: performance gets worse because the cache is empty and data has to be re-read from disk.

Then, if you keep dropping the caches over time, performance will continue to suffer because you are never taking advantage of the cache.
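A quick way to see the effect yourself, if you want to experiment (the file path below is only a placeholder; use any large file you have, and note the echo needs root):

sync
time cat /path/to/large/file > /dev/null   # first read comes from disk
time cat /path/to/large/file > /dev/null   # second read is served from the page cache, much faster
echo 1 > /proc/sys/vm/drop_caches          # drop the page cache again
time cat /path/to/large/file > /dev/null   # slow again, because the data must be re-read from disk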

Linux tries to use all available RAM, so the caches will fill over time if you don't instruct Linux to drop them, and this is a good thing. You will see free RAM go down, but don't worry: that memory is still available to applications when needed, because applications take higher priority than cache.

You want Linux to use all the RAM. That is a good thing because the kernel is basically using all RAM that is not used by applications (and the OS) for cache. Dropping caches has little positive effect on performance. In fact, it tends to have a negative effect. The reason is that you are not really making more RAM available to the apps, because the apps have already been given the RAM they need. You are simply dropping the cache, which degrades performance.
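You can see this with free -m. The exact layout depends on your procps version (the numbers below are made up purely for illustration); on older versions look at the "-/+ buffers/cache" line, on newer versions at the "available" column:

# free -m
             total       used       free     shared    buffers     cached
Mem:         16040      15620        420         10        250      11800
-/+ buffers/cache:       3570      12470
Swap:         4095          0       4095

Most of the "used" memory here is really cache, and the second line shows that roughly 12 GB is still available to applications.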

Regarding swappiness, the same is basically true. Linux will drop cache pages before swapping, as I recall, but I would need to read up on that again to see exactly how it works.

We have experimented with swappiness, and ours is currently set to:

# cat /proc/sys/vm/swappiness
20

... and FYI:

# cat /proc/sys/vm/drop_caches
0
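If you do decide to tune swappiness, the usual way is through sysctl; the value 20 below is simply what we happen to run, not a recommendation:

# sysctl -w vm.swappiness=20
# echo "vm.swappiness = 20" >> /etc/sysctl.conf    # make it persist across reboots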

Unless you're trying to address a performance issue, I'd agree that there's no need to prevent any cache use.

Look at it this way: cached data will be discarded if anything else needs the RAM, and free RAM is wasted RAM.

Seriously: free RAM is wasted RAM.

Copies of disk data pages in RAM can't be a bad thing, as long as they are not 'dirty' (not yet written back to their page on disk)! I am amazed it took so long. Is there such a thing as too much RAM, other than from a reliability standpoint?

Thanks a lot, guys!