%memused is high

All

I see CPU and memory usage are under control; only the cache gets high, up to 99%, according to the sar -r command

I did echo 3 and dropped the caches

Do we need to monitor this? Why is my application going up to 99.8% memused in the sar output? It is the cache memory that is going high

Will this cause any performance issues, like slowness?

I understand the kernel should manage this, so do I need to clean it up manually?

Please suggest
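For reference, this is a sketch of the usual way to inspect the cache figures and, if you really want to, drop them manually (the drop_caches step needs root and is normally unnecessary, since the kernel reclaims cache on demand):

```shell
# Current cache figures, no root needed:
grep -E '^(MemTotal|MemFree|Cached):' /proc/meminfo

# Manual cache drop (root required; normally unnecessary -- the kernel
# reclaims cached pages automatically when applications need memory):
#   sync
#   echo 3 > /proc/sys/vm/drop_caches

# Then re-check with sar, one sample per second, five samples:
#   sar -r 1 5
```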

No, on the contrary: using your RAM as cache is expected to improve your system's performance.

No. Unused RAM is wasted RAM.
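One way to see this for yourself: MemAvailable already accounts for reclaimable cache, so it is a better "can my applications get memory?" figure than MemFree. A quick sketch:

```shell
# Compare MemFree with MemAvailable; the latter includes cache the
# kernel can hand back to applications on demand:
awk '/^MemFree:|^MemAvailable:|^Cached:/ {printf "%-14s %d MiB\n", $1, $2/1024}' /proc/meminfo
```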


Thank you jlliagre

What I noticed could be a coincidence:
The app was responding far too slowly
CPU and memory were fine
Only the cache was close to the RAM size; in terms of %memused it was 99.8%

So I dropped the caches, and then users were able to log in to the app and work

This is happening once a week

Any suggestions on what to look at, other than the cache, for the slowness?
Is 99.8% cache usage OK?

You shouldn't focus on a single metric and assume it is the key to performance. Better to provide everything useful for us to figure out what is going on with your system (hardware description, sizing, application used, full statistics output). In particular, leaving out the fact that users were unable to log in didn't help in your first post.

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz
Stepping:              4
CPU MHz:               3499.810
BogoMIPS:              6999.99
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0-3

# getconf PAGESIZE
4096


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.85    0.05    0.51    0.07    0.00   96.53

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdc               5.00       200.43        24.87   98583358   12234372
sda               0.47         1.97        26.50     970878   13033637
sdb               0.77        23.65        61.56   11632840   30280515
dm-0              0.08         1.41         0.15     692727      75933
dm-1              0.09         0.10         0.25      50168     122304
dm-2              5.84       224.08        86.44  110215182   42514888
dm-3              0.02         0.24         0.04     119390      21436
dm-4              0.09         0.05         0.47      26330     231148
dm-5              0.01         0.05         0.02      26430       9073
dm-6              0.00         0.01         0.00       4077       2060
dm-7              0.12         0.04         0.46      17428     225745
dm-8              0.17         0.01        24.98       3242   12288489
dm-9              0.02         0.01         0.07       5800      35801

The application is used for data entry, tracking, and graphical representation

First off, you are running under a hypervisor, VMware. No problem, but I do not understand the CPU queue length on a system that is doing almost nothing.

If your output ran from the base system, then nothing is going on system-wide.
Is this the base or one of the virtuals?

I am not well-versed with VMware, but there is a toolset, I believe, that will show what is going on on all of the virtuals and the base system as well.


Thank you very much again

Yes, I am on VMware; I don't have access to the base system

I would focus more on the swap/page rates. If you are swapping/paging because you have exhausted real memory, then you will start to feel the performance cost of swapping/paging. What output do you get from vmstat? You might try it with time & count parameters, such as vmstat 10 5, giving you ten-second intervals for a count of five, although the first sample usually reports averages since last boot.

The columns you are looking for are under the swap heading, probably the si & so sub-headings, although the columns are usually skewed.

  • Swap in (si) is recalling from disk memory that was still needed, but least active.
  • Swap out (so) is writing to disk memory that is still needed, but least active.
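The check above can be sketched like this (the vmstat column layout varies slightly between versions, so the cumulative kernel counters are a useful cross-check):

```shell
# Five samples at ten-second intervals; the first line shows averages
# since boot, so judge paging from the later samples:
#   vmstat 10 5

# Cross-check: cumulative swap-in/swap-out page counts from the kernel:
grep -E '^pswp(in|out) ' /proc/vmstat
```

Sustained non-zero si/so values (or steadily climbing pswpin/pswpout counters) are the sign of real memory pressure.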

Does this reveal anything?

You don't say what the services are that are degraded. If you have a database, that will have a configuration file where you can adjust various parameters, including memory allocations. If set too low, these can cause performance problems within the database. If set too high, they can cause problems for the OS. Most people assume that larger is better, but it has to be within the confines of the server you have. One item in particular is often referred to as resident or pinned memory which cannot be swapped. This is for the performance of the database but if you set it too high there may be insufficient left for the OS to perform other normal work, which can leave your database degraded too, depending on what is happening.

If you are worrying about the VMware host, have you over-provisioned the memory of your guests? (If that is even possible.) It's much the same consideration as for a server with a database on it.

I hope that this gives you something to work with.
Robin


I hear this over and over again from the Linux community: always use 100%, usage under 100% is wasted RAM, blah blah.

Unix Vendors do not think so.
For example, the HP-UX buffer cache is comparable. The default is 10% of RAM minimum and 50% maximum. There is a whitepaper that recommends tuning the maximum to 70% or 80%, but it warns not to go over 90% because the system would respond more slowly to memory requests from applications.


I suppose it comes down to whether memory is cleared when the particular process that owned it terminates, or whether the OS keeps it in memory in case it is needed again soon.

Additionally, some OSes allow you to pre-read large data files with something like dd if=/path/to/bigfile of=/dev/null to cache the file for later access. This uses memory too but makes subsequent calls to the file faster, particularly for random access files such as Cobol data files or large CSVs where you are pulling out a specific record.

I'm aware that some OSes allow tuning to keep the memory empty, but I always leave it to be full, concentrating on swap activity as that is such a performance overhead.

Just my thoughts. Have I got it all wrong?
Robin


Thank You Robin

Actually, no: swapping is not happening

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0  81776 182316      8 12247504    0    0    58    34    2   16  4  1 95  0  0
 0  0  81776 181000      8 12248884    0    0     0   144 1481 3326  7  1 93  0  0

I checked my database config file; it seems the buffers were set to the minimum range:

dynamic_shared_memory_type = posix
shared_buffers = 128MB
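For scale: 128MB is the stock PostgreSQL default and is usually far too small for a dedicated database server. A common starting point is roughly 25% of RAM for shared_buffers. The values below are purely illustrative (they assume roughly 12 GB of RAM, in line with the vmstat figures earlier in the thread; tune for your actual workload):

```ini
# postgresql.conf sketch -- illustrative values, not a recommendation
shared_buffers = 3GB            # rule of thumb: ~25% of RAM
effective_cache_size = 9GB      # planner hint about OS cache; allocates nothing
```

Note that effective_cache_size only tells the query planner how much OS cache to expect; it does not reserve memory.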

Hold on! If your application is a database, then it is usually better to give most memory directly to the database (how this is done depends on the database used: in Oracle, for instance, this is called the "SGA").

The reason DB software makes better use of the memory than the OS does is that it can load bigger parts of the DB into memory, so they can be accessed far faster than from disk.

DBs commonly use very specialised ways of accessing their files which circumvent the OS's caching completely anyway (so-called "direct I/O", "concurrent I/O", etc.), so a reduction of system cache memory won't hurt the DB at all. If you tune a system for a DB as the main application, then, as a rule of thumb, you give so much memory directly to the DB that the system just doesn't begin to swap, regardless of how small the file cache ends up.
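You can see the difference between cached and direct I/O with a small dd sketch (the O_DIRECT variant needs a filesystem that supports it; tmpfs, for example, does not, which is why it is left commented out):

```shell
# Write 32 MiB through the page cache; "Cached:" in /proc/meminfo grows:
dd if=/dev/zero of=/tmp/cache_demo.bin bs=1M count=32 status=none

# The same write bypassing the page cache entirely (O_DIRECT), the way
# many databases access their files:
#   dd if=/dev/zero of=/tmp/cache_demo.bin bs=1M count=32 oflag=direct status=none

# Confirm the file size (32 MiB = 33554432 bytes):
wc -c < /tmp/cache_demo.bin
```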

I hope this helps.

bakunin


This question is asked in the Red Hat forum so there is no doubt the OP is running Linux. The Linux kernel is designed to use all otherwise free RAM as cache with no penalties.

Note that under Unix and Linux you can't really use 100%: the OS tries hard to make sure minfree is left (min_free_kbytes on Linux), although minfree/min_free_kbytes is normally very small compared to the RAM size.
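You can check that reserve on any Linux box; it is the kernel's emergency pool and is tiny compared with total RAM:

```shell
# The kernel keeps at least this much memory free (in kB) as a reserve
# for critical allocations; compare it with MemTotal:
cat /proc/sys/vm/min_free_kbytes
grep '^MemTotal:' /proc/meminfo
```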

HP-UX might still have a problem freeing buffer cache memory, but that's a design issue that should be fixed if it hasn't been already. Another System V implementation, Solaris, fixed it 17 years ago. On Solaris, cache memory is reported as free memory and is freed almost instantly. See Understanding Memory Allocation and File System Caching in OpenSolaris (Richard McDougall's weblog).

On the other hand, RAM allocated in kernel buffers, regardless of the OS, is much more difficult to retrieve for applications, so tuning can be useful here, for example when ZFS is used.

Back to the OP's issue: he is running in a virtualized environment and has no access to the hypervisor statistics. The hypervisor might well lie to the kernel about the resources actually available, so anything is possible.
