Re-start of RHEL server

I have a query: should the RHEL servers in our production environment be restarted, say every 2-3 months, so that the cache is cleared?

I hope my question is clear: should the Red Hat Linux servers be restarted periodically?

Please revert with a reply to my query.

Regards

What cache are you talking about?

Thanks for your answer. I meant the cache which is used by the server.

Unless your server is experiencing behavioural, resource or performance problems which a reboot would remedy, or you are specifically advised by a software vendor whose application is running on the server that a reboot is necessary, I see no reason why you should systematically want to reboot a production server.

Sure, if an opportunity arises during a regular maintenance window when user activity is restricted (and if it will help you sleep at night), go for it.

Caches are generally good things, and so rebooting (in the hope of) clearing them is somewhat futile.

Cache is a very good thing. It improves the performance of the OS. Do not reboot your server to clear cache.

If memory usage reaches 100%, is clearing the cache the best option, or a reboot (in a production environment)?

Just because your memory usage is near 100% does not mean you have a problem with your system. Linux, by design, attempts to reserve a lot of memory for itself in advance of actually needing it. The more crucial parameter is the amount of page swapping that is occurring. A large amount of swapping indicates that you need more physical memory.
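
If you want to see whether swapping is actually happening, something like this will show it (a quick sketch using the standard procps/util-linux tools):

vmstat 5 5       # watch the si/so columns - pages swapped in/out per second
swapon -s        # how much swap space is in use at all

Sustained non-zero si/so values are the real warning sign, not a high "used" figure from free.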

If you really want to free up some memory space, by clearing out free page caches, directory entries and inode caches, try the below:

for i in 1 2 3; do sync; sleep 1; done; sysctl -w vm.drop_caches=3

Do not expect to see huge free memory instantly though.
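
If you do run it, you can sanity-check the effect with free before and after (a rough sketch; the exact numbers will depend entirely on your workload):

free -m                            # note the cached figure
sync; sysctl -w vm.drop_caches=3   # flush dirty pages first, then drop the caches
free -m                            # cached should now be noticeably smaller

The kernel simply starts repopulating the caches as soon as there is I/O again, which is why any gain is short-lived.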

Reboot should not be the first troubleshooting step, especially for Un*x.

If your server's memory is constantly consumed at almost 100%, check free -m and concentrate on the used column of the -/+ buffers/cache: row. This tells you how much memory is used by the applications, apart from the cache and file system buffers.
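
For example, this pulls out just that row (a quick sketch; older RHEL releases of free -m print the -/+ buffers/cache: line, while newer procps-ng versions report the same idea in the "available" column):

free -m | grep -i 'buffers/cache'   # "used" here = memory the applications actually hold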

If that amount is too high, run sar -B and concentrate on the majflt/s field. This tells you how many major page faults (where the kernel does not find a page in RAM and has to read it back in from the swap area on disk) are occurring per second.
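
For instance (assuming the sysstat package is installed, which is what provides sar):

sar -B 5 10                   # sample paging statistics every 5 seconds, 10 times; watch majflt/s
sar -B -f /var/log/sa/saXX    # or read back an earlier day's figures for comparison (XX = day of month)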

Compare that data with data gathered when your system had a reasonable amount of free memory. If the differences are large, check which processes are eating up memory, which you can find by running top and sorting on the RES column.
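
A couple of ways to do that (a sketch; both are standard procps tools):

ps aux --sort=-rss | head     # largest resident-set-size users first
top                           # then press Shift+M to sort interactively by memory (RES)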

Or it means you have a poorly coded application that has a memory leak.

:slight_smile:

If you need to find which processes have the most memory, consider looking at the output from:-

ps -el | sort -n -k10 | tail

This shows the heaviest memory users. The allocated memory size (the SZ column, counted in pages) is in column 10, with the PID & PPID in columns 4 & 5 respectively.

I used this to prove that a new Cobol routine in development (same server as production) was hammering the memory and the code was amended before being released to production.
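
If your ps supports it, the same idea can be expressed without counting columns (a sketch; --sort and the rss/vsz output keywords are GNU/procps ps options):

ps -eo pid,ppid,rss,vsz,comm --sort=-rss | head   # RSS and VSZ in kilobytes, largest first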

You don't really need to worry about real memory being 100% full; that's quite normal even if you can't see what is seemingly using it. It saves the server from performing real I/O to get frequently used files. You need to think about swap more than anything else from a performance view. Memory being swapped to/from disk is costly in processing terms.

You can list swap in use with vmstat. The manual page states that, under the memory section, the heading swpd is "the amount of virtual memory used."
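
For instance (a quick sketch):

vmstat 5 3                  # swpd column = virtual memory used; si/so = pages swapped in/out per second
vmstat -s | grep -i swap    # one-off totals of swap space and swap activity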

I hope that this is helpful.

Robin
Liverpool/Blackburn
UK

No need to reboot for a cache issue. Find the exact problem.
I do reboot when there is an upgrade, or some specific issue that really is best addressed by a reboot.
Some of my servers have been up for 2 yrs :slight_smile: Sorry, I did not patch the OS for various reasons.

There are two kinds of "cache" that can and will consume all your free memory - and it's a good thing.

The first is I/O buffers. This is a cache for moving data around. If you're moving 100 GB from one physical disk to another, that data is going through this I/O buffer space. It allows for more efficient multitasking. The read disk fills the buffer, and the write disk gets its data from the buffer. It's an efficient way of doing things.

The second is file cache. This is a filesystem read cache. Frequently accessed files are stored in this read cache to make accessing them quicker.

Both are tuned and managed automatically by the kernel. There is no user intervention required. There is no need to reboot to "clear" them. If you start up a new program that requires a lot of memory, the kernel will prune the I/O buffer and the file cache automatically to make room for your new program.

You can completely ignore the I/O buffer and file cache for memory usage purposes. Count only the system + user space. Take your "used" total, subtract the file cache and buffer space, and the resulting number should equal system + user.
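
You can check that arithmetic straight from /proc/meminfo (a rough sketch; the field names are the standard kernel ones):

grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
# application usage is roughly MemTotal - MemFree - Buffers - Cached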

You are having the Linux Newbie Memory Freakout.

Cache memory is as good as free memory.

If your system has 99% cache, it has 99% free.

Take a deep breath and relax. Everything is fine.

Memory is under VM management, so it is no longer much of an issue. Sometimes excess paging is, or a lack of swap space. Mostly, memory obsessions are left over from the trauma of primitive mainframe days. So many damaged souls!

Now on a 64-bit CPU you can mmap() a 100 TB file into your memory space and walk around in it like it was memory, which of course it is as you touch pages. In fact, you can seek 100 TB into an empty file and write something to a page, and have a one-page file. As you write the lower pages, the file allocation goes up. Hello, sparse array! The rest of the space reads back as null characters if you read where you have not written.
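
You can see the sparse-file behaviour without writing any code (a small sketch; the filename is just an example):

dd if=/dev/zero of=sparse.dat bs=1 count=1 seek=1073741823   # write 1 byte at the ~1 GB mark
ls -l sparse.dat    # apparent size: about 1 GB
du -h sparse.dat    # blocks actually allocated: a few KB

Reading the unwritten region just returns null bytes, exactly as described above.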

DOS and older Windows were also quite trauma-inducing. 4 megs of RAM of which you get 640K (the rest of it serves mysterious purposes to even more mysterious programs), booting to an empty prompt with 580K left, but your fancy program wants 583. How is 583 even possible!? Time to flail madly and change everything!

Hey, in 1960, 4 k words used to be the size of a small refrigerator, cycled in 4 microseconds. Power supply: external motor-generator. But core held the data through powerdowns, so in place of a reboot, just restart at the right address.

Blimey, we'll be taking this back to a mercury column memory in a minute, then back to man-with-a-flag.

Don't get me started! I only go back to the H800 days: germanium pnp transistors, -5 for a one, pre-printed-circuit, neon light panels, 10 track 555 bpi 3/4 inch 120 IPS tape drives, reels held on by vacuum. There were some odd devices. I thought the CRAM was scary: a huge box of plastic cards with magnetic media attached to a drum at speed and read and written like a disk, then removed at speed and shuttled around to the back of the box. Unisys, I think. About as goofy as the tape drives with no reels, just wide tape and deep columns, with little tape cartridges in a huge, robot random access magazine. The rotating film drum with tall light bulbs and many photocells to say phrases was pretty amusing, until you had to buy a replacement bulb $$: 63 x 3 words if you bought it with 3 read stations, else with just one, 63 words of .5 second (repeated on the film), and 3 words was a phrase, or a revolution every 1.5 seconds. Diners Club bought one that said, among other things, "Hold for Credit Manager". I have a 30 inch diameter, 1/4 inch thick disk from CDC in my basement wanting to be a small coffee table. They fit 10 above and 10 below the motors at each end of a huge cabinet to make 300M x 6 bit characters of storage in 1970. We buffed out disk head crashes with tube gauze and alcohol. How cheap is disk this week?