High memory auto-recovery

On a CentOS 7 server running MySQL and MongoDB, there is a high memory usage issue.

Normally we restart MongoDB when a high memory alert comes in.

Is there a command or shell script to identify when memory usage is more than 90% and automatically restart MongoDB?

Hello,

This is something you could probably script, yes. But personally, this isn't the way I would go. Yes, when you get a high memory alert you can make the alert clear by restarting the process that is consuming the most memory. If these alerts were a one-off thing or something that only happened very rarely, that would be a reasonable approach. But if this happens regularly, which it sounds like it does, then restarting things isn't really fixing the problem, if you see what I mean - that's just making the problem go away until the next time it comes back.

A better bet in situations like this is to figure out what causes the regular excessive memory usage, and do something to correct it. What is happening in MongoDB that is causing higher-than-normal memory usage? A particular query, or an issue with a particular set of data? Is a particular client making excessive use of the database at times? Or is the memory usage actually entirely normal, and the system just needs more memory installed? There are many possible answers, but these are a few ideas for starters.

Either way, you'd be better off looking at why this happens and fixing the problem. That may mean correcting any issues you find on the client or server side, or adding more resources to the server if it genuinely needs more RAM.

Hope this helps!


Also make sure you have some swap space.

  1. It will keep your system manageable if there is a real problem.

  2. It will satisfy your "virtual memory" monitoring because there is some swap in reserve. It won't help if you have separate RAM and swap monitoring; in that case, consider better monitoring.
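If you are not sure whether the box has any swap at all, a quick check looks like this, and the lines after it are a rough example of adding a 2 GiB swap file (the size and path are purely illustrative):

    # check existing swap
    swapon -s
    free -h

    # example only: create and enable a 2 GiB swap file
    dd if=/dev/zero of=/swapfile bs=1M count=2048
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile

Remember to add a matching line to /etc/fstab if you want it to survive a reboot.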

Yes, you can write a shell script to monitor memory usage and restart MongoDB when it exceeds a certain threshold. You can use the 'free' command in Linux to get the memory usage, and 'systemctl restart mongod' (or 'service mongodb restart', depending on your setup) to restart MongoDB.
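For what it's worth, a minimal sketch of that kind of script might look something like this (it assumes CentOS 7 with systemd and a unit named mongod - adjust the unit name and threshold to your install, and keep in mind the caveats raised elsewhere in this thread):

    #!/bin/bash
    # Sketch only: restart mongod if used memory exceeds a threshold.
    # Assumes the systemd unit is called "mongod"; it may be "mongodb" on some installs.
    THRESHOLD=90

    # In the "Mem:" line of free -m, column 2 is total and column 3 is used.
    USED_PCT=$(free -m | awk '/^Mem:/ { printf "%d", $3 / $2 * 100 }')

    if [ "$USED_PCT" -ge "$THRESHOLD" ]; then
        logger "memory usage at ${USED_PCT}%, restarting mongod"
        systemctl restart mongod
    fi

You would typically run something like this from cron every few minutes, but as others have said below, treat it as a stopgap rather than a fix.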

Also consider enabling slow query logging ("log long queries" - sorry, I'm at the gym on mobile, so I do not have the exact config syntax).

You can log these memory hogs and, in many cases, isolate the query, as @drysdalk suggested.

Yes, but as @drysdalk advised, it is better to find the root cause and fix the issue rather than applying this kind of band-aid restart.

Yeah, I agree. This can help prevent the issue from happening again and also help you understand what caused it in the first place.

Creating scripts to restart processes with these kinds of problems, without spending time to determine the root cause of the problem, is very poor system administration.

Now that I'm back at my desk:

MySQL

SET GLOBAL slow_query_log = 'ON';
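For MongoDB, the rough equivalent is the slow operation threshold - a sketch only, since exact option names can vary between versions - either in mongod.conf:

    operationProfiling:
      slowOpThresholdMs: 100

or at runtime from the mongo shell (the second argument is the slow-operation threshold in milliseconds; profiling level 0 still logs slow operations):

    db.setProfilingLevel(0, 100)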

See the MySQL slow query log documentation for other configuration options.

You can also quickly check the myriad docs on the net for other ways to track down memory issues in MySQL and MongoDB.

I do not recommend jumping straight to writing simple scripts to band-aid a potentially serious DB issue by killing and restarting the DB unless you have exhausted all other means of tracking down the root cause of the problem.

Thanks for the reply.

This is a replica DB server, and the load is due to the database backup. I am considering that if memory usage goes above 90%, the script will restart MongoDB.

Normally it happens 1-2 times a day.

So, that means if you are in the middle of a backup and the memory stat you are interested in goes to 90%, you will kill the backup while it is in progress??


So how about either moving the backup schedule to quiet hours or try the really obvious thing and increase the RAM in the box to see if the problem goes away. It seems that if the backup executing causes the problem then increasing the RAM might well fix it. IMHO, the first thing to try.
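Moving the backup to quiet hours is usually just a one-line change to the cron entry, for example (the script path here is made up - use whatever actually runs your backup):

    # run the backup at 03:30, when traffic is lowest
    30 3 * * * /usr/local/bin/mongodb_backup.sh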


What @Neo mentions here is the main danger with scripting service or application restarts: unless you incorporate intelligence into them to detect the dates, times and conditions when a restart should absolutely never take place (such as during mission-critical operations like backups), they can end up causing more problems than they solve. A server running low on RAM is a far less dangerous thing to have in the long term than a server that never completes a full backup.
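For example, if you did go down the scripted-restart route, you would at least want a guard like this at the top of the script (mongodump is only an assumption about how the backup is taken; match it to your actual backup process):

    # bail out if a backup appears to be running
    if pgrep -f mongodump > /dev/null; then
        logger "backup in progress, skipping mongod restart"
        exit 0
    fi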


When you say you get a 90% memory utilization alert - does anything stop working during that time, or not?
Are you monitoring free memory or available memory?

When a backup is made, files are read from the filesystem and cached by the system, so the filesystem cache can occupy that space - but those pages will be given back to other processes if they ask for them.
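You can see the difference at a glance with free: the "available" column already accounts for cache the kernel will give back, so it is usually the better number to alert on than "free":

    free -h
    grep MemAvailable /proc/meminfo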

Consider also limiting your processes (mongodb, mysql or the backup process) via a cgroups drop-in file or slice.
This way you cage the process: it will be killed automatically by the OOM killer if it exceeds the memory limit applied to it, and/or you can throttle its CPU to, e.g., 50% of one core or the like.
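A sketch of that idea as a systemd drop-in for the mongod unit (the unit name and the limits are illustrative and need sizing for your box):

    mkdir -p /etc/systemd/system/mongod.service.d
    cat > /etc/systemd/system/mongod.service.d/limits.conf <<'EOF'
    [Service]
    MemoryLimit=4G
    CPUQuota=50%
    Restart=on-failure
    EOF
    systemctl daemon-reload
    systemctl restart mongod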

Systemd (or the Docker daemon, for a container) will then attempt to start the service or container again automatically after such a kill (this depends somewhat on configuration, but in general it should work like that).
But yes, the root cause is what matters here, not restarting things just because "an alert came" without other symptoms such as swapping (causing slowdowns) or the system not performing well in general.

Regards
Peasant.

This is the significant point asked by @Peasant

Why are you concerned about memory usage if nothing stops working?

Free memory is wasted memory. The kernel manages memory according to system load. Yes, other apps might slow down due to a constraint on available CPU cycles, and if that hurts, don't run the DB backup at that time.

High memory usage is expected if the O/S is doing its job properly.


Actually, with Linux-based systems, the kernel is designed to gobble up all the memory available, so it's not unusual for Linux-based systems to show very little "free" memory but everything is running perfectly.

Honestly, I find this topic a bit disappointing.

Managing memory usage in both MySQL and MongoDB is done using DB configuration parameters which manage the in-memory caches of queries, tables, and more. These are all tunable parameters; see the MySQL and MongoDB memory configuration docs for the details.

So, when MySQL and MongoDB are having performance issues, we work to tune the DB parameters, NOT create scripts which kill and restart an improperly tuned DB.
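As a concrete illustration (the values below are placeholders, not recommendations), the two settings people usually look at first are the InnoDB buffer pool for MySQL and the WiredTiger cache for MongoDB:

    # my.cnf
    [mysqld]
    innodb_buffer_pool_size = 2G

    # mongod.conf
    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 2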

This is why I'm disappointed in this discussion. We are so far off the topic of what any novice DB admin would be doing when there is a DB memory issue.

We don't write scripts to kill (and restart) DB processes when memory usage is out of whack; we adjust and tune the myriad parameters available to the DB admin, designed for this exact purpose.

Please read the following. This is where you, @kaushik02018, should be focused, not on writing band-aid scripts in lieu of tuning your DB memory usage properly.

https://dev.mysql.com/doc/refman/8.0/en/memory-use.html

Thanks

