RAM always 100% used

Dear All,

One of my Linux servers, a production machine, always shows RAM fully used. Even though swap space is available, the system is extremely slow.

I have even cleared the cache memory, but the RAM usage still has not gone down.

Kindly let me know if there are any solutions to bring down the RAM usage.

[root@sun1 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:            11         11          0          0          0          0
-/+ buffers/cache:         11          0
Swap:           18          2         16
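For reference, the row that matters in that output is "-/+ buffers/cache". A small awk sketch over the numbers above (embedded here so the parsing is reproducible; on a live box you would pipe free -g in instead) shows how to read it:

```shell
# Read the "-/+ buffers/cache" row from the free output above: column 3
# is memory genuinely held by applications (cache excluded), column 4 is
# what the kernel could reclaim on demand. In this sample, applications
# really do hold all 11 GB, so it is not just cache.
free_output='             total       used       free     shared    buffers     cached
Mem:            11         11          0          0          0          0
-/+ buffers/cache:         11          0
Swap:           18          2         16'
app_used=$(printf '%s\n' "$free_output" | awk '/buffers\/cache/ {print $3}')
echo "memory held by applications: ${app_used} GB"
```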

Thanks and Regards
Rj

Please post some basic information:

1) What Operating System and version are you running?

2) What is the specification of the hardware:
Make, model, RAM fitted, number of CPUs?

3) How much disc space is allocated to swap?
... and how much is used?

4) Have you changed any kernel parameters? If so, what were the old and new values and the reasoning behind the change?

5) What database software and version are you running?
Have you changed any database startup parameters? If so, what were the old and new values?
(A common problem is misunderstanding the units of database parameters and accidentally allocating more memory than you have fitted).

6) How many clients? ... and how do they connect to this server?

1) RHEL release 5.5 (Tikanga)

2) Linux sun1 2.6.18-194.el5 #1 SMP Mon Mar 29 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
8 processors, each with 4 cores.

3) 18 GB is allocated to swap; 4 GB is used and 14 GB is available.

4) No kernel parameters changed.

5) Oracle 11g is installed, along with other data warehousing software (OBIEE 11g).

6) There are no system clients for this server; only application server clients, 50 at most.

Please guide me.

Thanks and Regards
Rj

The Linux kernel is optimized to use as much RAM as possible. This is normal system behavior.

4 GB is not much RAM in a modern-day system. Our web server uses 32 GB. RAM is cheap; buy more. Swapping to disk is slow, so don't design a server to rely on swap unless you can accept the performance hit.

Get more than 4 GB of RAM and the problem is solved.

Mem:  33011004k total, 28995776k used,  4015228k free,   348328k buffers
Swap: 19530748k total,    14212k used, 19516536k free, 24653588k cached

Yeah, but it doesn't show any cache/buffers there; his system has 11 GB shown in free. If it is not expected to be using all of this RAM, I'd have a look at top to make sure things are as expected. You cannot blame swapping for the slowness yet, though. Run vmstat 10 and it will update every 10 seconds, showing swap-ins and swap-outs; paste a few lines of that.
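To illustrate what to look for in vmstat output, the si/so columns are swap-ins and swap-outs. The numbers below are made up for illustration (not from the OP's box); on the real server you would run vmstat 10 and watch those two columns:

```shell
# Pull the si (swap-in) and so (swap-out) columns out of vmstat output.
# Sustained nonzero values here are what actually indicate the machine
# is thrashing on swap. Sample data is embedded via a heredoc.
awk 'NR > 2 {print "si=" $7, "so=" $8}' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  1 4194304 102400   1024   8192  512  768  1200   300 1500 2200 40 10 20 30  0
EOF
```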

But how did you "clear the cache"? Having nothing cached means more disk reads for file accesses, which will also slow you down. It may leave less RAM swapped out, but you're really just moving the problem.
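For context, this is how the cache is usually "cleared" on a RHEL 5 box; it is a privileged sketch (requires root), shown only to make the point that it discards reclaimable page cache and cannot free memory an application is actually holding:

```shell
# Flush dirty pages to disk, then ask the kernel to drop clean page
# cache, dentries, and inodes. This only throws away cache that would
# have been reclaimed automatically under memory pressure anyway.
sync
echo 3 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries+inodes, 3=both
```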

Overall, the easiest solution is adding RAM, unless you find that misconfigured or non-essential services are wasting your memory.

I agree with this. RAM is cheap; cheaper than hours of analysis. Heck, I have 4 GB of RAM on my MBA, which I basically only use for web and email, and our basic server is now at 32 GB. So running a server with an Oracle DB on only 4 GB of RAM seems "overly economical" to me. It is cheaper to just put in more RAM than to analyze the issue to death, IMHO.

Dear All,

Thanks for the information you have all provided.

Thanks and Regards
JeganR

As everyone said, if you have Oracle, then 4 GB for the server is not enough. It could have been enough without Oracle, though! Increase memory. :slight_smile:

Isn't that just throwing money at the problem rather than fixing it? What's to say the memory they put in there isn't just going to be eaten up once it's available? That's like saying the solution to a full root filesystem is always more space. If admins don't know enough about the internal architecture of their application, they need to go out and learn it. Once you've reached the point of "there's no more resource optimization left to do, and I know it for sure," then you flat out need more memory. Even then I wouldn't call it a crutch; at that point you're just solving the problem the only way possible.
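Before concluding either way, it's worth seeing which processes actually hold the memory. A one-liner using standard procps options lists the top resident-set consumers:

```shell
# List the ten largest processes by resident set size (RSS, in KB).
# If one Oracle process dwarfs everything else, the SGA/PGA sizing is
# the first thing to check before buying more RAM.
ps -eo pid,rss,comm --sort=-rss | head -n 10
```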

Slightly related example: where I work, disk space for the Domino servers was nearly exhausted, and the admin kept telling everyone we were heading for a cliff if he didn't get more space allocated from the SAN. He brought in a contractor, and it turned out many people had three or four replicas of the same files, including people who hadn't worked there in years. Now the Domino admin knows more about Domino and can contribute value elsewhere, the databases run more efficiently with quicker backup times, and we avoided paying for more disk space than was actually needed. If he had settled for "disk space is cheap", the problem would never have been uncovered.

Point being, you shouldn't knowingly use something as a crutch and then let yourself be surprised by the results down the road.

Sorry to be blunt, but please read the installation instructions from Oracle. You must tune the kernel.
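To make the "tune the kernel" point concrete, these are the kind of shared-memory settings the Oracle installation guide walks through. The values below are placeholders, not recommendations; they must be sized to your RAM and SGA, and applying them requires root:

```shell
# Hypothetical kernel tuning in the spirit of the Oracle 11g install
# guide. Placeholder values; size them for your own RAM and SGA.
sysctl -w kernel.shmmax=8589934592   # largest single shared memory segment, bytes
sysctl -w kernel.shmall=4194304      # total shared memory allowed, in pages
sysctl -w kernel.shmmni=4096         # max number of shared memory segments
# Persist the settings in /etc/sysctl.conf and apply with: sysctl -p
```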

Please also post the contents of the init<sid>.ora file(s) from your system. A common issue is not realising that the units of some memory parameters in this file are database blocks. Thus, if your block size is, say, 8 KB, you can accidentally allocate eight times the memory you have fitted to the Oracle SGA.
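A quick worked example of that units trap, with illustrative numbers (not from the OP's config): a value an admin intends as kilobytes, interpreted as 8 KB blocks, requests eight times the memory.

```shell
# Arithmetic for the blocks-vs-KB confusion described above.
db_block_size=8192          # bytes per database block (8 KB)
db_block_buffers=262144     # admin intends "262144 KB = 256 MB"
actual_bytes=$((db_block_buffers * db_block_size))
echo "intended: $((db_block_buffers / 1024)) MB"     # 256 MB
echo "actual:   $((actual_bytes / 1024 / 1024)) MB"  # 2048 MB, 8x over
```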

Hope this helps.

If the problem is "not enough memory", the solution is "more memory". It's certainly not the answer to every problem, but Oracle isn't small; if they say 4 gigs isn't enough I believe them.

How much does Oracle cost, versus a few more gigs of memory, anyway?

Can I get a reply to what I wrote? What I said in the post you're quoting is that until you know you actually need more memory, you need to do research. It's not exactly professional to say "Out of memory? Better install more memory, then," because not all memory use is justified. I also explicitly said that it may come down to installing more memory, but until you know that, you can't say that.

There could be a memory leak affecting them, there could be something misconfigured in their Oracle install, their specs for this machine could have been way off, etc. For all we know, if the OP slaps more physical memory in there, it will just get eaten up too, and they'll be back at square one, only this time they have to explain to their boss why the fix they offered didn't actually fix the problem.

It's not four gigs; I don't know where that number came from. Their free output shows 11 GB physical and 18 GB of swap. A swap that big probably also points to a technical deficit, BTW.

Oracle databases do tremendous amounts of reads. Linux, in general, caches reads in memory, and unused memory is wasted. The disconnect here is that people believe a system is supposed to behave a certain way, and that may not always be the best case.

Forget how much memory is being used and ask whether performance is impacted. File system cache is dropped when an application needs the memory, but also realize that keeping things in memory translates into faster lookups.
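One way to act on that advice is to compare Cached against MemFree and swap use. A small sketch over sample /proc/meminfo-style fields (made-up numbers; on a live box you would read the real file):

```shell
# Classify memory state from /proc/meminfo-style fields. Large Cached
# with little swap in use usually means the "full" RAM is reclaimable
# cache, not real pressure. Sample numbers are illustrative only.
meminfo='MemTotal:       12288000 kB
MemFree:          204800 kB
Buffers:          102400 kB
Cached:          6144000 kB
SwapTotal:      18874368 kB
SwapFree:       18770000 kB'
cached_kb=$(printf '%s\n' "$meminfo" | awk '/^Cached:/ {print $2}')
free_kb=$(printf '%s\n' "$meminfo" | awk '/^MemFree:/ {print $2}')
echo "readily reclaimable memory: $(((cached_kb + free_kb) / 1024)) MB"
```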