Memory leak

Hi all,
I am using Red Hat 5.4 and I am having a memory usage problem. Memory usage sits around 95%; when I run "sync; echo 3 > /proc/sys/vm/drop_caches" the memory is released, but after 2-3 hours it is back at 95%.
Please suggest the right way to handle this.
Thanks

Linux uses every byte of memory available for buffers and caches.
95% usage is good, actually. I bet most of it is used in the FS cache.
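
To see how much of that is actually reclaimable, free is the quickest check; on RHEL 5 the "-/+ buffers/cache" line is the one that matters, since buffers and cache are handed back whenever a program asks for memory:

# free -m

The "used" figure on that second line is what applications really hold; the rest is cache the kernel will drop on demand.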

But I have 11 more Red Hat boxes and they do not use this much memory.
When this one is at 95% memory use it becomes slow.

Please post exactly what the problem is, not just "memory leak". Please post what numbers you're getting from what. cat /proc/meminfo would be a good start.

If it is a memory leak and not the usual "oh my god linux uses free memory for cache" freakout, you'll need to track down what's leaking memory, it's probably not "linux" that's leaking. ps aux | less would be a start, it'll show you a list of processes and how much memory they're using.
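
If plain ps aux is too much to scan, sorting by resident memory should make any leaker obvious (RSS is in kB; this is just a convenience on top of the same ps output):

# ps aux --sort=-rss | head -15

Whatever keeps growing at the top of that list over a few hours is your suspect.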

I used ps aux | less but it shows zero for all processes. And see the meminfo below:

# cat /proc/meminfo
MemTotal:      7805952 kB
MemFree:        127120 kB
Buffers:        300884 kB
Cached:        6484184 kB
SwapCached:          0 kB
Active:        6390728 kB
Inactive:       645928 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      7805952 kB
LowFree:        127120 kB
SwapTotal:    16892336 kB
SwapFree:     16892152 kB
Dirty:             132 kB
Writeback:          28 kB
AnonPages:      251588 kB
Mapped:          50312 kB
Slab:           356272 kB
PageTables:      14256 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  20795312 kB
Committed_AS:  1069076 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     12008 kB
VmallocChunk: 34359726335 kB

Check the free memory; it is almost all in use.

6 GB of the total 8 GB is used for cache. Looks like memory is not a problem here.
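
Doing the arithmetic from the meminfo you posted (values in kB): MemFree + Buffers + Cached is free or easily reclaimable, and AnonPages is roughly what programs actually hold.

# echo $(( 127120 + 300884 + 6484184 )) kB reclaimable of 7805952 kB total

That comes to roughly 6.9 GB reclaimable out of the ~7.8 GB total, with only about 250 MB (AnonPages) pinned by applications. Nothing is leaking.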

Could you post it anyway? Also try scrolling to the right to see if there's anything off the screen.

And see the meminfo below:

# cat /proc/meminfo
MemTotal:      7805952 kB
MemFree:        127120 kB
Buffers:        300884 kB
Cached:        6484184 kB

This is the usual "oh my god linux uses free memory for cache" freakout. Cached memory counts as free memory. All that messing you're doing with VM modes is only hurting the system; it may actually be causing the problem, and you should undo it. The problem isn't related to low memory.
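
If you want to be sure nothing got made permanent, it's worth checking whether any vm.* settings or a drop_caches line ended up in sysctl.conf or a cron job. These are just the usual places to look on a stock RHEL box; adjust the paths if your setup differs:

# grep -n 'vm\.' /etc/sysctl.conf
# grep -rn 'drop_caches' /etc/cron* /var/spool/cron 2>/dev/null

If those come back empty, the echo you ran was a one-off and there is nothing to undo beyond leaving it alone.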

I am new to Linux, so please explain what this is. I have other boxes but they do not behave like this. Please tell me what to do next...

Cache is cache. Whenever Linux uses the disk it stuffs the disk contents into memory in case it needs them again, and it gives that memory up as readily as free memory.

As for what's going wrong, tell us exactly what problem you're having with the system.

The exact problem is: I have 12 Red Hat servers. All of them use about 20% of memory except this one, which uses more than 95%. It also runs slowly compared to the other servers. It always sits at 95-96% memory usage. When I use the command "sync ; echo 3 > /proc/sys/vm/drop_caches" it releases the memory, and after 2-3 hours it is full again. Why does this happen?
Please reply.

We've already been over this. You just had the usual "oh my god linux uses free memory as cache" newbie freakout. Lots of memory being cached is completely normal, even expected behavior. If none of your other servers have any memory used as cache, they are the ones that have something wrong with them. Cache is good. Cache makes your system faster. Cache is also still "free" memory that your server will give up and hand over to programs should they ask for it.

Stop freaking out. Stop messing around with your VM modes -- disabling cache hurts your systems -- and undo the changes you did. (If you don't know how, just reboot your system.)

This problem is completely unrelated to memory. It has 6 gigabytes of free memory. You don't have to keep prodding at VM modes. The memory's fine, or was until you started messing with it.

What, exactly, is slow? Forget the memory already and tell us what performance problems you're having. What's your load average?

The load average is:

#uptime
 22:44:16 up 21 days, 17:04,  1 user,  load average: 1.05, 1.08, 1.03

Something's occupying one core 100% of the time. Run top to see what it is.
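
Inside top, pressing 1 shows per-CPU usage and P sorts by CPU. If you'd rather have a one-shot list, something along these lines should show the top CPU consumers (plain ps options, nothing exotic):

# ps -eo pid,stat,pcpu,pmem,comm --sort=-pcpu | head -15

Whatever is pinning a core should float to the top of that list.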

Also:

top - 22:58:11 up 21 days, 17:18,  1 user,  load average: 1.00, 1.00, 1.00
Tasks: 246 total,   2 running, 244 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   7805952k total,  7682840k used,   123112k free,   301988k buffers
Swap: 16892336k total,      184k used, 16892152k free,  6485460k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      15   0 10348  692  584 S  0.0  0.0   0:01.61 init
    2 root      RT  -5     0    0    0 S  0.0  0.0   0:01.02 migration/0
    3 root      34  19     0    0    0 S  0.0  0.0   0:00.14 ksoftirqd/0
    4 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/0
    5 root      RT  -5     0    0    0 S  0.0  0.0   0:00.81 migration/1
    6 root      34  19     0    0    0 S  0.0  0.0   0:00.05 ksoftirqd/1
    7 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/1
    8 root      RT  -5     0    0    0 S  0.0  0.0   0:01.03 migration/2
    9 root      34  19     0    0    0 S  0.0  0.0   0:00.05 ksoftirqd/2
   10 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/2
   11 root      RT  -5     0    0    0 S  0.0  0.0   0:01.02 migration/3
   12 root      34  19     0    0    0 S  0.0  0.0   0:00.05 ksoftirqd/3
   13 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/3
   14 root      RT  -5     0    0    0 S  0.0  0.0   0:01.06 migration/4
   15 root      34  19     0    0    0 S  0.0  0.0   0:00.01 ksoftirqd/4
   16 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/4
   17 root      RT  -5     0    0    0 S  0.0  0.0   0:00.74 migration/5
   18 root      34  19     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/5
   19 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/5
   20 root      RT  -5     0    0    0 S  0.0  0.0   0:00.62 migration/6
   21 root      34  19     0    0    0 S  0.0  0.0   0:00.01 ksoftirqd/6
   22 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/6
   23 root      RT  -5     0    0    0 S  0.0  0.0   0:00.94 migration/7
   24 root      34  19     0    0    0 S  0.0  0.0   0:00.11 ksoftirqd/7
   25 root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 watchdog/7
   26 root      10  -5     0    0    0 S  0.0  0.0   0:00.01 events/0
   27 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/1
   28 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/2
   29 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/3
   30 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/4
   31 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/5
   32 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/6
   33 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 events/7
   34 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 khelper
   35 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 kthread
   37 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 xenwatch
   38 root      10  -5     0    0    0 S  0.0  0.0   0:00.00 xenbus

Weird. Your system is 100% idle. That should be a load average of 0, not 1...

I don't suppose you rebooted your system or restarted any services? Is the problem still happening?
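
One possibility worth ruling out: the load average counts processes in uninterruptible sleep (state D, usually stuck waiting on disk or NFS) as well as running ones, which would explain a load of 1 on a CPU that is 100% idle. Something like this should show any such processes, if they exist:

# ps -eo pid,stat,wchan:30,comm | awk '$2 ~ /^D/'

If one process shows up there permanently, that is the thing to chase, not the cache.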

I have not rebooted my system or restarted any services.
If I use the command "sync ; echo 3 > /proc/sys/vm/drop_caches" then it frees the memory; after 2-3 hours it is fully used again.

Is the problem continuing to happen?

You are NOT running out of memory already! The high cache isn't causing the problem! Stop messing with your VM modes!

So sir,
where is the problem? Is everything fine?

I'm not saying everything is fine. I'm saying that whatever it is, it's not memory. Something may be causing a lot of disk access, which may create a lot of cache entries, but that's a symptom and not the cause; cache doesn't actually hurt you.

I've asked several times now. What exactly is your server serving?

The high load average is odd, especially when there's no obvious reason for it. Keep checking top once in a while, see if anything appears.
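
Since heavy disk access is the likely suspect, watching I/O for a bit alongside top should help. vmstat is part of the base install; iostat needs the sysstat package, so skip it if that isn't installed:

# vmstat 5 5
# iostat -x 5 3

Look at the wa column in vmstat and the %util column in iostat; if either is consistently high while the CPU is idle, something is grinding the disk, and that is what keeps filling the cache.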

It is an HTTP server, but currently it is only a standalone server...