Efficient UNIX memory management for running MapReduce jobs

We are trying to set up a single-node Cloudera Hadoop cluster on a Linux machine with 16 GB of RAM. We are installing CDH version 5.4.2.

After the installation, when we check statistics with the top command, we find that only 1-2 GB is available. When we trigger a sample MapReduce job, no memory is allocated to it, so the job does not run.

Can you please let us know what we should do so that more memory is available?
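
In case it helps, these are the container-memory settings we plan to check (a minimal sketch, assuming the client configuration lives under /etc/hadoop/conf; the path may differ on a Cloudera Manager managed install):

# Memory YARN may hand out on this node, and the per-container maximum
#-> grep -A1 -E 'yarn.nodemanager.resource.memory-mb|yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
# Memory each map/reduce task container asks for
#-> grep -A1 -E 'mapreduce.map.memory.mb|mapreduce.reduce.memory.mb' /etc/hadoop/conf/mapred-site.xml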

Our analysis of the top output:

Cloudera services take 4-5 GB.

MySQL takes 6 GB (the external database that stores the metastore). Other services take 2-3 GB. Together these account for roughly 13 GB out of 16 GB.
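
A rough way to double-check this breakdown is to sum resident memory per user (RSS double-counts shared pages, so treat the totals as approximate):

# Resident memory (RSS) summed per user, in MB, largest first
#-> ps -eo user,rss --no-headers | awk '{sum[$1] += $2} END {for (u in sum) printf "%-12s %6.0f MB\n", u, sum[u]/1024}' | sort -k2 -nr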

Can you run free -m?

PS. "single-node cluster" is a nonsense.

#-> free -m
             total       used       free     shared    buffers     cached
Mem:         15951      15215        736          0        452       7288
-/+ buffers/cache:       7474       8476
Swap:         4027          0       4027
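
If we are reading the "-/+ buffers/cache" line right, about 8.4 GB is actually reclaimable for new processes, since buffers and page cache are released under memory pressure. A quick way to compute that figure directly (this kernel's /proc/meminfo has no MemAvailable field, so we sum the components ourselves):

# Free memory plus reclaimable buffers/cache, reported in MB
#-> awk '/^MemFree:|^Buffers:|^Cached:/ {sum += $2} END {printf "%d MB effectively available\n", sum/1024}' /proc/meminfo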

The top command output is below:

#-> top
top - 02:07:52 up 1 day, 12:05,  2 users,  load average: 0.00, 0.00, 0.00
Tasks: 210 total,   1 running, 209 sleeping,   0 stopped,   0 zombie
Cpu(s): 11.4%us,  2.9%sy,  0.0%ni, 85.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16334648k total, 15580888k used,   753760k free,   463220k buffers
Swap:  4124664k total,        0k used,  4124664k free,  7462940k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
16435 root      20   0 2618m  54m 4816 S 31.9  0.3  25:35.09 python
 8578 root       0 -20  102m  10m 2020 S  4.3  0.1  20:01.54 perfd
13209 cloudera  20   0 2912m 604m  83m S  4.3  3.8  53:26.85 java
13289 cloudera  20   0 2699m 398m  19m S  3.3  2.5  22:47.03 java
13307 cloudera  20   0 2724m 460m  34m S  2.7  2.9  14:55.97 java
16014 yarn      20   0 1095m 255m  19m S  2.3  1.6  18:11.50 java
19155 hdfs      20   0  969m 259m  19m S  1.7  1.6   8:18.92 java
15695 mapred    20   0  927m 240m  19m S  1.3  1.5   4:51.68 java
16798 oozie     20   0 2687m 379m  22m S  1.3  2.4  22:13.72 java
15793 yarn      20   0  976m 247m  20m S  1.0  1.6   5:31.36 java
19297 hdfs      20   0  898m 213m  19m S  1.0  1.3   3:23.79 java
22860 cloudera  20   0 4885m 1.6g  22m S  1.0 10.0  31:58.05 java
  925 root      20   0 20644 1500 1100 R  0.7  0.0   0:00.07 top
19217 hdfs      20   0  952m 225m  19m S  0.7  1.4   2:49.87 java
 8474 root       0 -20 39240 6496 2492 S  0.3  0.0   7:57.41 scopeux
 8735 root      20   0 1775m  12m 6464 S  0.3  0.1   0:42.67 ovcd
 9442 root      20   0  625m  14m 9300 S  0.3  0.1   0:22.67 coda
13240 cloudera  20   0 2285m 195m  18m S  0.3  1.2   3:12.26 java
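
Summing the RES column for the Java services listed above gives roughly 5 GB, so much of the "used" figure in top appears to be page cache rather than process memory. A one-liner to confirm the process side (approximate, since RES includes shared pages):

# Total resident memory of all java processes, in MB
#-> ps -C java -o rss= | awk '{sum += $1} END {printf "%.0f MB resident in java processes\n", sum/1024}'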

Can you please suggest what I can do to tune this?