rcapd is increasing load average. Why?

Hi,
I have 7 non-global zones running on an x86 Solaris server. For some reason the load average on it is very high, and upon further checking I found that rcapd is consistently consuming a lot of CPU. I restarted rcapd through svcadm, but still no luck. Can I just disable rcapd and leave it like that? I won't be adjusting memory/CPU caps online any time soon.
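In case it matters, here's roughly how I've been checking it (just a sketch, assuming the stock Solaris rcap tooling; the interval/count of 5 is arbitrary):

# svcs -p rcap
# rcapstat -z 5 5

svcs -p lists the rcapd process under the service, and rcapstat -z reports per-zone cap statistics, which should show which zone caps rcapd is busy enforcing.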

And even prstat itself shouldn't be taking 4.9% of the CPU.

# uptime
 10:15pm  up 113 day(s), 17:30,  2 users,  load average: 29.98, 24.44, 19.24
#
# prstat -a
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 18898 daemon     32M   27M run     59    0   0:01:17  49% rcapd/1
     5 root        0K    0K sleep   99  -20   3:55:01 5.0% zpool-rpool/136
 18964 root     4284K 3696K cpu0    59    0   0:00:00 4.9% prstat/1
 18409 106      1274M  449M run      1    0   0:00:47 1.5% oracle/11
  9503 106      1246M  291M run     59    0   0:00:33 0.6% oracle/11
  9554 106      1242M  335M sleep   16    0   0:00:46 0.3% oracle/1
 18020 40004     234M  195M run     59    0   0:00:07 0.3% timestensubd/15
  9949 40004     234M  217M run     59    0   0:01:56 0.3% timestensubd/15
 18023 40004     234M  195M run     45    0   0:00:07 0.3% timestensubd/15
 18010 40004     194M  156M run     59    0   0:00:08 0.3% timestensubd/15
  9950 40004     450M  433M run     59    0   0:01:41 0.3% timestensubd/15
 18011 40004     234M  195M run     59    0   0:00:07 0.3% timestensubd/15
 18028 40004     194M  155M run     59    0   0:00:07 0.3% timestensubd/15
 18013 40004     234M  195M run     59    0   0:00:07 0.3% timestensubd/15
 18024 40004     194M  155M run     59    0   0:00:07 0.3% timestensubd/15
 18012 40004     194M  155M run     59    0   0:00:07 0.3% timestensubd/15
 18963 106      1237M   43M sleep   60    0   0:00:00 0.3% oracle/1
 18951 106      1237M   95M run     53    0   0:00:00 0.2% oracle/1
 18953 106      1237M   89M run     55    0   0:00:00 0.2% oracle/1
  9513 106      1243M  323M sleep   49    0   0:00:48 0.2% oracle/11
  9515 106      1238M  165M sleep   16    0   0:01:08 0.2% oracle/1
  9499 106      1239M   57M run     55    0   0:01:40 0.2% oracle/1
  9507 106      1240M  177M run     59    0   0:00:51 0.2% oracle/11
  9631 106      1239M   68M run     59    0   0:01:09 0.1% oracle/1
  9485 106      1239M   73M sleep   60    0   0:00:38 0.1% oracle/1
  9505 106      1250M   61M sleep   58    0   0:00:24 0.1% oracle/11
 18387 106      1237M   50M sleep   59    0   0:00:07 0.1% oracle/1
 10100 106      2052M  389M sleep   59    0   0:00:01 0.1% oracle/43
 18363 106      2045M   79M sleep   59    0   0:00:00 0.1% oracle/1
 18384 106      1237M   46M sleep   59    0   0:00:02 0.1% oracle/1
  9645 106      1243M  330M sleep   57    0   0:00:30 0.1% oracle/11
  9495 106      1238M  179M sleep   44    0   0:00:27 0.1% oracle/1
  9603 106        63M 3456K sleep   59    0   0:00:06 0.0% tnslsnr/3
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
    45 daemon     74M   54M   0.7%   0:01:18  36%
    84 106      4729M 4261M    53%   0:24:32 5.2%
   180 root      323M  187M   2.3%   6:09:27 3.2%
    19 40004    2256M 2068M    26%   0:04:42 2.8%
     1 xgemadm    48M   26M   0.3%   0:00:11 0.0%
     4 charles   3336K 6880K   0.1%   0:00:00 0.0%
     6 16681      10M 9872K   0.1%   0:00:00 0.0%
     5 binde 3960K   10M   0.1%   0:00:00 0.0%
     3 ggpuc    1408K 3952K   0.0%   0:00:00 0.0%
     2 zhusc     1308K 3860K   0.0%   0:00:00 0.0%
     8 smmsp      12M   13M   0.2%   0:00:06 0.0%

Total: 356 processes, 2052 lwps, load averages: 30.06, 25.34, 19.87
# rcapadm
                                      state: enabled
           memory cap enforcement threshold: 0%
                    process scan rate (sec): 15
                 reconfiguration rate (sec): 60
                          report rate (sec): 5
                    RSS sampling rate (sec): 5

# 
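One thing I notice in that output: the memory cap enforcement threshold is 0%, which as I read the rcapadm man page means caps are enforced all the time, regardless of how much physical memory is actually free. If I do keep the service, I could presumably raise the threshold, something like:

# rcapadm -c 90

so that caps are only enforced once physical memory utilization exceeds 90%. (That's my reading of the docs, not something I've tested on this box.)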

Thanks

This is a long shot from the offline storage in my brain, but I recall having a similar issue about 15 years ago and it was due to a memory DIMM failure. Maybe check for some hardware failures on the system.
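Something like the following would be a quick first pass, assuming the usual Solaris fault management and diagnostic tools are present:

# fmadm faulty
# prtdiag -v

fmadm faulty lists any resources the fault manager has flagged, and prtdiag -v prints the verbose hardware summary, including memory configuration.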

prtdiag shows all good, but I will schedule downtime for the server and run full diagnostics. For now, I've just disabled the rcap service.
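For the record, this is roughly what I ran; either form should work, as far as I understand (rcapadm -D appears to toggle the same SMF service that svcadm manages):

# rcapadm -D

or:

# svcadm disable rcap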