Correct me where I missed the point - the system allocates 'unused' memory which it gives up when needed. You still have lots of free pages. This is an M4000:
As far as I know, limiting the ZFS cache can make sense because of the overhead associated with dynamically resizing it.
Oracle/Sun consultants recommended the following settings in /etc/system on our database servers with 64G of physical memory:
* ZFS tuning
set zfs:zfs_immediate_write_sz=8000
set zfs:metaslab_df_free_pct=4
set zfs:zfs_nocacheflush=1
set zfs:zfs_vdev_max_pending=32
* limit the ZFS ARC to 16G
set zfs:zfs_arc_max=17179869184
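For what it's worth, that zfs_arc_max value is exactly 16 GiB expressed in bytes. A quick sanity check of the arithmetic (plain Python; the arc_max_line helper is just illustrative, not part of any Solaris tool):

```python
# zfs_arc_max in /etc/system is given in bytes; 16 GiB = 16 * 2^30.
GIB = 1024 ** 3
zfs_arc_max = 16 * GIB
print(zfs_arc_max)  # 17179869184

# Illustrative helper: render an /etc/system line for a cap given in GiB.
def arc_max_line(gib):
    return f"set zfs:zfs_arc_max={gib * GIB}"

print(arc_max_line(16))  # set zfs:zfs_arc_max=17179869184
```

Handy if you want to cap the ARC at some other size without fat-fingering the byte count.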
Actually no. However, I have created other zones on ZFS pools. These zones run Oracle software that checks for available RAM before starting. If there is not enough RAM available, the software spits out an error message and dies.
I'm just looking for ways to get these zones up and running while managing memory effectively. I'm also new to ZFS and zones, so I'm going through a bit of a learning curve too. :o
I was also looking at "top" before and after installing the zone and noticed a 6G difference in available memory. I knew that the ZFS ARC memory allocation was dynamic but did not think it would release so much RAM at once.