Possible to increase swap size on an existing UFS-based disk slice?

I'd like to increase the swap size on my current server running Solaris 10.
It seems like the system is not using its full 16 GB of physical memory.

# swap -l
swapfile             dev  swaplo blocks   free
/dev/dsk/c0t0d0s1   32,1      16 1058288 1058288
# swap -s
total: 4125120k bytes allocated + 178808k reserved = 4303928k used, 595088k available
# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
       1. c0t1d0 <HITACHI-H101473SCSUN72G-SA23-68.37GB>
          /pci@780/pci@0/pci@9/scsi@0/sd@1,0
       2. c2t0d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
       3. c2t1d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@1,0
       4. c2t2d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
       5. c2t3d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
       6. c2t4d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@4,0
       7. c2t5d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@5,0
       8. c2t6d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@6,0
       9. c2t7d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@7,0
      10. c2t8d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@8,0
      11. c2t9d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@9,0
      12. c2t10d0 <SEAGATE-ST314655SSUN146G-0B92-136.73GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@a,0
Specify disk (enter its number): 0
selecting c0t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c0t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c0t0d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c0t0d0s7 is currently mounted on /export/home. Please see umount(1M).

partition> print
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm     104 -  1224        5.44GB    (1121/0/0)   11407296
  1       swap    wu       0 -   103      516.75MB    (104/0/0)     1058304
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7       home    wm    1225 - 14086       62.41GB    (12862/0/0) 130883712

Looks like only 516.75 MB is allocated for swap?

# zpool status
  pool: adpool03
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        adpool03    ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors

  pool: rz2pool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        rz2pool      ONLINE       0     0     0
          raidz2     ONLINE       0     0     0
            c2t0d0   ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
            c2t3d0   ONLINE       0     0     0
            c2t4d0   ONLINE       0     0     0
            c2t5d0   ONLINE       0     0     0
            c2t6d0   ONLINE       0     0     0
            c2t7d0   ONLINE       0     0     0
            c2t8d0   ONLINE       0     0     0
            c2t9d0   ONLINE       0     0     0
            c2t10d0  ONLINE       0     0     0

errors: No known data errors

The strange part is... c2t0d0 is also part of a ZFS pool??

Can someone help me decode this? I inherited this system.
Is the only way to increase swap to create another swap file (say in /home)?

Any help much appreciated!

Are you running a 32-bit kernel? (Check with isainfo -kv.)
Can you post the output of echo ::memstat | mdb -k ?

What do you mean by "also"? c2t0d0 looks like it is only used by the ZFS pool.

With your layout, yes (but not in /home, which is designed for a different purpose and is usually managed by the automounter). You also need a 64-bit kernel.

The system appears to have only 4 GB of RAM; check with
prtconf and
prtdiag.
Adding 1 or 2 GB of swap makes sense here.
The disk c0t0d0 appears to be fully used by UFS filesystems, so carving another swap slice out of it would take some expert effort. Adding a swap file is easier; a sketch follows below.
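
A minimal sketch of what adding a swap file on UFS could look like (the path /export/home/swapfile1 and the 2 GB size are only examples, not taken from your system; pick a filesystem with enough free space):

# mkfile 2g /export/home/swapfile1
# swap -a /export/home/swapfile1
# swap -l

The final swap -l just verifies that the new swap area shows up.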

Thanks guys for replying!!!! Much appreciated!

(output is the same in both the global zone and the non-global zones)
# isainfo -kv
64-bit sparcv9 kernel modules
# prtconf | grep -i memory
Memory size: 16376 Megabytes

And jlliagre, you're right about c2t0d0 only being in ZFS... I read that wrong.
Also... memstat is not installed on the box...

MadeInGermany, how do you figure that it only has 4 GB of RAM?

Additional info: one of the containers is running Oracle 8i... and it always shows that it's using about 2200 MB of RAM.
I know the system has 16 GB and I would like Oracle to use more of the RAM.

Sorry, my mistake. Please post the output of echo ::memstat | mdb -k (I forgot to write the "echo" command).

How do you measure this usage?
Are you running the swap command inside a non-global zone?
Is there some capping in place?

Nothing happens when I run

echo ::memstat | mdb -k

Does it take a long time to run?

I ran prstat -Z in the global zone:

ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
     2      122 2427M 2432M    15%  20:28:31 9.4% zone3
     4       64  309M  306M   1.9%   6:29:03 0.2% zone3c
     0       48  187M  185M   1.1%   7:01:24 0.1% global
     3       62  406M  409M   2.5%   7:18:24 0.0% zone3b
     1       42 1414M 1417M   8.7%   6:09:35 0.0% zone3a

Looks like it's using 2432M of memory now.....

No capping, I believe

# zonecfg -z zone3 info
zonename: zone3
zonepath: /zones2/zone3
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
net:
        address: 192.168.125.24
        physical: e1000g0
dataset:
        name: rz2pool/database

It should take seconds if you have Solaris 10 update 8 or newer. It might take minutes or even an hour if you have an older Solaris release (see https://blogs.oracle.com/sistare/entry/wicked_fast_memstat ).
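
If you are not sure which update the box is on, it can be checked with the standard release file (nothing system-specific assumed here):

# cat /etc/release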

So according to prstat, you have 16 GB of memory. Do you feel there is still a problem to solve?

I'm running mdb now... guess I have an older update... it's taking a while.

How do you figure that I have 16 GB of memory from the prstat output?

I'm just wondering: with the low swap size (516.xx MB), will the zones be able to use the whole installed 16 GB of memory?
And if I were to increase swap by using a swap file, I only need to touch the global zone, right?

Thanks again!

zone3 is using 2432M, which is reported as 15% of the memory.

2432 / 15 * 100 = 16213 MB, i.e. roughly 16 GB.

The risk is high that part of your RAM will be unusable because of virtual memory reservations.
Your swap area is likely too small given the RAM size.

Yes, the swap area is to be handled by the global zone.
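
If you go the swap file route, to make it survive a reboot you could also add a line along these lines to /etc/vfstab in the global zone (the path is the hypothetical one from the earlier sketch):

/export/home/swapfile1   -   -   swap   -   no   -

Without such an entry, the swap file would have to be re-added with swap -a after every reboot.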

mdb result came back....

# echo ::memstat | mdb -k

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1221615              9543   58%
Anon                       536894              4194   26%
Exec and libs               25471               198    1%
Page cache                   1259                 9    0%
Free (cachelist)             6015                46    0%
Free (freelist)            300019              2343   14%

Total                     2091273             16338
Physical                  2045044             15976

Not sure how to read it, though...

And if I were to add a swap file to my system... should I shut down my containers in order to do so?

Also, although my global zone is running on UFS... it has a large ZFS pool that all the containers run on... would you recommend that I create the swap file on the ZFS or the UFS filesystem?

Thank you!

You have your 16 GB visible, and around half of it is used as a ZFS cache, which is fine. You still have RAM available, so there is nothing to worry about here.

No, you can add and remove swap on a live system.

You might have no choice but to use UFS.

I suspect swap volumes are not supported on a RAID-Z pool; at least I have never seen such a configuration. Same issue with swap files, which are likely still unsupported on ZFS.

Thanks jlliagre!

How do you figure that half of it is ZFS cache?
Is it the ZFS ARC cache that I kept reading about?

I also ran kstat and got this

# kstat zfs:0:arcstats:size | grep size | awk '{printf "%2dMB\n",  $2/1024/1024+0.5}'
6587MB

Looks like it's using 6.5 GB for the ZFS cache...

Is this normal... or should I be using
zfs_arc_max and zfs_arc_min to limit how much cache ZFS can use?

And given that Oracle 8i is running off the ZFS volume... which is better: more available memory
or more ZFS cache? (Sorry... I know this is a bit OT.)

Because a kernel alone doesn't use 9.5 GB, so the ZFS ARC is likely using a large part of it.
Recent Solaris releases have a ::memstat that separates the ZFS-related memory from the rest of the kernel usage.

Indeed.

There is no point in limiting the cache size (outside of very specific cases).

Unused memory is wasted memory: the more memory is used for the cache, the better the performance.
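
Just for completeness: if capping were ever needed (for example to guarantee memory to Oracle), the usual way on Solaris 10 is a zfs_arc_max entry in /etc/system followed by a reboot. The 4 GB value below is purely illustrative:

* Cap the ZFS ARC at 4 GB (value in bytes; illustrative only)
set zfs:zfs_arc_max = 4294967296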

Well... the whole point of my investigation is to find out whether I can coax more
performance out of my current setup. I'd like to have the Oracle zone (zone3)
use more memory to improve performance...

Also, ZFS does tend to slow down dramatically after the system has been up
for a while... to the point that logging in to a shell takes some 10
seconds, and any access to the disk takes longer to respond, etc.

Is there any way to refresh the ZFS cache?

Thank you jlliagre!

Then you should post the output of echo ::memstat | mdb -k , vmstat 5 5 , zpool iostat -v 5 5 and swap -s when this problem occurs.

Not sure what you mean, as the ARC cache is always "fresh".
You can clear it by exporting/importing the pool; a sketch follows below.
Is your ARC size capped?
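
For reference, clearing the ARC by exporting and re-importing the pool would look roughly like this, using the rz2pool name from your earlier output. It is disruptive: the zones living on that pool have to be halted first.

# zoneadm -z zone3 halt      (repeat for the other zones on the pool)
# zpool export rz2pool
# zpool import rz2pool
# zoneadm -z zone3 boot      (repeat for the other zones)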

Well... it's doing it again....

vi takes around 5 - 10 seconds to save a small file....
login takes around 10 - 15 seconds to get a shell prompt....

This only happens in the container zones, not in the global zone.
The container zones are mounted on ZFS.

# zpool iostat -v 5 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
adpool02     329G  79.1G     50    202   290K  1.18M
  raidz1     329G  79.1G     50    202   290K  1.18M
    c0t1d0      -      -     31    120  1.90M   615K
    c0t2d0      -      -     31    121  1.89M   615K
    c0t3d0      -      -     31    121  1.90M   615K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
adpool02     329G  79.1G     39      0   376K      0
  raidz1     329G  79.1G     39      0   376K      0
    c0t1d0      -      -     23      0  1.42M      0
    c0t2d0      -      -     26      0  1.58M      0
    c0t3d0      -      -     27      0  1.63M      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
adpool02     329G  79.1G     33      0   227K  25.0K
  raidz1     329G  79.1G     33      0   227K  25.0K
    c0t1d0      -      -     21      0  1.25M  12.5K
    c0t2d0      -      -     20      0  1.23M  12.5K
    c0t3d0      -      -     22      0  1.33M  12.5K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
adpool02     329G  79.1G     73    197   369K   863K
  raidz1     329G  79.1G     73    197   369K   863K
    c0t1d0      -      -     37    154  2.26M   439K
    c0t2d0      -      -     33    156  2.08M   440K
    c0t3d0      -      -     36    154  2.26M   440K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
adpool02     329G  79.1G     49    524   348K  2.70M
  raidz1     329G  79.1G     49    524   348K  2.70M
    c0t1d0      -      -     31    383  1.83M  1.37M
    c0t2d0      -      -     30    387  1.83M  1.37M
    c0t3d0      -      -     31    376  1.87M  1.37M
----------  -----  -----  -----  -----  -----  -----
# vmstat 5 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 511520 770472 246 1008 246 1 0 0  0  0 164 0 165 10127 9120 17150 16 6 78
 1 0 0 2298600 489608 59 172 13 0  0  0  0  0 193 0 197 4725 10124 6080 22 5 74
 0 0 0 2308296 495120 61 68  0  0  0  0  0  0 34  0 30 3355 9951 4139 19  5 76
 0 0 0 2323896 504784 189 516 0 0  0  0  0  0 155 0 169 4284 10145 5426 16 5 79
 0 0 0 2274624 470288 486 2342 0 0 0  0  0  0 151 0 158 4427 14669 5866 27 6 67
# swap -s
total: 3213376k bytes allocated + 304064k reserved = 3517440k used, 2244760k available

(this is a different box from the one in my previous posts...)

As a test... I used vi to write "test" to a new file... it takes a long time to save.
(This time it took some 25 seconds.)
I could, however, do "echo test > testfile" and it would write instantly.

I think this indicates a problem with the zone... I just don't know what it is.

It is not recommended to use a swap file with ZFS. The best practice is to create ZFS volumes and use them as swap; a sketch follows below.
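
On a pool where ZFS swap volumes are supported (e.g. a simple or mirrored pool, not the RAID-Z pool in this thread), that would look roughly like this; the pool name, volume name and 2 GB size are placeholders:

# zfs create -V 2g somepool/swapvol
# swap -a /dev/zvol/dsk/somepool/swapvol
# swap -l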

That's an understatement. It is actually not possible at all to use a swap file with ZFS.

Not in the OP's case. Creating a swap volume on the RAID-Z pool wouldn't be possible either. The only option here is to create a swap file on UFS.