Swap space calculation

Hi everyone.

I have a client with a UnixWare 7.1.3 system:
4GB RAM
2 x Xeon CPUs
4 x 147GB SCSI drives in a RAID 5 config
1 x 147GB SCSI spare drive

It is running a database application on top of UnixWare.

What formula do you use to determine how big the swap space is supposed to be?

The "Administration and Installation" guide does not list this information it only says "Size of System = 256MB Size of swap space = 200MB"

On Debian Linux I use the following:
swap file size = 2 x physical RAM
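Applied to this machine's 4GB of RAM, that rule works out like this (just the arithmetic, done with bc):

echo "4096 * 2" | bc    # 8192MB, i.e. an 8GB swap file by the Debian rule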

For the client's UnixWare system, the following information was retrieved:

# swap -l
path       dev     swaplo  blocks   free
/dev/swap  7679,2  0       4192256  4192256
#

swap blocks: 4192256 x 512 = 2,146,435,072 bytes ( ~2.1GB )
free blocks: 4192256 x 512 = 2,146,435,072 bytes ( ~2.1GB )

# swap -s
total: 0 allocated + 1977784 reserved = 1977784 blocks used, 10311584 blocks available
#

used: 1977784 x 512 = 1,012,625,408 bytes ( ~1.0GB )
available: 10311584 x 512 = 5,279,531,008 bytes ( ~5.3GB )
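For anyone re-checking the arithmetic, the same 512-byte-block conversions can be done with bc (used here instead of shell arithmetic, since older shells can overflow at 32 bits):

echo "4192256 * 512" | bc     # 2146435072 bytes, ~2.1GB (the swap device, from swap -l)
echo "1977784 * 512" | bc     # 1012625408 bytes, ~1.0GB reserved/used
echo "10311584 * 512" | bc    # 5279531008 bytes, ~5.3GB available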

So the swap device itself is 2.1GB; the 5.3GB that swap -s reports as available is total virtual memory, i.e. the swap device plus pageable physical RAM.

What I'm trying to establish here is this:

1) Did the person who installed the system size the swap space correctly?

2) What is the correct formula to follow for calculating swap space on a new system?

Thanks in advance
:slight_smile:

There is no fixed rule for a swap/RAM ratio, and you generally seem to have a good handle on it. 2x to 3x the size of RAM is a reasonable starting point; with your 4GB of RAM that would be 8GB to 12GB, so the 5+ GB you show is not far out of that range. Your calculation also looks alright to me, though this is just my opinion. Do you see very high swap utilization right now? Is there anything to indicate a big performance problem?

A lot of documentation simply gives an estimate of what a single application requires, assuming no load on the system other than that application's users.

What I see:
If this is a commercial application with more than 10 concurrent users, your system could be short on disk for a real database application. Example: we have a CIS with 0.5 terabytes of data and 450 users; it is installed across 20 filesystems, each on physically distinct disks.

The point is that a lot of load balancing was needed early on to get things working well: keeping swap separate from tablespaces, putting active tablespaces on their own filesystem/disk, and so on. You cannot do much of that with two physical disks.

Completely agree with jim. There is no magic formula for sizing swap; it depends on how many applications and processes are running on your system and what their performance requirements are. In fact, if you have enough memory installed you may have no need for swap at all, which is pretty subjective in itself. Adding more swap means you can fit more application pages into virtual memory, though at the expense of reduced performance once the system actually has to page.

The recommendation of 200MB in your case is for the database application you are installing on the machine, and since you have about 5.3GB of virtual memory available system-wide, that should more than suffice. Starting off, I usually set...

Swap Size = Physical Memory

Later on, if you add more memory to the machine or load more applications onto it, you can extend swap as needed, based on best practice or on the metrics from whatever system monitor is installed on the box.
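Extending swap later generally follows the SVR4 pattern below. This is only a sketch: the slice name c0b0t2d0s1 is hypothetical (check your own layout first), and some SVR4 releases also want swaplow/swaplen arguments to swap -a.

# activate an extra slice as additional swap (device name is hypothetical)
swap -a /dev/dsk/c0b0t2d0s1
swap -l    # confirm the new area is listed alongside /dev/swap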

Thanks for the replies, guys.....:slight_smile:

Below is how my system is partitioned.

# df -v
Mount Dir    Filesystem           blocks      used       avail      %used
/            /dev/root            6291456     2804708    3486748    45%
/stand       /dev/stand           122880      10688      112192     9%
/proc        /proc                0           0          0          0%
/dev/fd      /dev/fd              0           0          0          0%
/dev/_tcp    /dev/_tcp            0           0          0          0%
/home        /dev/dsk/c0b0t0d0s4  3440640     498720     2941920    15%
/system/pr   /processorfs         0           0          0          0%
/tmp         /tmp                 503808      168        503640     1%
/var/tmp     /var/tmp             503808      168        503640     1%
/raid        /dev/dsk/c0b0t1d0s1  1132646400  329204480  803441920  30%
#

The database is on /raid mount.

The server has about 130 users connecting to it.

Performance is fine, no problems there.
We are having problems with backups: we are not getting the throughput we should. IBM has come out, tested, and replaced hardware, but still no fix.
We are using an Ultrium LTO-3 drive and are getting a 6.4MB/s data transfer rate. :frowning:

Sounds like the system is I/O bound. RAID 5 with four disks gives roughly three disks' worth of usable storage.
But it is all on the same logical volume, and swap is on there as well. Maybe a sysadmin with current skills will see this thread, but I personally would concentrate very hard on the iostat data. Can you move swap off onto the spare SCSI drive?

Have you looked at iostat data, especially during high-load (i.e. high-I/O) times like the backup window?
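For example, something along these lines while a backup is running (a sketch; sar -d is the standard SVR4 per-device report, but option details may differ slightly on UnixWare):

# sample per-device disk activity every 5 seconds for ~10 minutes during the backup
sar -d 5 120 > /tmp/sar.d.backup
# a RAID device pinned near 100 %busy with large avwait while the tape crawls
# points at a disk-side I/O bottleneck rather than the tape drive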

What throughput are you expecting? Besides hardware, a lot depends on how your database is physically laid out under the /raid mount point. Is it striped over the 4 x 147GB SCSI RAID 5 disks? Are all your connections fibre or copper? Is the storage SAN or dedicated? Where is the backup software loaded: on the same storage as the database, or on the internal disk? As you can see, more information is needed before the root cause of the performance problem can be isolated.
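One quick way to split the problem in half is to time the tape path and the disk path separately. LTO-3's native rate is roughly 80MB/s, so at 6.4MB/s the drive is almost certainly being starved of data. A rough sketch (the tape device name ctape1 and the file path are guesses, substitute your own, and match bs= to the block size your backup software uses):

# 1) raw tape write: 4096 x 256KB = 1GB of zeros straight to the drive
time dd if=/dev/zero of=/dev/rmt/ctape1 bs=256k count=4096
# 2) disk read: stream a large existing file off /raid and discard it
time dd if=/raid/path/to/large_file of=/dev/null bs=256k

If (1) runs fast and (2) crawls, the bottleneck is the RAID 5 read path, not the tape subsystem IBM has been swapping parts on.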