I have recently been experimenting with vm.swappiness on the machine where a home-grown database application written in C runs. The application runs for several hours and writes a couple of gigabytes of data to a file and another couple of gigabytes to a database. I tried several settings, from 1 up to 60 (the default):
sudo sysctl vm.swappiness=1   # apply the new value to the running kernel
sudo sysctl -p                # reload the settings from /etc/sysctl.conf
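For what it is worth, the active value can be confirmed at any time (assuming the standard Linux procfs layout):
sysctl vm.swappiness          # or: cat /proc/sys/vm/swappiness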
I was hoping to find that I had left some performance on the table, but no such thing. On closer inspection of the numbers I found that the variation was due to the amount of available disk space: it matters whether the disk is 90% full as opposed to 60% full. Any effect from swappiness was drowned out by that.
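For reference, disk utilization can be checked with something like the following, where /data is a placeholder for the filesystem the application writes to:
df -h /data                   # shows size, used and available space for the placeholder mount point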
Does anybody have experience with gaining performance from a manual change to swappiness?
The more the disk fills up, the harder it becomes for the filesystem to find free blocks for growing files.
vm.swappiness relates to application (anonymous) memory rather than to the data cache. When RAM for an active process is short, the kernel either 1. swaps out some memory of an inactive process, or 2. drops some of the data cache. A high vm.swappiness makes it prefer 1 over 2.
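As a rough illustration (assuming a standard Linux /proc layout), you can see the two pools the kernel chooses between, swap usage versus page cache, with:
free -h                                             # the Swap line is (roughly) swapped-out application memory
grep -E 'Cached|SwapTotal|SwapFree' /proc/meminfo   # page cache versus swap figures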
Changes to swappiness might yield performance gains only if your system is frequently under memory pressure and forced to swap. Given your situation, focusing on disk utilization and ensuring ample free space on the disk where your application writes data might give you more consistent performance improvements than tweaking swappiness alone.
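A quick way to check whether the machine is actually under memory pressure is to watch swap activity during a run; on a box that is not swapping, the si/so columns stay near zero (standard procps vmstat assumed):
vmstat 5              # watch the si/so (swap-in/swap-out) columns during a run
swapon --show         # which devices provide swap and how much is in use (or: cat /proc/swaps)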
Not to sound overly direct, but I would upgrade to larger disks rather than tweak parameters; such tweaks are noise compared to the gains you will get once there is plenty of free disk space to swap into when required.
Alternatively, you can try adding another disk (or moving swap to a different disk); a minimal sketch of that is shown below.
It does not make a lot of sense, to me at least, to try to optimize swap on a disk that has so little space left for swapping, especially since disk space is relatively cheap (much cheaper than your time).
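If you go the extra-disk route, something along these lines would move swap onto it; /dev/sdb1 and /dev/sda2 are placeholder device names, so double-check them before running mkswap on anything:
sudo mkswap /dev/sdb1                                        # format the new (placeholder) partition as swap
sudo swapon /dev/sdb1                                        # enable it immediately
sudo swapoff /dev/sda2                                       # optionally retire the old swap area (placeholder name)
echo '/dev/sdb1 none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it across reboots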