Max Open File Limit

Ubuntu users,

I am configuring an Ubuntu 14.04 server as a load injector.

I have appended the hard and soft limits to /etc/security/limits.conf for any user (apart from root):

*     hard    nofile      65536
*     soft    nofile      65536

I keep seeing the figure 65536 in numerous resources, but I am not sure why it is the recommended value.

I am guessing that the system-wide limit should be larger than the per-user limit, so I have set 'fs.file-max = 75000' in /etc/sysctl.conf. Is this correct?

Do I also need to set pam_limits on Debian based distros and where does this fit in?

And what limits does 'ulimit -n' actually retrieve?

Many Thanks

Aidy

The only rule is: the soft limit cannot be higher than the hard limit.
ulimit -n == ulimit -Sn == soft limit
ulimit -Hn == hard limit
ulimit -Ha == all hard limits
These are per-process limits.
I would leave fs.file-max at the default. It is usually much higher than the per-process limits. (The kernel calculates the actual number from the available RAM.)
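To illustrate the soft/hard relationship (a quick sketch — the values printed will differ on your box), a non-root shell can raise its own soft limit up to, but never beyond, the hard limit:

```shell
# Per-process soft and hard nofile limits:
ulimit -Sn                   # soft limit; plain `ulimit -n` prints the same value
ulimit -Hn                   # hard limit
# A non-root process may raise its soft limit up to the hard limit...
ulimit -Sn "$(ulimit -Hn)"
# ...but any attempt to set it above the hard limit fails for non-root users.
```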

Hi MadeInGermany,

Thanks for your reply.

The kernel seems to set the soft and hard limits at 1024 and 4096 respectively.
$ ulimit -Sn
1024
$ ulimit -Hn
4096

But on a machine with 15 GB of RAM, the system-wide max limit is:
$ cat /proc/sys/fs/file-max
1529806
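Out of curiosity I checked that figure against the RAM, assuming the kernel default is roughly one file handle per 10 KB of memory (MemTotal / 10) — the two numbers come out in the same ballpark:

```shell
# Rough sanity check of the fs.file-max default: about MemTotal(kB) / 10.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "estimate from RAM: $(( mem_kb / 10 ))"
echo "actual file-max:   $(cat /proc/sys/fs/file-max)"
```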

With this figure what would you suggest could be a sensible upper hard limit for a process?

Many Thanks

Aidy

The 4096 (default) makes sense in most cases.
A few server applications need more. But a few other applications crash or misbehave if they see very high limits.
Say Oracle needs 32768; then I would set it only for that particular functional user:

* hard nofile 4096
* soft nofile 1024
oracle hard nofile 32768
oracle soft nofile 32768

No need to change the system-wide limit, only the per-user ones.
Since, of course, you are not running your applications as root — are you?
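And to answer your pam_limits question: pam_limits is the PAM module that applies /etc/security/limits.conf at session start, so changes take effect on the next login. You can verify what a process actually got via /proc — a quick check (here against the current shell; substitute another PID to inspect a different process):

```shell
# The "Max open files" row shows the soft and hard nofile limits
# that this process is actually running with:
grep 'open files' /proc/$$/limits
```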

As for the value required, this is application dependent.
What does that 'load injector' do?
What does the programmer/vendor of that load injector say about the kernel parameters required for their software?

Remember, max open files doesn't apply only to files but to file descriptors in general.
This covers a lot of the functionality an application can use (sockets, pipes, files).
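For example, every descriptor a process holds — regular files, sockets, and pipes alike — shows up under /proc, so you can count what a process has open against its nofile limit:

```shell
# Each entry in /proc/<pid>/fd is one open descriptor; for the current
# shell that includes at least stdin, stdout and stderr:
ls /proc/$$/fd | wc -l
```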

MadeInGermany - can you tell me where you have noticed failures with large values, if you can share, of course?

I have seen some Java/Eclipse programs wildly spawning new threads until they hit the nofile limit or the nproc limit (if set). With the default nofile limits of 1024/4096 the programs ran fine with about 300 threads.
Being an administrator I do not bother about what the eclipse programs are for.