I would like to set the maximum number of open files per process to be greater than 1024000 (for a specific application scalability purpose). I am using RHEL 5.3/Ext4.
%sysctl fs.file-max
fs.file-max = 164766821
I have also added the following to /etc/security/limits.conf:
soft nofile 4096000
hard nofile 4096000
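As an aside, entries in /etc/security/limits.conf take four fields: domain, type, item, value. The lines above omit the domain field, so pam_limits would likely ignore them; a sketch of the intended entries, with `*` (all users) assumed as the domain:

```
# /etc/security/limits.conf -- the leading "*" (all users) is an assumption
*  soft  nofile  4096000
*  hard  nofile  4096000
```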
However, I am stuck with
ulimit -n <value>
The maximum value accepted is 1024000.
How can I increase the maximum?
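For context, a minimal shell sketch for inspecting the limits involved. Note that /proc/sys/fs/nr_open (the runtime-tunable per-process ceiling) only exists on kernels 2.6.25 and later, so on RHEL 5.3's 2.6.18 kernel it will be absent:

```shell
# Per-process hard limit for the current shell
ulimit -Hn
# System-wide maximum number of open file descriptions
cat /proc/sys/fs/file-max
# Per-process ceiling; only present on kernels >= 2.6.25
cat /proc/sys/fs/nr_open 2>/dev/null || echo "fs.nr_open not available on this kernel"
```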
I think the OP is aware of that. However, what kind of application will use more than a million file descriptors? If those are the requirements of software that will run on a standalone system, you had better think of clusters, unless the application's overhead/footprint is very small.
Actually, I tried ulimit -n unlimited before and got the same error as with
ulimit -n <value> when the value is greater than 1024000.
The application is indeed running in a cluster setting. In the extreme scaling situation the application will need to open a huge number of files concurrently. Will it have scaling issues? That is part of my intention: to find out.
I have a feeling it might need a kernel recompilation. Any hint as to which conf file or .h file needs to be changed?
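If I recall correctly, on 2.6.18-era kernels the per-process ceiling comes from the compile-time constant NR_OPEN in include/linux/fs.h, so raising it really would mean rebuilding the kernel. From 2.6.25 onward the ceiling became the runtime-tunable fs.nr_open sysctl instead, which could be raised with something like the following (the value shown is only an example):

```
# /etc/sysctl.conf -- only works on kernels >= 2.6.25; value is an example
fs.nr_open = 4194304
```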
Could you explain the problem a little more? Maybe some sort of client/server system would work better than a cluster: more, smaller file tables instead of one gigantic global file table. Or perhaps a method could be used in which fewer files hold more data; are they of fixed size? What kind of data? Altering the kernel may have unexpected consequences; the limit might be that "low" for a reason.
In AIX you can set the hard limit to -1 for an unlimited ulimit in /etc/security/limits. I think in Linux the file is /etc/security/limits.conf. Each user has their own hard limits. There should be examples in the file.
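A sketch of such a per-user entry on Linux (the username appuser is hypothetical); note that for nofile the value still cannot exceed the kernel's per-process ceiling, so -1/unlimited may be rejected at login rather than accepted:

```
# /etc/security/limits.conf -- "appuser" is a hypothetical username
appuser  hard  nofile  1024000
appuser  soft  nofile  1024000
```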