File Descriptors

Hello all,

A few questions on file descriptors ...

Scenario: a Sun Ultra 30 with SunOS 5.5.1, and an E250 with Solaris 2.6.

On one of my servers, the file descriptor soft and hard limits are 64 and 1024 respectively for the root user.

Does the soft limit (64) represent the maximum number of file descriptors that can be opened by a SINGLE PROCESS of the corresponding user?

If so, how do I identify the maximum resource limit of a user?

I had a look at

/usr/proc/bin/pfiles <pid> and

/proc/<pid>/fd ...

I also checked these on another server, which is running Solaris 2.6.

I selected one oracle application pid, and it showed
Current rlimit : 1024 file descriptors

But when I logged in as the oracle user and ran limit, it showed 64 and 1024 as the soft and hard limits...

This is confusing me ...

I had another instance of a file descriptor mismatch, this time between sysdef and limit: sysdef shows ffffffff:fffffffd, but limit shows 64:1024!?

Could you please throw some light on these....

Is there any way to identify the number of active (open) file descriptors for a user, and the limits?
My aim is to get early notification of high file descriptor usage and prevent downtime...

(One of the servers is SunOS 5.5.1, so I can't run lsof.)

Thanks in Advance...

File descriptors use a trivial amount of memory and I don't understand what it is that you're worried about. Each time you remove a file descriptor from your system you get back the space required to store two shorts and one pointer. Are you really that tight on swap space?

Your average fd will also require a file table entry, but even that isn't a real big deal. If I open() a file and then dup() it 100 times, I will have 101 fd's all pointing at the same file table entry. That would be a little crazy, but lots of processes have stdout and stderr pointing to the same file table entry, so assuming a one-to-one correspondence between fd's and open files is wrong.
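To make that concrete, here's a little C sketch (/tmp/fdtest is just a scratch path I made up for the example):

    /* Two fds, one file table entry: the file offset is shared. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/fdtest", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        int copy = dup(fd);      /* new fd, same file table entry */

        write(fd, "abc", 3);     /* the shared offset is now 3 */
        write(copy, "def", 3);   /* continues at offset 3 */
        printf("fd=%d copy=%d\n", fd, copy);
        close(copy);
        close(fd);
        return 0;
    }

Afterwards the file holds "abcdef": the two writes went through different descriptors but advanced the same offset.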

Any process can change its soft limit to any value that does not exceed its hard limit. Any process may lower its hard limit. A root process can raise its hard limit.
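From inside a program the interface for this is getrlimit()/setrlimit() with RLIMIT_NOFILE; a minimal sketch:

    /* Print the per-process fd limits, then raise the soft limit.
       Any value up to the hard limit is allowed. */
    #include <sys/resource.h>
    #include <stdio.h>

    int main(void)
    {
        struct rlimit rl;

        getrlimit(RLIMIT_NOFILE, &rl);
        printf("soft=%lu hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

        rl.rlim_cur = rl.rlim_max;   /* raise soft limit up to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");
        return 0;
    }

That is very likely what you saw with the oracle pid: your shell starts out at 64/1024, but the Oracle process raises its own soft limit at startup, so pfiles reports 1024 for it.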

We have some 5.5.1 systems and we use lsof on them all the time. But pfiles is included with 5.5.1 and might be a better choice for you anyway.

Hi,

Thanks for the information. I am sorry, my question was not clear enough.

I was not worried about the memory usage.

In fact, I presumed that there is a user-wide limitation on the number of file descriptors.
So if I can get the data related to file descriptors, I can get an early warning on "too many open files" and set the limits accordingly.

Your message makes it clear that the limitation is per process, and that a process can raise its soft limit as long as it does not exceed the hard limit.

But what about user-wide settings?

Thanks...

First, there are no user-wide settings on fds. Each process can use up to the hard limit of fds.

Too many open files can be a problem with some versions of Unix, since there is only one file table for everybody. If it fills on HP-UX, open() calls will fail. A command like "sar -v 1 4" can be used to see the table size. But Sun dynamically expands the file table: sar may make you think it is full, but it will expand as needed.
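For the early-warning side, you can count a process's open fds straight from the /proc/<pid>/fd directory you already found (that layout is there on your 2.6 box; 2.5.1 still has the older flat /proc, so use pfiles there). A rough sketch you could run from cron, with the name countfds and the alert threshold left up to you:

    /* Count the open fds of one process by listing /proc/<pid>/fd,
       then compare the count against the soft limit for an early warning. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char path[64];
        DIR *dir;
        struct dirent *ent;
        int n = 0;

        if (argc != 2 || strlen(argv[1]) > 32) {
            fprintf(stderr, "usage: countfds <pid>\n");
            return 1;
        }
        sprintf(path, "/proc/%s/fd", argv[1]);  /* length checked above;
                                                   old libcs lack snprintf */
        if ((dir = opendir(path)) == NULL) {
            perror(path);
            return 1;
        }
        while ((ent = readdir(dir)) != NULL)
            if (ent->d_name[0] != '.')          /* skip "." and ".." */
                n++;
        closedir(dir);
        printf("pid %s: %d open fds\n", argv[1], n);
        return 0;
    }

Loop that over the pids owned by the user, compare each count against that process's soft limit (pfiles prints it), and you have your early notification.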