What is the limitation in AIX?

Hi All,

I have a few questions...

1) What is the maximum number of files that we can save under a single directory in AIX? (Assume we have enough storage/disk space.)

2) And what is the maximum number of subdirectories inside a directory?
I know that every directory is a (special) file, so if I get an answer to my 1st question it should answer the 2nd one too. Correct me if I'm wrong.

Any idea is highly appreciated.

Using JFS2, there is no hard limit as far as I know.

There might be some limitations on the number of inodes your filesystem can allocate, although JFS2 can also perform on-demand inode allocation.
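If inode exhaustion is a worry, you can check current usage directly. A minimal sketch: on AIX the Iused/%Iused columns of `df -g /some/fs` carry this information, while the GNU/Linux analogue shown here is `df -i`:

```shell
# Report inode totals and usage for the filesystem holding ".":
df -i .
```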

From IBM's official documentation:

Theoretically, JFS2 filesystems can support files up to 2 PB in size. In reality, however, there is a pseudo-hard limit (the OS will warn you if you try to exceed it) set to 32 TB per filesystem, with no file larger than 16 TB.

So, if you were given an infinite amount of disk space under JFS2, it would be possible to have an unlimited number of files as long as the sum of their sizes did not exceed 2 PB.

This means you still won't be able to store the whole Internet in your system. :wink:

EDIT: And yes, to the eyes of the OS, a directory is still a file.


1) Lots, but large directories are slow to process, so nobody goes there. Think of a path name for a complex object: instead of 30k of them in one directory, look for natural separations, put slashes in there, and voila, smaller directories.
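The splitting idea in 1) can be sketched concretely. A minimal example assuming POSIX shell tools; the `data/` layout and the file names are made up for illustration:

```shell
#!/bin/sh
# Instead of 30k files flat in one directory, bucket each file by
# the first two characters of its name, e.g.
#   report_ab.txt  ->  data/re/report_ab.txt
mkdir -p data
for f in report_ab.txt invoice_x9.txt report_zz.txt; do
    bucket=$(printf '%s' "$f" | cut -c1-2)  # first two characters
    mkdir -p "data/$bucket"
    touch "data/$bucket/$f"                 # stand-in for creating the real file
done
ls -R data
```

Each subdirectory stays small, so any single directory scan remains cheap.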

2) Limited only by path length. Welcome to recursion. Lots of Java guys go nuts under Windows' 255-character limit. UNIX is usually 1024, but I believe you can compile a more generous number into your kernel.
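Rather than assuming a fixed number like 255 or 1024, you can query the limits for a given filesystem. A small sketch using the standard `getconf` utility:

```shell
# NAME_MAX is the longest single path component (one file/dir name);
# PATH_MAX is the longest full path accepted on this filesystem.
getconf NAME_MAX /
getconf PATH_MAX /
```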

Each directory is an inode, just like a file, but marked for directory handling. Think of it as a big dumb list of entry names and inode numbers, nothing else. Things like pipes and devices are a lot more 'special'.
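That "big dumb list" of names and inode numbers is easy to see for yourself. A small sketch (the `demo` directory is hypothetical), using `ls -ai` to print each entry with its inode number:

```shell
#!/bin/sh
mkdir -p demo
touch demo/a demo/b
# Print the (inode, name) pairs the directory actually stores,
# including the "." and ".." entries:
ls -ai demo
# "demo" seen from outside and "." seen from inside are the same
# inode: one directory, two names pointing at it.
ls -di demo
( cd demo && ls -di . )
```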

Lots of OSes have just directories and flat files. Soft and hard links are not always there. Devices live somewhere outside the file tree, and if you want pipe behavior, you have to program it.

Thanks for your prompt responses, verdepollo & DGPickett.
Appreciate your ideas... :slight_smile:

And I found an IBM link on this:

pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.prftungd%2Fdoc%2Fprftungd%2Fdiffs_jfs_enhanced_jfs.htm

@Verdepollo -

As you said, we don't have a limitation on the number of inodes in JFS2,
so I guess there is no limitation on the number of files in a JFS2 filesystem (directory).
Correct me if I'm wrong.

There is no standard tool for cleaning up a bloated directory, either -- you must move the keepers to a new directory, trash the old one, and rename the new one into its place. Every lookup usually must scan the whole space, and usually not in a structured way. One standard thing to check on slow computers is directories that have grown too big! A directory is not a hash map or even a tree.
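The rebuild described above can be sketched as follows (all names hypothetical; here we pretend only the `*.log` files are the keepers):

```shell
#!/bin/sh
# Set up a stand-in for the bloated directory:
mkdir -p bigdir
touch bigdir/keep1.log bigdir/keep2.log bigdir/junk.tmp

# 1. Move the keepers into a fresh, compact directory.
mkdir -p bigdir.new
mv bigdir/*.log bigdir.new/
# 2. Trash the old directory (anything left behind is discarded!).
rm -rf bigdir
# 3. Rename the compact copy into the old place.
mv bigdir.new bigdir
ls bigdir
```

The new directory never accumulated dead entries, so scans of it start fast again.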

@DGPickett

Thanks for your reply, but... sorry,
I really do not understand your comment.

What DGPickett means is the following:

A directory is quite similar to a file, and the bigger a file gets, the longer it takes the system to read it, which is to be expected. Run a "grep" against a 10 GB file and it will take longer than against a 1 KB file.

Let us consider the case where you issue a command

grep regexp /path/to/some/file

What happens? Before "grep" can start its work, the operating system has to find out which file to open. So it looks in the directory "/path/to/some" and searches there for the inode of "file". A "directory", then, is nothing more than a (quite unsorted) list of file names and inode numbers. The longer this list is, the longer it will take the OS to search it and find the inode it is interested in.

Usually you won't even notice this difference, because the OS uses otherwise unused parts of memory to buffer such information. This is part of the "file system cache": the system won't read the directory information from disk, but use the copy it has already stored in memory. As memory is much faster than disk, this speeds things up considerably. But as the directory gets bigger and bigger, and memory is a limited resource, at some point the list might not fit in memory any more, additionally hurting the speed with which it is searched.

Bottom line: even if there are no theoretical limits, there is a practical limit to directory sizes. This practical limit keeps being pushed outward as hardware gets faster, memory keeps getting bigger, disks get faster, etc., but it still remains.

To split a large directory there is no "standard tool" like there is "split" for files. Just create new directories and use "mv" to move files from one to the other. A command like

mv /path/to/file /other/path

will physically move a file only if the directories "/path/to" and "/other/path" are not part of the same filesystem. If they are, it is simply a matter of removing the directory entry from one list and putting it into the other. This takes the same time regardless of file size, because the file itself is not touched, just the "file metadata" - information about the file instead of the file itself.
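You can watch this metadata-only behaviour directly: within one filesystem, the file's inode number survives an `mv` untouched. A small sketch (the directory names are made up, and `src` and `dst` are assumed to be on the same filesystem):

```shell
#!/bin/sh
mkdir -p src dst
echo hello > src/file
before=$(ls -i src/file | awk '{print $1}')  # inode number before the move
mv src/file dst/file                         # only directory entries change
after=$(ls -i dst/file | awk '{print $1}')   # inode number after the move
echo "inode before: $before, after: $after"  # same number on one filesystem
```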

I hope this clears things up.

bakunin


Yes, searching a directory for a file is all too similar to a grep. If the directory gets huge, evaluating it takes a lot of real time. Deleting files just leaves more empty space to be examined over and over. In the old days of SVR3, the cost was so bad the system would spit out "Huge Directory" warnings!

Maybe someone will come up with a FS where the directory is a tree or hash container: one that scales well if you care to go 'flat', can be automatically trimmed into a balanced tree, and does not run slowly because of its historical size.


I would have to run some tests to verify, certainly with extremely large directories, but I seem to recall that JFS2 already "compresses" the directory when appropriate.


@Bakunin
I got you... Thanks much for your explanation! Thank you all for your ideas :slight_smile:

Well, that's the Internet age: imagine something, and if you go looking, it already exists: JFS (file system) - Wikipedia, the free encyclopedia

Anyone know of other filesystems, outside the JFS family, that manage and structure directories this way?


I have not used JFS for so long that I was afraid to mention the "B-tree" sorting of directory entries - actually, I had forgotten it entirely! Thanks for the reminder!