Hi, I have an IBM RS/6000 machine running 32-bit AIX 4.3.3. We have a directory that is NFS-mounted to this machine. Whenever I run the "ls" command in this directory, I get the following error: 0653-340 There is not enough memory available now
There was originally 512 MB of memory in this system, and I doubled it to 1024 MB, but I still get the same error.
I Googled this error message and saw that another user got this same error message when running the "ls" command in an NFS mounted directory on an AIX machine. The user was only told that this was not a memory problem.
I am hoping someone can help me with the following questions:
1) What is causing this issue?
2) How can I resolve this issue?
One thing to note is that I want to solve the underlying issue, not a workaround of using find instead of ls. The "ls" failure is just a symptom of the problem.
What is so special about that NFS mount then?
Because although I can remember having heard of something of the sort, I am quite sure I could not reproduce the case. What happens when you type ls on that FS? Does it freeze?
I doubt it has to do with RAM... I have systems running with 512 MB and they run happily...
If I type "ls" in any directory other than the NFS mounted directory, it will give me the results immediately. The NFS mounted directory has many files and subdirectories in it. Most of the other directories on this machine have very little in them. In fact, if I go into a subdirectory of the NFS mounted directory, I can run "ls" and it will quickly give me the results.
Some more info that further complicates this issue:
This issue was first brought to me by a programmer because every time he ran the "make" command in this mounted directory, and even in its subdirectories, the cursor would just hang. Unlike with the "ls" command, the problem did not go away when he tried to run make from a subdirectory of the NFS-mounted directory. He would eventually have to use "ctrl + C" to stop the "make" command.
I am wondering if this issue is being caused by the number of files and folders in the mounted directory.
I'm tempted to say yes... and no...
An NFS-mounted FS is, from your system's point of view, a remote FS your system is not responsible for, so when it comes to writes, caching, etc., your system relies on the remote server...
Let's imagine your AIX being paranoid (like they used to be in 4.3): even if you found a way of letting your system cache the way it would for a local FS, since it's over the network your system would not be aware of what is going on on the remote side. So can your system guarantee data integrity with its cache?
No...
So, to improve the performance of remote file access, there is a read-ahead and write-ahead mechanism provided by the biods...
That said, you can try troubleshooting with
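The specific command that belonged at the end of that sentence didn't survive, but one generic client-side starting point is to check how many biods are actually running. A minimal sketch (on a non-AIX box this simply reports 0):

```shell
# Count running biod daemons (the AIX NFS read-ahead/write-ahead helpers).
# The "[b]iod" pattern keeps grep from matching its own process entry;
# "|| true" keeps a zero match count from aborting a "set -e" shell.
biods=$(ps -ef | grep -c '[b]iod' || true)
echo "biods running: $biods"
```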
I would also go to the NFS mount point and use find to check whether there are any ls or make files there that are executables... (I remember a spoof with ls...)
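That check can be sketched as a small helper (the helper name and the example path are made up for illustration; `-perm -100` is the portable octal way to match owner-executable files):

```shell
# find_planted DIR  -- list executables named "ls" or "make" under DIR,
# which could shadow the real commands if DIR is early in PATH.
find_planted() {
    find "$1" \( -name ls -o -name make \) -type f -perm -100 -print
}

# e.g. on the mount point (placeholder path):
#   find_planted /mnt/directoryA
```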
How many files is "many"? Because yes, even on a local FS a large number of files will give ls a headache..
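To put a concrete number on "many" without asking ls to sort anything, a POSIX-portable one-level count looks like this (the helper name is made up; the `/.` start point plus `-prune` stands in for GNU find's `-maxdepth 1`, which old AIX find lacks):

```shell
# count_entries DIR  -- count entries directly in DIR without
# descending into subdirectories.
count_entries() {
    find "$1/." ! -name . -prune -print | wc -l
}

# e.g.  count_entries /mnt/directoryA   (placeholder path)
```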
Am I correct that the second instruction, "start_process", is not a command to run, but is actually telling you to work on whatever it is you need the data segment size increased for?
2) I ran a test and, instead of NFS-mounting all of /directoryA to the AIX 4.3.3 machine, I just NFS-mounted a subdirectory, /directoryA/subdirectory1. I was able to go into the mounted subdirectory1 and could successfully run "ls" and "make". This subdirectory only had a few files in it. This again makes me think that the number of files is the cause of the problem. Are there any other values (environment variables, etc...) that I might be able to change to allow "ls" and, more importantly, "make" to run successfully in a directory with many files in it?
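On the environment side, two knobs that commonly matter for 32-bit AIX processes are the shell's data segment ulimit and the loader's large-address-space model. A sketch (the MAXDATA value and the make invocation are illustrative assumptions, not a confirmed fix for this error):

```shell
# Show the current per-process data segment limit (in KB, or "unlimited").
ulimit -d

# Raising it for this shell, if the hard limit permits, would be:
#   ulimit -d unlimited
#
# On 32-bit AIX, a process can also request extra 256 MB data segments
# from the loader; 0x80000000 asks for 8 segments (illustrative value):
#   LDR_CNTRL=MAXDATA=0x80000000 make
```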
I've hit this with AIX 5.1 (32-bit) as a client of RHEL 6.3 (64-bit)
It appears that you can play around as much as you like on the 32-bit side, but if the 64-bit server writes a new file when you have more than about 40 files in a directory, the 32-bit side fails from then on until the number of files is reduced (by the 64-bit side, obviously), after which the client works happily again.
It was suggested to us that the on-disk directory 'file' changes after a certain number of entries when edited on the 64-bit side, that the 32-bit side doesn't understand the change, and that it goes off in a mad loop until it eventually exhausts memory (set by ulimit) and fails, returning control to the command line.
Our only way around this, while we still have the need, is to split the files into subdirectories such that any one of them has fewer than the limit of about 40 entries (count any item: files, links, directories, pipes, etc.).
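The splitting itself can be sketched as a small script (the 40-entry cap is the rough limit observed above; the helper name and bucket naming are made up for illustration):

```shell
# split_into_buckets DIR MAX  -- move regular files in DIR into
# subdirectories bucket0, bucket1, ... holding at most MAX files each.
split_into_buckets() {
    dir=$1
    max=$2
    i=0
    bucket=0
    mkdir -p "$dir/bucket$bucket"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue      # skip subdirectories and non-files
        if [ "$i" -ge "$max" ]; then
            i=0
            bucket=$((bucket + 1))
            mkdir -p "$dir/bucket$bucket"
        fi
        mv "$f" "$dir/bucket$bucket/"
        i=$((i + 1))
    done
}

# e.g.  split_into_buckets /mnt/directoryA 40   (placeholder path)
```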
It's not great, but that's what we had to do. An alternative was to mount the NFS share the other way round, but that relies on disk space being available on the other side, and on changing any other clients to pick up the replacement 32-bit server. You could have a 32-bit server just to serve NFS to all the others, but those are largely unsupported now.
Sorry it's not great news,
Robin
Liverpool/Blackburn
UK