0653-340 There is not enough memory available now

Hi, I have an IBM RS/6000 machine running 32-bit AIX 4.3.3. We have a directory that is NFS-mounted on this machine. Whenever I run the "ls" command in this directory, I get the following error: 0653-340 There is not enough memory available now

This system originally had 512 MB of memory; I doubled it to 1024 MB, but I still get the same error.

I Googled this error message and found another user who got the same error when running "ls" in an NFS-mounted directory on an AIX machine. The only answer that user received was that this was not a memory problem.

I am hoping someone can help me with the following questions:
1) What is causing this issue?
2) How can I resolve this issue?
One thing to note: I want to solve the underlying issue, not work around it by using find instead of ls. The "ls" failure is just a symptom of the problem.

Any help would be greatly appreciated. Thanks.

What is so special about that NFS mount, then?
Because although I can remember having heard of something of the sort, I am quite sure I could not reproduce the case. What happens when you type ls on that FS, does it freeze?
I doubt it has to do with RAM... I have systems running with 512 MB that run happily...

Hi VBE,

Thanks for your reply.

If I type "ls" in any directory other than the NFS mounted directory, it will give me the results immediately. The NFS mounted directory has many files and subdirectories in it. Most of the other directories on this machine have very little in them. In fact, if I go into a subdirectory of the NFS mounted directory, I can run "ls" and it will quickly give me the results.

Some more info that further complicates this issue:
This issue was first brought to me by a programmer: every time he ran the "make" command in this mounted directory, or even in its subdirectories, the cursor would just hang. Unlike with the "ls" command, the problem did not go away when he ran make from a subdirectory of the NFS-mounted directory. He would eventually have to press Ctrl+C to stop the "make" command.

I am wondering if this issue is being caused by the number of files and folders in the mounted directory.

I'm tempted to say yes... and no...
An NFS-mounted FS is, from your system's point of view, a remote FS it is not responsible for; so when it comes to writes, caching, etc., your system relies on the remote server...
Let's imagine your AIX being paranoid (like they used to be in 4.3) and that you found a way to let your system cache the way it would for a local FS. Since it's over the network, your system would not be aware of what is going on on the remote side, so can your system guarantee data integrity with its cache?
No...
So, to improve the performance of remote file access, there is a read-ahead and write-ahead mechanism provided by the biods...
That said you can try troubleshooting with

netstat -s
netstat -p <protocol>
netstat -i
netstat -r
nfsstat

for a start to see if all is OK or needs a bit of tuning...
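As a sketch of one thing to look for in that output (the thresholds below are illustrative assumptions, not AIX documentation): `nfsstat -c` reports client RPC `calls` and `retrans` counters, and turning them into a retransmission percentage makes it easy to tell whether the network, rather than memory, is struggling.

```shell
# Hypothetical helper: turn two RPC counters (total calls, retransmissions)
# into a retransmission percentage. The sample numbers are made up.
retrans_pct() {
    # $1 = total calls, $2 = retransmissions
    awk -v calls="$1" -v retrans="$2" \
        'BEGIN { printf "%.2f\n", (calls > 0 ? 100 * retrans / calls : 0) }'
}

retrans_pct 120000 360    # prints 0.30  - well under 1%, usually fine
retrans_pct 120000 12000  # prints 10.00 - worth looking at the network
```

A few percent or more of retransmissions would suggest packet loss or an undersized read/write size on the mount, which no amount of RAM in the client will fix.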

I would also go to the NFS mount point and use find to check whether there aren't any ls or make files there that are executables... (I remember a spoof with ls once...)

How many files is "many"? Because yes, even on a local FS, a very large number of files will give ls a headache...

Q: How much paging space do you have now (was it changed after adding more RAM?), and what is its usage?

 lsps -a

and though I doubt, but never know...

lssrc -s biod

Hi vbe,

I ran both commands.

Below are the results for lsps -a:

 Page Space  Physical Volume  Volume Group    Size  %Used  Active  Auto  Type
 hd6         hdisk0           rootvg        1024MB      1  yes     yes   lv

Below are the results for lssrc -s biod:

 Subsystem  Group  PID   Status
 biod       nfs    8262  active

Thanks.

---------- Post updated at 09:33 AM ---------- Previous update was at 09:05 AM ----------

I also ran the following command to get a count of the number of files in the problematic directory:

find /directoryA/* -print | wc -l

It returned the following error message:

/usr/bin/ksh:  0403-029  There is not enough memory available now.
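Worth noting in passing: that 0403-029 message is prefixed with /usr/bin/ksh, meaning it is the shell itself failing, not find. The shell runs out of memory while expanding the `/directoryA/*` glob into an argument list before find ever starts. A glob-free invocation avoids building that list at all (a sketch; the temp-dir setup below just stands in for the real mount point):

```shell
# Use a placeholder directory so the commands are runnable as-is;
# on the real system, DIR would be the problem mount point.
DIR=$(mktemp -d)
touch "$DIR/a" "$DIR/b"
mkdir "$DIR/sub"

# Count every entry in the tree: find walks the tree itself,
# so the shell never has to expand a huge glob.
find "$DIR" -print | wc -l

# Count only the top-level entries (closer to what a plain `ls` shows),
# using the portable "prune everything but the start point" idiom.
find "$DIR/." ! -name . -prune -print | wc -l
```

This only sidesteps the shell's memory limit for counting, of course; it does not explain why ls itself hits 0653-340 on the mount.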

Something else comes to my mind:
http://www-01.ibm.com/support/knowledgecenter/SSPREK_6.1.1/com.ibm.itame.doc_6.1.1/am611_perftune152.htm#wq169
Have a look! (Increasing size of Data_segment...)

Hi vbe,

I tried this and the problem still remains.

---------- Post updated at 10:37 AM ---------- Previous update was at 10:12 AM ----------

For me, a large number of files can lead to that message (not only on NFS...); the question is what quantity triggers it on AIX 4.3.3...

Hello again vbe,

I had a few questions I was hoping you might be able to help me with:

1) Below are the three instructions for increasing the data segment size.

export LDR_CNTRL=MAXDATA=0x10000000
start_process
unset LDR_CNTRL
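Read as three separate steps, a minimal sketch of that recipe would look like this (`ls .` merely stands in for the real start_process, and the MAXDATA value is the one from the article, not a tuned recommendation):

```shell
export LDR_CNTRL=MAXDATA=0x10000000   # request a larger process data segment
ls .                                  # placeholder for the command that needs it
unset LDR_CNTRL                       # don't leave it set for unrelated commands
```

Because LDR_CNTRL is an ordinary environment variable, the shorter per-command form `LDR_CNTRL=MAXDATA=0x10000000 ls .` scopes the setting to that single command and makes the unset step unnecessary.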

Am I correct that the second instruction, "start_process", is not a command to run, but is actually telling you to start whatever process it is you need the larger data segment for?

2) I ran a test: instead of NFS-mounting all of /directoryA on the AIX 4.3.3 machine, I NFS-mounted just a subdirectory, /directoryA/subdirectory1. I was able to go into the mounted subdirectory1 and successfully run "ls" and "make". This subdirectory only had a few files in it, which again makes me think that the number of files is the cause of the problem. Are there any other values (environment variables, etc.) that I might be able to change to allow "ls", and more importantly "make", to run successfully in a directory with many files in it?

Thanks again for your continued assistance.

I've hit this with AIX 5.1 (32-bit) as a client of RHEL 6.3 (64-bit)

It appears that you can play around as much as you like on the 32-bit side, but if the 64-bit server writes a new file when you have more than about 40 files in a directory, then the 32-bit side fails from then on until the number of files is reduced (by the 64-bit side, obviously) and then the client will work happily again.

It was suggested to us that the directory 'file' changes after a certain number of entries when edited on the 64-bit side and that the 32-bit side doesn't understand and goes off in a mad loop until eventually it exhausts memory (set by ulimit) and fails, returning control to the command line.

Our only way around this, whilst we still have the need, is to split up the files into subdirectories such that no one of them has more than the limit of about 40 entries (when counting, include any item: links, directories, pipe files, etc.).

It's not great, but that's what we had to do. An alternative was to mount the NFS share the other way round, but that relies on disk space being available on the other side, and on changing any other clients to pick up the replacement 32-bit server. You could have a 32-bit server just to serve NFS to all the others, but they are largely unsupported now.

Sorry it's not great news,
Robin
Liverpool/Blackburn
UK