Problem with creating big files

Hi...
I have a very weird problem with my RedHat 4 Update 4 server...
Every time I create a file bigger than my physical memory, the server kills the process/session that creates the file, and in the "messages" file I see this error:
"Oct 21 15:22:22 optidev kernel: Out of Memory: Killed process 14300 (sshd)."
The problem only occurs when I create the file on an NFS file system; on a local file system it's fine...
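
In case it helps, this is how I spot it in the log (just a plain grep of the messages file):

grep -i "out of memory" /var/log/messages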

Any thoughts?

Thanks, and sorry for my poor English.
Eliraz.

  1. What program are you using to create the file?

  2. Does the NFS server impose limits on user file space? (A quick way to check is sketched at the end of this post.)

Also check if the filesystem exported from the NFS server can handle largefiles. You can do that by checking the filesystem like this:

mount | grep "/path_to_filesystem"

This will show the mount options. If largefiles isn't visible, there's your problem.
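
For question 2, a rough way to check from the server side (the user name and path below are only placeholders, and this assumes disk quotas are actually enabled on that filesystem):

# on the NFS server: show any quota limits for the user writing the file
quota -s username

# or report quotas for every user on the exported filesystem
repquota -s /path_to_filesystem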

  1. Basically I'm trying to create the file using sqlplus (it's an Oracle data file), but I have also tried "dd" (see the command at the end of this post) and it happened again...

  2. The NFS server does not have any limits and the file system is 100 GB; also, the process killing comes from the OS, not from the NFS server...

This is what I get:
nfserver4:/fs_db_optidev on /db/optidev type nfs (rw,retry=1000,addr=10.57.8.189)

but "largefiles" is an option that handle files larger than 2GB... my file is reaching 4 GB (the size of physical memory) and then crashes, so i don't think this is the problem...

You need to increase the swap size on the NFS server.

Reason: NFS copies files from the client into RAM and then to the file system on the server.

Now, if RAM is out of space and swap cannot hold any more either, the usual behavior is to kill the process trying to write to it.
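
A quick way to check where the server stands, and to add swap if it really is short (the swap file path and size are just examples, assuming a plain Linux box):

# check free RAM and swap on the server
free -m
swapon -s

# add a 2 GB swap file if needed
dd if=/dev/zero of=/swapfile bs=1M count=2048
mkswap /swapfile
swapon /swapfile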

~Sage

The NFS server is an EMC Celerra... I don't think that's the cause... this machine is built for serving large files over NFS...