File size limitation in Linux

Hi friends,
I tried to take a backup of my PC using the tar command, but it ended with an error:

tar: /home/backup/back.tar.gz: Cannot write: No space left on device
tar: Error is not recoverable: exiting now

But I checked the disk space and there is enough space available.

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     276535241 155700455 106783868  60% /
/dev/mapper/VolGroup00-LogVol01
                     199146641 122482178  66545705  65% /home
/dev/sda1               118523     11323    101178  11% /boot
tmpfs                   916268         0    916268   0% /dev/shm

And I also checked the ulimit output:

 ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) 102400000
pending signals                 (-i) 14288
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14288
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


Can anyone please help me overcome this issue?

Thanks

Siva

So how big is the file you are trying to write? ulimit shows a 5 GB file size limit.

I tried to create a 20 GB file.

Not 100m * 512 = 50g?

Yes, I misread. That should be 50 GB, of course.
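For the record, the arithmetic: depending on the shell, ulimit -f counts in 512-byte or 1024-byte blocks, so

102400000 * 512 bytes = 52,428,800,000 bytes, roughly 49 GiB (double that for 1024-byte blocks)

Either way the limit sits far above 20 GB, so a 20 GB file should not trip it. If the limit ever were the problem, it could be raised for the current shell, up to the hard limit, with:

ulimit -f unlimited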

Is it possible that there is a hung process holding a file that was deleted? If so, the space won't be reclaimed until no processes are pointing to deleted files.

If you have lsof, you might be able to find a file that is still open for a process that is trying to exit.
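If lsof is installed, something along these lines should expose them; on Linux, lsof flags deleted-but-still-open files, and +L1 lists files whose link count has dropped below one:

lsof +L1
lsof /home | grep deleted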

But wouldn't df show it?
Over 60 GB are free!
I would rather go for an fsck...
Another idea: disk quota. Check with

quota -v
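And for the fsck route: only run a repairing fsck on an unmounted filesystem, but a read-only check is reasonably safe (expect some false alarms while it is mounted). With the device from the df output above, something like:

fsck -n /dev/mapper/VolGroup00-LogVol01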

Please tell us the exact command you are using.

Could be fun if you're trying to tar up the tar file you're creating...
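GNU tar usually notices that case and skips the archive with "file is the archive; not dumped", but it is cleaner to exclude the output location explicitly, or to write the archive to a different filesystem. Roughly, with paths adjusted to your layout:

tar --exclude=/home/backup -czf /home/backup/back.tar.gz /home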

Consider: sparse files.

A sparse file that shows 10 MB of used space may occupy vastly more space in the copy or in the tar file, because the holes are read back as real zero-filled blocks. Those things are the bane of backups.

I am not saying this applies here, but it should be considered when a backup of nn GB will not fit in a backup file of nn GB plus a tiny amount.

Sparse files have "holes"; this page has nice diagrams:

Sparse file - Wikipedia, the free encyclopedia
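You can see the effect yourself. A rough demo with GNU dd, du and tar (the file names are just examples):

dd if=/dev/zero of=/tmp/sparse.img bs=1 count=0 seek=10G
ls -lh /tmp/sparse.img     # apparent size: 10G
du -h /tmp/sparse.img      # blocks actually allocated: next to nothing
tar -czSf /tmp/sparse.tar.gz -C /tmp sparse.img

The -S (--sparse) option tells GNU tar to detect the holes instead of archiving 10 GB of literal zeros.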

Also note that you can mount file systems in such a way as to "obscure" an underlying group of files. Example:
a directory tree with 2 GB total in the whole thing: /path/to/confusion
If you mount a file system on /path/to/confusion, all of the files are still there, but they are no longer visible to some tools. The discrepancy between du and df results is one symptom.

This may produce the same weird results being discussed. Again, you may want to consider it. Look in fstab for confirmation.
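A quick way to test both the deleted-file and the mounted-over theories, more or less:

df /home
du -shx /home        # -x keeps du on this one filesystem
mount
grep home /etc/fstab

If du adds up to much less than what df reports as used, something invisible, either deleted-but-open files or files hidden under a mount point, is holding the difference.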

Hi friends,

I tried that command, but it didn't show any output.
Will you please tell me how to solve this?

A good start would be to tell us the exact command you ran, as already asked.