Directory size larger than file system size?

Hi,

We currently have an Oracle database running, and it is creating lots of processes whose entries in the /proc directory are 1000M in size. The size of the /proc directory now reads 26T. How can this be if the root file system is only 13GB?

I have seen this before, when an Oracle temp file was 64GB in size on a file system that was only 20GB, and the file system reported 7GB free at the time.

Can someone please shed some light on this.

Thanks,

Sparcman

read here:
procfs - Wikipedia, the free encyclopedia

Thanks for your reply DukeNuke2. If the proc file system is dynamically generated, will it still affect the size of the / file system? My / file system now reports 100% full. Do I need to reboot in order to clear down the /proc file system?

Also, this doesn't explain how the Oracle temp file could read 64GB when the 20GB file system it sat on reported 7GB free. Any ideas?

Thanks,

Ger.

The /proc filesystem size is not related in any way to the / file system size.

If / is full, then focus on files that are actually on that filesystem, not on other ones.

That is unlikely to have any effect on / being full.
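
For instance (a rough sketch; exact options differ between the Solaris and GNU userlands), you could limit the search for large files to the root filesystem itself, so that /proc and any other mount points are skipped:

# find / -xdev -type f -size +204800 -exec ls -l {} \;

Here -xdev keeps find from descending into other filesystems, and -size +204800 selects files larger than 100 MB (the count is in 512-byte blocks).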

No. If you check the output of mount, you'll see that /proc is treated as a separate mount point, and as such does not add to the usage of the root filesystem.

Also, the files in /proc only represent current processes, so the big files should vanish as soon as the associated process ends.
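
You can check this yourself; on Solaris the output will look something like the following (exact figures will vary):

# mount -p | grep proc
/proc - /proc proc - no rw
# df -h /proc
Filesystem             size   used  avail capacity  Mounted on
proc                     0K     0K     0K     0%    /proc

/proc is mounted with filesystem type proc and consumes no disk space at all.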

It could be that that was a sparse file.
Here is an example that creates a 100M file on a 10M filesystem (using the Linux loopback device):

# pwd
/tmp/sparse_test
# dd if=/dev/zero of=example.img bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0421919 s, 249 MB/s
# mkfs -t ext2 example.img
[...]
# mkdir example
# mount example.img example -oloop
# mount
[...]
/tmp/sparse_test/example.img on /tmp/sparse_test/example type ext2 (rw,loop=/dev/loop0)
# df -h
Filesystem            Size  Used Avail Use% Mounted on
[...]
/tmp/sparse_test/example.img
                      9.7M   92K  9.1M   1% /tmp/sparse_test/example
# cd example/
# dd if=/dev/zero of=sparse_file bs=1 count=0 seek=100M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.7319e-05 s, 0.0 kB/s
# ll -h
total 12K
drwx------ 2 root root  12K Dec  7 14:09 lost+found
-rw-r--r-- 1 root root 100M Dec  7 14:10 sparse_file
# df -h
Filesystem            Size  Used Avail Use% Mounted on
[...]
/tmp/sparse_test/example.img
                      9.7M   92K  9.1M   1% /tmp/sparse_test/example
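
A quick way to confirm that a file is sparse is to compare its apparent size (ls -l) with the blocks it actually occupies on disk (du); for the file created above you would see something like:

# ls -lh sparse_file
-rw-r--r-- 1 root root 100M Dec  7 14:10 sparse_file
# du -h sparse_file
0       sparse_file

ls reports the 100M apparent size, while du shows that no data blocks are allocated.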

Here is the equivalent example (a 100 MB sparse file on a <10 MB filesystem), but using Solaris and ZFS instead of the Linux loopback filesystem.

# zfs create -ps -V 10m rpool/volumes/vol1
# mkdir /tmp/sparse_test
# newfs /dev/zvol/dsk/rpool/volumes/vol1
newfs: construct a new file system /dev/zvol/rdsk/rpool/volumes/vol1: (y/n)? y
Warning: 4130 sector(s) in last cylinder unallocated
/dev/zvol/rdsk/rpool/volumes/vol1:    20446 sectors in 4 cylinders of 48 tracks, 128 sectors
    10.0MB in 1 cyl groups (14 c/g, 42.00MB/g, 20160 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32,
# mount /dev/zvol/dsk/rpool/volumes/vol1 /tmp/sparse_test
# mount -p | grep vol1
/dev/zvol/dsk/rpool/volumes/vol1 - /tmp/sparse_test ufs - no rw,intr,largefiles,logging,xattr,onerror=panic
# df -h /tmp/sparse_test
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/rpool/volumes/vol1
                       7.5M   1.0M   5.7M    16%    /tmp/sparse_test
# cd /tmp/sparse_test
# dd if=/dev/zero of=sparse_file bs=1 count=1 seek=104857599
1+0 records in
1+0 records out
# ls -lh
total 64
drwx------   2 root     root        8.0K Dec  8 13:05 lost+found
-rw-r--r--   1 root     root        100M Dec  8 13:05 sparse_file
# df -h /tmp/sparse_test
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/rpool/volumes/vol1
                       7.5M   1.0M   5.7M    16%    /tmp/sparse_test
# zfs list 
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
...
rpool/volumes                           2,60M  1,31G    19K  /rpool/volumes
rpool/volumes/vol1                      2,58M  1,31G  2,58M  -
# umount -f /tmp/sparse_test
# zfs destroy rpool/volumes/vol1

Note also that the Solaris equivalent of the Linux loopback filesystem is the loopback file driver (lofi), which could have been used here instead of a ZFS volume.
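
For completeness, a minimal lofi sketch (the paths here are illustrative) would look something like this:

# mkfile 10m /tmp/lofi_test.img
# lofiadm -a /tmp/lofi_test.img
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /mnt

The block device returned by lofiadm can then host a sparse file exactly as in the examples above.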

Thanks for your help guys. It's working fine now.