du -sh command taking a long time to calculate the size of big files

Hi,

My Linux server has been taking a very long time to calculate the size of big directories.

  • I am accessing the server through SSH
  • Commands I am running:
# du -sh *
# du -sh * | sort -n | grep G

Please guide me to a fast way to find the big directories under the / partition.

Thanks

You are limited by the speed of your disk here. du must read the inode of each and every file to generate a summary.

I'm guessing you have an awful lot of files.

Do you have more than one partition? I'd suggest searching one partition at a time instead of the entire disk in one go.
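
To see what you're dealing with, something like this (a rough sketch; -T and -x/--exclude-type are GNU df options) lists each mounted filesystem with its type, and can hide the network mounts:

# Show each mounted filesystem with its type; NFS mounts will show up as nfs or nfs4
df -hT

# Or hide the NFS mounts and list only what is left
df -hT -x nfs -x nfs4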

There are only two partitions, / and /boot; the other directories are mounted over NFS.

Yes, / has many files. I have been waiting for 3 hours now and there is no output yet. :frowning:

You're sorting it. You're not going to see any output until it's 100% finished. Try leaving out the sort.
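
If you still want the sorted list at the end, one option (just a sketch; sort -h needs GNU coreutils) is to send the output to the screen and a file at the same time, then sort the saved file once du finishes:

# Output appears as each directory finishes, and is also saved to a file
du -sh /* 2>/dev/null | tee /tmp/du.out

# Sort the saved results by human-readable size afterwards
sort -h /tmp/du.out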

I might also try this:

for DIR in /*
do
        [ "$DIR" == "/proc" ] && continue # Not a real folder
        [ "$DIR" == "/sys" ] && continue # Not a real folder

        echo "Checking $DIR"
        du -hs "$DIR"
done

...that way, you can at least tell what directory it's freezing on and avoid bothering with the system pseudo-folders.

If it's freezing on a particular folder, try starting inside that folder and checking the size of its sub-contents. Trawling an NFS-mounted folder can be quite slow indeed. Is there any way you could log directly into the server that hosts them instead?
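
For example, if it turned out to be stuck on /var (purely a hypothetical directory here), you could narrow it down one level at a time:

# Drill into whichever directory du got stuck on, one level at a time
du -hs /var/*
du -hs /var/log/*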

Having nfs mounts directly off the root, e.g. /nfsdirectory, is a big-time no-no. In your case, du does a stat on every file under /, including files mounted from remote systems.

The remote connections are nowhere near as fast or reliable as the locally mounted disks.
df and du can hang for hours due to nfs slowness, remote server timeouts, and so on. This can also break the pwd command.

nfs is almost guaranteed to be your problem.
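
A quick way to see exactly which mount points are NFS (a sketch; findmnt comes with util-linux on most Linux distributions):

# List only the NFS mounts
findmnt -t nfs,nfs4

# Or the same thing with df
df -hT -t nfs -t nfs4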

I have noticed this situation on NFS mountpoints:

Make a directory on NFS with a couple of files in it.
Copy a big tar file or similar (a couple of GB, so it takes time) to that directory.

Run ls -lrt under truss (do the same with du to determine where it gets stuck).
I noticed that the ls command is sleeping on the lstat call (get file status).

But if you run it in a while true; do ls -lrt ...; done loop, it will be slow the first time and fast on every other iteration.
From the command line during the file copy, every ls executed will exhibit the same symptoms.

Can any of you experts explain whether there is some kind of issue in the NFS design for this kind of situation, where the total size in bytes keeps changing and lstat is called on NFS files?
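
On Linux, strace is the rough equivalent of truss; here is a small sketch (the path below is just a placeholder) that shows how long each call takes and makes the stall visible:

# -T prints the time spent in each system call; strace writes to stderr,
# so redirect it before filtering for the stat family of calls
strace -T ls -lrt /path/to/nfs/dir 2>&1 | grep -E 'l?stat|statx'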

Hey Jim,

You are right, but I can't remove the nfs share. :slight_smile: Is there any other way to see the huge files under the / directory?

---------- Post updated at 11:30 PM ---------- Previous update was at 11:17 PM ----------

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg00-lvol2
                      512G  485G     0 100% /

This is the local one, and I want to see what is large under the / directory.

You don't want to check the NFS? Oh, that's much easier.

du -hs /* -x

The -x prevents it from looking outside the partition it starts in.
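
Putting that together with the earlier sort suggestion, something along these lines (a sketch; --max-depth, sort -h and find's -size +1G are GNU options) stays on the local filesystem and puts the biggest entries last:

# Per-directory totals on the root filesystem only, sorted by size
du -xh --max-depth=1 / 2>/dev/null | sort -h

# Or look for individual large files (anything over 1 GB here) without crossing mount points
find / -xdev -type f -size +1G -exec ls -lh {} + 2>/dev/null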

Sounds like caching behavior. Once the NFS client has the file attributes, it keeps them cached for a while for faster recall.
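
If it is the NFS client's attribute cache, the relevant timeouts (acregmin, acregmax, actimeo) usually show up in the mount options; a quick way to check (a sketch):

# Show the mount options for each NFS mount, including the attribute-cache timeouts
nfsstat -m

# The same options are also visible in /proc/mounts
grep nfs /proc/mounts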