You're sorting it. You're not going to see any output until it's 100% finished. Try leaving out the sort.
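To illustrate (assuming the original pipeline piped `du` into `sort`, which I'm inferring from the question):

```shell
# If the original command was something like:
#   du -hs /* | sort -h
# then sort must buffer *all* of du's output before printing anything,
# so you see nothing until the slowest directory finishes.
# Dropping the sort streams each size as soon as it's measured:
du -hs /*
```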
I might also try this:
for DIR in /*
do
    [ "$DIR" = "/proc" ] && continue # pseudo-filesystem, not a real folder
    [ "$DIR" = "/sys" ] && continue  # pseudo-filesystem, not a real folder
    echo "Checking $DIR"
    du -hs "$DIR"
done
...that way, you can at least tell what directory it's freezing on and avoid bothering with the system pseudo-folders.
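If you'd rather not hard-code the exceptions, a variant sketch (assuming your `du` supports `-x`, which GNU and BSD `du` both do) skips anything that lives on a different filesystem than `/`, which covers the pseudo-filesystems and the NFS mounts in one go:

```shell
# -x tells du to stay on one filesystem: any directory that is a
# mountpoint of another filesystem (/proc, /sys, NFS mounts, ...)
# contributes nothing, so du never stats remote files.
for DIR in /*; do
    case "$DIR" in
        /proc|/sys|/dev) continue ;;  # skip pseudo-filesystems entirely
    esac
    echo "Checking $DIR"
    du -xhs "$DIR"
done
```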
If it's freezing on a particular folder, try starting inside that folder and checking the size of its sub-contents. Trawling an NFS-mounted folder can be quite slow indeed. Is there any way you could log into the server hosting it directly instead?
Having NFS mounts directly off the root (e.g., /nfsdirectory) is a big no-no. In your case, du does a stat on every file under /, including files mounted from remote systems.
The remote connections are nowhere near as fast or reliable as the locally mounted disks.
df and du can hang for hours due to NFS slowness, remote server timeouts, and so on. This can even break the pwd command.
I have noticed the following situation on NFS mountpoints:
1. Make a directory on NFS with a couple of files in it.
2. Copy a big tar file or similar (a couple of GB, so it takes a while) to that directory.
3. Run ls -lrt under truss (do the same with du to determine where it gets stuck).
I noticed that the ls command sleeps in the lstat call (get file status).
But if you run while true; do ls -lrt ...; done, it is slow the first time and fast on every later iteration.
From the command line during the file copy, every ls executed exhibits the same symptoms.
Can any of you experts explain whether there is some kind of issue in the NFS design in this kind of situation, where the total size in bytes is changing while lstat is called on the NFS files?
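For anyone wanting to reproduce the observation on Linux, a sketch using strace (the Linux counterpart of Solaris truss); `/path/to/nfs/dir` is a placeholder for your mountpoint:

```shell
# While a large copy into the directory is in flight, time repeated
# listings. The pattern described above (slow first, fast after) is
# consistent with the first run paying for cold stat round-trips to
# the NFS server and later runs being answered from the client cache.
for i in 1 2 3; do
    time ls -lrt /path/to/nfs/dir > /dev/null
done

# To see which syscall ls is sleeping in, trace only the stat family:
strace -e trace=lstat,newfstatat ls -lrt /path/to/nfs/dir > /dev/null
```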