find command loops in a Sun Solaris 8 cluster

It has happened twice in the past 3 months. The find command is run as a standard part of the Unix accounting script "dodisk", which searches directories to work out how much disk space each user has used.

On a particular cluster of 6 servers with several file systems, the find command has twice used all the available CPU and run for hours instead of minutes.

Searching for this on the Sun website is not as easy as it may appear.

Just wondering if anyone out there has come across this. For a while it works fine on these servers, but when it happened 2 months ago we stopped it from running again - although it had run without problems for the 2 months prior to that occurrence.
After a fresh install, the first time it ran (a few days ago, 2 months later) it immediately consumed all the CPU and ran for hours - it had to be stopped and removed again.

I suspect it is because these file systems are shared, and there are thousands of files in them, so the search just takes forever to run.
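One way to test that suspicion might be to run the same kind of traversal dodisk performs, one file system at a time, and see which mount is the hot spot. A rough sketch (the mount-point list here is a placeholder - substitute the cluster's real file systems):

```shell
#!/bin/sh
# Count files and time a dodisk-style traversal per file system.
# "/tmp" is only an example path; replace with the suspect mounts.
for fs in /tmp; do
    echo "== $fs =="
    # -mount keeps find on this one file system, so a shared/global
    # mount reached underneath it is not walked as well
    time find "$fs" -mount -type f -print | wc -l
done
```

If one shared file system accounts for most of the file count and the elapsed time, that would point at the mount rather than at find itself.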

Thanks

Can you post a copy of your script?

It is actually a known Sun Cluster problem: the server that hosts the Global File System uses more CPU and therefore carries more overhead.

We have the same problem with 2 servers in a Sun cluster: the one that hosts the global file system uses too much CPU. It is directly related to the number of files in the file system; when there are too many, the search consumes too much CPU.

The script is /usr/lib/acct/dodisk
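For what it's worth, dodisk is itself a plain shell script, so the exact traversal it runs can be read directly. A quick (hypothetical) check, assuming the standard path mentioned above:

```shell
# List the lines where dodisk invokes find, diskusg or acctdusg
# (exact contents vary by Solaris release; the path is from the thread).
if [ -r /usr/lib/acct/dodisk ]; then
    grep -n 'find\|diskusg\|acctdusg' /usr/lib/acct/dodisk
else
    echo "/usr/lib/acct/dodisk not present on this machine"
fi
```

Posting the relevant lines from that output would show exactly which directories the script is walking on your cluster.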

Thanks