Find command takes too long to complete

Hi,

Below is my find command

find /opt/app/websphere -name myfolder -perm -600 | wc -l

At times it even takes 20 minutes to complete.

My OS is: SunOS mypc 5.10 Generic_150400-09 sun4v sparc SUNW,T5440

Hi,

I'm assuming that as it completes it returns a number - what is it?

Regards

Dave

Yes, it checks whether the folder meets the permissions provided or not. All I need is to check the folder's permissions, so can we tweak my query to give quicker results? The current one takes a very long time.

If you know where the directory is to the point that its permissions are important, why are you running a "find" command?

And if the directory permissions keep changing, you need to fix what's changing the permissions and not paper over the problem with a hack.
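Building on that point: if the directory's full path is already known, a single check of that one directory answers the permission question with no tree walk at all. A minimal sketch, assuming a hypothetical full path (the `check_perm` helper name is mine; `-prune` stops find from descending, so only the named directory itself is tested against `-perm -600`):

```shell
# check_perm DIR: succeed if DIR itself has at least owner read+write
# (the -perm -600 predicate), testing only that one inode instead of
# walking the whole tree. -prune prevents descent into DIR.
check_perm() {
    [ -n "$(find "$1" -prune -perm -600 2>/dev/null)" ]
}

# Hypothetical full path -- substitute the directory's real location.
if check_perm /opt/app/websphere/myfolder; then
    echo "owner rw set"
else
    echo "owner rw not set (or path does not exist)"
fi
```

This runs in constant time regardless of how many files live under /opt/app/websphere.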

There is likely no way to significantly speed this up.
For analysis and perhaps a little improvement, please give the following information:

df /opt/app/websphere
zonename
/usr/sbin/prtconf | head
grep autoup /etc/system

Several sets of output from

iostat -sndxz 2

while the find is running would be good, too.

Output of iostat

You may greatly increase speed with:

find /opt/app/websphere -type d -name myfolder -perm -600 | wc -l

Some points:

If "myfolder" is a directory, try limiting the search with -type d

I suspect the websphere directory has loads of files and subdirectories. Some filesystems, like ufs, lose performance as the number of entries in a directory grows.
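The effect of the -type d suggestion can be seen on a throwaway tree (the helper and the names inside it are made up for illustration; without -type d a plain file that happens to be named myfolder is counted too):

```shell
# Demonstrate -type d on a throwaway tree containing one directory
# and one plain file, both named "myfolder".
demo_type_d() {
    tree=$(mktemp -d)
    mkdir -p "$tree/a/myfolder"                   # a directory named myfolder
    mkdir -p "$tree/b" && : > "$tree/b/myfolder"  # a plain file named myfolder
    all=$(find "$tree" -name myfolder | wc -l | tr -d ' ')
    dirs=$(find "$tree" -type d -name myfolder | wc -l | tr -d ' ')
    echo "all=$all dirs=$dirs"                    # prints all=2 dirs=1
    rm -rf "$tree"
}
demo_type_d
```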

The only other possibility I can think of is your inode cache is horribly small.

echo ufs_ninode/D |mdb -k

Run as root in the global zone, this shows what the ufs inode cache is set to.
I cannot recommend a good value offhand, but if it is less than 129797, which is the default with maxusers set to 2048, then it needs help.
For the inode hit rate, see if the sar -g column %ufs_ipf shows several non-zero entries over a 2-hour period. If so, bump ufs_ninode by 50%.

# sar -g

SunOS sun-m4k-02 5.10 Generic_144488-14 sun4u    11/10/2014

00:00:01  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
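That scan for "several non-zero %ufs_ipf entries" can be automated. A sketch that reads `sar -g` text on stdin (the `count_ipf` helper name is mine; %ufs_ipf counts ufs inodes reclaimed off the free list that still had pages attached, a hint that the inode cache is too small):

```shell
# count_ipf: count sample lines whose last column (%ufs_ipf) is
# non-zero. Header lines are skipped automatically because their
# last field is not numeric.
count_ipf() {
    awk '$NF ~ /^[0-9][0-9.]*$/ && $NF + 0 > 0 { n++ } END { print n + 0 }'
}
```

Usage would be `sar -g | count_ipf`; several non-zero samples in the period would argue for raising ufs_ninode as described above.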

Of all of these I vote for the "too many files" in a directory problem.

Is that with the find actually running? Not much IO going on...

OK, so what does a few sets of output from "mpstat 1" show?

And if you have root, what does "echo ::memstat | mdb -k" show?

---------- Post updated at 05:33 PM ---------- Previous update was at 05:27 PM ----------

FWIW, I don't think using the "-type d" argument to find is going to help at all as "find" is going to have to do a stat() call on every entry in the directory tree anyway. That's just filtering the output.

I was thinking the problem was caused by having to wade through all the stat() calls on every file in the directory tree, with disk IOs being the dominant performance problem. But the iostat output doesn't seem to show that.

A simple

find /opt/app/websphere | wc -l

would be informative.
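That census can be wrapped up portably (the helper name is mine):

```shell
# count_entries DIR: total number of files and directories under DIR,
# including DIR itself -- the same census as `find DIR | wc -l` above.
count_entries() {
    find "$1" | wc -l | tr -d ' '
}
```

Running `count_entries /opt/app/websphere` would show whether the tree really holds enough entries to explain a minutes-long walk.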

Me too, but I'd like to offer another possibility: a filesystem (or several filesystems) mounted with concurrent I/O. This bypasses OS caching completely and, while it speeds up database operations with concurrent writer processes, it makes random (non-concurrent) I/O awfully slow. Check the mount options for the filesystems involved to find out if this is the case.
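One way to scan for such mounts, as a sketch: forcedirectio is the Solaris UFS direct-I/O mount option, and cio is the analogous AIX JFS2 option (both named here as examples; adjust the pattern for your platform). The filter reads mount-table text on stdin so it stays portable:

```shell
# directio_mounts: print mount-table lines whose options include
# direct or concurrent I/O flags that bypass the filesystem cache.
directio_mounts() {
    grep -E 'forcedirectio|[(,[:space:]/]cio([),[:space:]/]|$)'
}
```

On Solaris, `mount -v | directio_mounts` would list any filesystems mounted this way.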

I hope this helps.

bakunin