Fastest way to count a large number of files in subdirectories

Hi, we want to count all the files in a directory, and this directory contains many folders, so counting takes a long time. It has already been running for a few minutes and is still not done. The command we use to count is find . -type f | wc -l

Just wondering, is there any faster command to count the total number of files across all these folders?

Try running this command; it may be useful:

ls -lR . 

Hi, I tried using ls -lR . but the count doesn't seem right (a recursive long listing includes directory headers, "total" lines, and blank separator lines, so counting its lines overcounts).

Anyway, we managed to find a way to do it using Perl. Thanks.

How did you manage it using Perl?

Hi,
Why do you need to count the files in the directory?
Maybe the evil witch told you to?
Is it not enough to know that there are a lot, maybe too many?
You can run your command "find . -type f | wc -l"
in the background.
Maybe use "df -i"; it is very fast. I know it is not the same thing (it reports used inodes for the whole filesystem, not just regular files under one directory).
Maybe your command "find . -type f | wc -l" is searching a filesystem mounted over NFS or another network and the network is broken; of course you would then see errors. Maybe you redirected the errors?
You can look at the output of
"find . -type f"
Maybe the output buffer is too small.
"find . -type f > new.log"
and then
wc -l new.log
or view new.log
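Putting those suggestions together, a rough sketch (new.log comes from the post above; find.err is just a placeholder name for an error log):

nohup find . -type f > new.log 2> find.err &

Then, once it has finished:

wc -l new.log
cat find.err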

Faster than find, but somewhat probabilistic (filenames that happen to end in one of the filtered characters get dropped, and names containing newlines skew the count)... then again, all good algorithms are...

ls -RF | sed -e '/^$/d' -e '/.*[/@:=|>]$/d' | wc -l
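For anyone unfamiliar with the markers: ls -F appends / to directories, @ to symlinks, = to sockets, | to FIFOs, and > to doors, and each directory header in ls -R output ends with :. The sed filter deletes blank lines and everything ending in one of those characters, so roughly one line per regular file remains (executables, marked with *, are still counted).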

I don't believe a better solution than the find command is available for your requirement, unless some super expert comes along to help you.

I would suggest you try to understand why find takes so long, and fix that.

I agree. The "find" command is much faster than the "ls" command because it does not sort the results.
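If you want to check that yourself, a quick hedged comparison, run from the directory in question:

time find . -type f | wc -l
time ls -lR . | wc -l

(In bash, time covers the whole pipeline; a plain POSIX shell would time only the find or ls part, which dominates the run time anyway.)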

It would certainly help to know how many files there are and whether the command is appreciably faster when issued a second time.

Some packages for specific filesystems (like vxfs) offer commands which are faster than even "find".

What operating system and version do you have, and what type of filesystem do you use (please be very specific)?

Hey guys, I have done some testing on our test server. Indeed, the find command is faster than the Perl script. Initially, I thought it was the other way around until I actually tested it.


Here are the test results for 127807 files inside 25 folders.

  • find . -type f | wc -l (1 second)
  • Perl script using a recursive loop (4 seconds)
  • ls -RF | sed -e '/^$/d' -e '/.*[/@:=|>]$/d' | wc -l (10 seconds)
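For anyone who wants to run a similar comparison, here is a rough sketch that builds a small test tree (the directory name and counts are placeholders, much smaller than the 127807-file test above):

mkdir testdir && cd testdir
for d in $(seq 1 25); do
    mkdir folder$d
    for f in $(seq 1 100); do : > folder$d/file$f; done
done
time find . -type f | wc -l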

Thank you for posting those results