Advice regarding filesystems handling large numbers of files

Hi All,

I have CentOS installed. I work with a truly huge number of files, and some of them are also huge in size. A single directory can hold at least 1 to 2 million files, and some files are several gigabytes each, e.g. 10 GB or 15 GB.

The drive uses the ext3 filesystem. Recently, that drive crashed, and I am not sure why. Before the crash, the disk took a long time to respond whenever I worked on it and ran programs; even find and du -h took a long time to produce results.
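For what it's worth, below is a rough sketch of how such a scan can be timed (the path is just a placeholder). os.scandir streams directory entries one at a time instead of building a multi-million-entry list in memory, so it is about the cheapest way to walk a directory of this size:

```python
import os
import time

def scan(path):
    """Count files and bytes, streaming entries with os.scandir
    rather than materializing one huge directory listing."""
    count = total = 0
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_file(follow_symlinks=False):
                count += 1
                total += entry.stat(follow_symlinks=False).st_size
    return count, total

start = time.monotonic()
count, total = scan("/data/bigdir")  # placeholder path
elapsed = time.monotonic() - start
print(f"{count} files, {total / 1e9:.1f} GB, scanned in {elapsed:.1f}s")
```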

I would kindly request your advice: why might the disk drive have crashed?

Is it because:

  1. Is it that the ext3 filesystem is not suited to such a large number of files?

  2. Are there simply too many files, and should I have spread them across several directories to reduce the load on the hard disk? (See the sketch at the end of this post.)

  3. I've read that the XFS filesystem is well suited to large numbers of files. Should I use XFS on my new hard drive?

  4. Or is it simply age? The hard disk is around 3 years old.

Otherwise the machine is quite powerful, with 50 GB of RAM and a 16-core processor.
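For clarity, here is a rough sketch of the directory layout I have in mind for question 2: hash each file name and use leading characters of the digest as nested subdirectories, so no single directory grows into the millions. The paths and the fan-out below are made up for illustration.

```python
import hashlib
import os
import shutil

def shard_path(root, filename, depth=2, width=2):
    """Map a file name to a hashed subdirectory, e.g. root/ab/cd/filename.

    depth=2 and width=2 give 256 * 256 = 65,536 buckets, so even
    2 million files average only ~30 entries per directory.
    """
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(depth)]
    return os.path.join(root, *parts, filename)

def store(src, root="/data/sharded"):  # placeholder root directory
    """Move a file into its hashed bucket, creating directories as needed."""
    dst = shard_path(root, os.path.basename(src))
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src, dst)
    return dst
```

For example, store("/data/incoming/report.bin") would land the file somewhere like /data/sharded/ab/cd/report.bin, and lookups stay fast because shard_path recomputes the bucket from the name alone.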

ext3 is fine only if you are not running anything production-critical. For serious use I recommend XFS, because I have had good experience with that filesystem.

I'd also recommend XFS. Just be very careful: make SURE your system is up to date, and if you're building your filesystem(s) on top of LVM, make doggone sure your kernel's LVM implementation supports filesystem barriers.

XFS kind of EXPECTS its barrier requests to work, not to be silently ignored the way LVM used to do.

Just Google "Linux file system barriers".

(Actually, it's the Linux device-mapper underneath LVM that used to silently ignore barrier requests; given how long Linux has been around, that was fixed only fairly recently...)
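If you want a quick sanity check, something along these lines lists XFS mounts from /proc/mounts and greps the kernel log for the old barrier warning. The warning string is roughly what older kernels printed when XFS fell back after a failed barrier write; the exact wording varies by kernel version, so treat this as a heuristic only:

```python
import subprocess

# Older kernels logged a line containing this when XFS gave up on
# barriers; exact wording varies by version (heuristic only).
BARRIER_WARNING = "Disabling barriers"

def xfs_mounts():
    """Yield (device, mountpoint) pairs for mounted XFS filesystems."""
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if fstype == "xfs":
                yield device, mountpoint

def barrier_warnings():
    """Return kernel log lines that suggest barriers were dropped."""
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return [l for l in log.splitlines() if BARRIER_WARNING in l]

if __name__ == "__main__":
    for device, mountpoint in xfs_mounts():
        print(f"XFS mount: {device} on {mountpoint}")
    for line in barrier_warnings():
        print("barrier warning:", line)
```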
