EXT3 Performance tuning

Hi all,

it has been a long time since I last posted something, but now I need your help again :frowning:

Currently I am managing a big NFS server for more than 200 clients; this server has to run 256 NFSDs. Because of this huge number of NFSDs there are thousands of small write accesses going down to the disk, which causes a high wait I/O :frowning:
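
For reference, roughly how this looks on our boxes and how I watch the wait I/O (the config path is distro-specific, so treat it as an example only):
----------------------------------------------------------------------
# NFSD thread count (e.g. /etc/sysconfig/nfs on Red Hat-style systems,
# /etc/default/nfs-kernel-server on Debian-style systems):
RPCNFSDCOUNT=256

# Watching the wait I/O and the write pattern:
iostat -x 5        # %iowait, await, small avgrq-sz = many tiny writes
nfsstat -s         # server-side NFS call statistics
----------------------------------------------------------------------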

Now I hope to be able to tune the ext3 filesystem in such a way that the wait I/O decreases; I don't even need to get rid of it completely...

At first I planned to move the journal to a different device, but because this is a shared disk within an HA failover configuration, both systems have to have the journal on the shared disk.
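
(For completeness, this is roughly what I had in mind; the device names are only examples, and as said it does not work for us because the journal device would also have to be shared:)
----------------------------------------------------------------------
# Move the ext3 journal to a dedicated external device (example names):
umount /dev/sdb1
tune2fs -O ^has_journal /dev/sdb1          # drop the internal journal
mke2fs -O journal_dev /dev/sdc1            # format a dedicated journal device
tune2fs -j -J device=/dev/sdc1 /dev/sdb1   # attach the external journal
----------------------------------------------------------------------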
The next idea was to tune the fs via the commit interval in the fstab file, but during my tests I saw no performance increase... the best performance I got was with the default setting, which commits every 5 s. The kind of entry I was testing looks like this:
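
(Device and mount point below are placeholders; noatime is included only as an illustration of another option that reduces writes:)
----------------------------------------------------------------------
# Example /etc/fstab entry:
# commit=N flushes data and metadata every N seconds (ext3 default is 5),
# noatime avoids an extra inode update on every read access.
/dev/sdb1  /export/data  ext3  defaults,noatime,commit=30  0  2
----------------------------------------------------------------------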

So I don't have a clue how to tune the fs further to decrease the wait I/O. Do you have any ideas?

Thanks in advance

Malcom

What kind of storage are you running the fs on?
/Peter

Hi Peter,

it is a RAID 5 array connected to both systems; it contains important configuration data and 3rd-party applications, including their data.

The systems are running Heartbeat, and on a failover the shared disk is taken over by the other system.

/malcom

I am not quite sure whether we are on the right track without further metrics posted. Even a default ext3 config is unlikely to be the bottleneck behind this wait I/O issue. What about the RAID stripe unit size? What about the NFS mount options (noac, attribute caching, rsize, wsize, etc.)? Especially noac. I am quoting the related section from the nfs man page; an example mount entry follows the quote:
----------------------------------------------------------------------
noac Disable all forms of attribute caching entirely. This
extracts a server performance penalty but it allows two
different NFS clients to get reasonable good results
when both clients are actively writing to common
filesystem on the server.
-------------------------------------------------------------------------
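
A rough example of what I mean on the client side (server name, export path and transfer sizes are placeholders; whether noac helps or hurts depends on how your clients share files):
----------------------------------------------------------------------
# Example client-side /etc/fstab entry:
# rsize/wsize set the NFS transfer size per request,
# noac disables attribute caching as described in the excerpt above.
server:/export/data  /mnt/data  nfs  rw,hard,intr,rsize=32768,wsize=32768,noac  0  0
----------------------------------------------------------------------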