[solaris] TCP clnt_max_conns testing/tuning

Hi

I am looking to tweak the rpcmod:clnt_max_conns tunable to take advantage of extra bandwidth. I am running

iperf -c <server_ip> -t 1800 -i 10

to check the throughput before and after changing the value in /etc/system,
but I am not seeing any throughput improvement.
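
For reference, the /etc/system entry takes the form below (the value 8 is only a placeholder; the box has to be rebooted for the change to take effect):

set rpcmod:clnt_max_conns = 8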

How should I be testing this? Any advice would be great.
Thanks

I changed the thread title, because this appears to be a Solaris-only question.

Anyway, are you having NFS performance issues? Please describe what the symptoms are.

Otherwise, tweaking this parameter is not going to do much perceptible good; it just wastes resources, IMO. It often looks like a great idea to raise a tunable kernel parameter from some low number to a higher one, but these kinds of ad hoc changes can actually slow things down.

Does iostat show large asvc_t times on the NFS mounts? Also, be sure you did NOT mount an NFS share directly under /, because if the NFS server has problems, commands like pwd and the underlying system calls may hang, since they have to traverse /.
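
One way to isolate the NFS rows, since client mounts show up in iostat -xn with server:/path device names (this is just a sketch, and it drops the header line):

iostat -xn 5 | grep ':/'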

If this all sounds wrong, then you need to explain your problem fully.

Suggested reading:

http://docs.oracle.com/cd/E19683-01/806-7009/chapter3-26/index.html

Note the performance caveat at the bottom of the page.

Hi

This tuning parameter was handed to me to implement as we have moved to 10GbE. Our environment uses LACP, and the thought was to raise the value in order to take advantage of the extra bandwidth and ensure that streams get spread across all links. However, I cannot verify any performance improvement using just iperf, and I am thinking that maybe I should be measuring this in some other way.
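
For what it is worth, my understanding is that LACP hashes each flow onto a single port, so a lone iperf stream can only ever fill one link. I could run parallel streams and watch the per-port counters while the test runs, along these lines (the stream count is arbitrary, and the dladm syntax may differ between Solaris releases):

iperf -c <server_ip> -P 8 -t 1800 -i 10
dladm show-aggr -s -i 5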

I have an M4000 with six 10GbE NICs in a link aggregation, and we have clnt_max_conns=1. If you have the correct drivers you normally do not need to tweak this. It is meant for a really fast system connected to lots of slower or very remote boxes (with lots of latency) to enable multiplexing; it handles the disparity in local response times, somewhat like multipathing.
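
If you want to confirm the value the running kernel is actually using, something like this should work once rpcmod is loaded (I am going from memory on the exact invocation):

echo "clnt_max_conns/D" | mdb -k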

Try loading and then measuring:

Maybe the NFS traffic load across your systems is not high enough to push traffic onto the new connection(s). Or your code base is really well written. Unlikely, I admit; ours is awful.

So, artificially increase the NFS traffic by writing some threaded code that reads (or writes) many files at once (see the sketch below). Be sure to do something to defeat local file caching, or what you will really be measuring is not the NFS/TCP path. This will not fly on a production box. You should see svc_t waits of 50 ms+ (as a guess) if you are really saturating the connection/driver.
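
A rough sketch of what I mean, in Python (the thread count, block size, and file paths are all placeholders; point it at files on the NFS mount, and use files much larger than client RAM, or a forcedirectio mount, so you are not just re-reading the local page cache):

#!/usr/bin/env python
# Threaded NFS load generator sketch: N worker threads each read a list
# of files sequentially in large blocks.
import os
import sys
import threading

BLOCK = 1024 * 1024      # 1 MB reads
NTHREADS = 16            # arbitrary; raise it until the links stay busy

def reader(paths):
    for path in paths:
        fd = os.open(path, os.O_RDONLY)
        try:
            while True:
                if not os.read(fd, BLOCK):
                    break
        finally:
            os.close(fd)

def main():
    # usage: nfsload.py <file on NFS mount> [<more files> ...]
    files = sys.argv[1:]
    if not files:
        sys.exit("give me a list of files on the NFS mount")
    threads = []
    for i in range(NTHREADS):
        # each thread walks the whole list, starting at a different file
        start = i % len(files)
        t = threading.Thread(target=reader, args=(files[start:] + files[:start],))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

if __name__ == "__main__":
    main()

Run it against the mount while iostat runs in another window, so you can see whether the asvc_t/%b numbers move when you change clnt_max_conns.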

IMO, what you want to look at is iostat -xnmz 1 10 (svc_t times, %b, etc.) for the NFS mounts. iperf is also relevant, BTW; it is a valid tool.

If the before/after comparison shows no impact on this kind of test, you are wasting your time with the change.