CIO/DIO and JFS2 read ahead

Hi Guys,

I wonder, after enabling CIO/DIO at the filesystem level, and assuming that CIO/DIO bypasses the JFS2 read-ahead that is available when not using CIO/DIO, which parameters I can play with to tune/improve CIO in order to obtain similar performance for large sequential reads (backups or big scans). I believe that when using filesystem caching/buffering, read-ahead plays a huge performance role. So any advice on which parameters I should consider modifying in order to improve CIO/DIO performance for large sequential reads would be appreciated.

Thanks in Advance.

Harby.

AFAIK there is nothing in DIO/CIO that improves the throughput of large sequential reads the way JFS/JFS2 does with its read-ahead option. In fact, DIO/CIO may even degrade the performance of applications that spend most of their time reading data.
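
Just to illustrate what gets bypassed, a quick sketch (the tunable names are from memory, so verify them with ioo -a on your level; /backup_test is only a placeholder filesystem):

ioo -a | grep -i ahead
# shows j2_minPageReadAhead / j2_maxPageReadAhead (JFS2) and minpgahead / maxpgahead (JFS);
# these only help cached reads and have no effect once the filesystem is mounted with cio/dio
mount -o cio /backup_test     # CIO mount: no file caching, no read-ahead
mount /backup_test            # normal mount: sequential reads benefit from read-ahead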

Maybe have a look at this and you will find something useful on the topic:

Thanks, but I have read that document already. Sometimes it is better to hear about someone else's experiences, so I thought it would be worthwhile asking.

Further to this topic: so basically, for a warehouse database server with databases bigger than 500 GB, where a lot of large sequential reads are necessary, CIO won't add any benefit in terms of performance?

As Shockneck said, CIO/DIO is said to be better for random reads/writes, AFAIK.

You could check with vmstat -v whether any of the blocked-buffer counters are still climbing and tune them with the appropriate ioo parameters, as sketched below.
You can also check AIO usage, and whether it is running close to its limits, with "iostat -A" and with the shift+a option in nmon, which shows how many aioservers are currently in use.
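
Something along these lines; the ioo value is only an example, and whether you may change j2_nBufferPerPagerDevice directly depends on your AIX level:

vmstat -v | grep -i blocked
# "external pager filesystem I/Os blocked with no fsbuf" is the JFS2 counter;
# if it keeps growing between runs, the JFS2 buffers are too small, for example:
ioo -p -o j2_nBufferPerPagerDevice=1024   # example value only, -p keeps it across reboots
iostat -A 2 5                             # async I/O statistics every 2 seconds, 5 samples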

Edit:
Ah, I see we already had a previous discussion on this here:
http://www.unix.com/unix-advanced-expert-users/79602-j2_maxpagereadahead-j2_nbufferperpagerdevice-cio.html

Are you sure you need CIO? You still hadn't posted the "vmstat -v" output back then.

There is also the possibility of changing attributes of the FC adapters (like lg_term_dma, max_xfer_size and num_cmd_elems) and of the disks (like queue_depth or max_coalesce) with chdev, and reading them out with lsattr; see the sketch below. But before doing that, I would start by sorting things out with AIO (if appropriate) and with vmstat -v/iostat.
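
For example (fcs0 and hdisk4 are placeholders and the values are only examples; with -P the change only takes effect after the next reboot, or you have to take the device offline first):

lsattr -El fcs0   | egrep 'lg_term_dma|max_xfer_size|num_cmd_elems'
lsattr -El hdisk4 | egrep 'queue_depth|max_coalesce'
chdev -l fcs0   -P -a max_xfer_size=0x200000 -a num_cmd_elems=1024
chdev -l hdisk4 -P -a queue_depth=16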

pstat -a | grep aio        # add "-c" to grep to just count them

does the same - it displays (or, with "-c", counts) the asynchronous I/O servers in use. Note, though, that you have to subtract 1 from the count (the server process itself). Also note the correlation with the number of CPUs: the number you enter on the "chdev" command line for the "maxservers" attribute is the number of servers per CPU. On an 8-way system the command

chdev -l aio0 -P -a maxservers=10

will allow a maximum of 81 AIO processes (1 server process and 80 worker processes) to be started.
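
To see what is configured right now you can query the aio0 device (on AIX 6.1 and later the aio0 device is gone and, if I remember correctly, these settings became ioo tunables, so adjust accordingly):

lsattr -El aio0                 # shows minservers, maxservers, maxreqs, autoconfig, ...
lsattr -El aio0 -a maxservers   # just the maxservers attribute
pstat -a | grep -c aio          # AIO kprocs currently started (subtract 1 as described above)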

I hope this helps.

bakunin

pstat -a, like for example ps aux, only shows the number of aioservers that are currently started, not whether they are busy. With nmon's shift+a you can see how many are really busy at the moment; on a system with an application using AIO this number goes up and down. Often people also forget to configure maxreqs somewhat larger, so that the application does not get errors from hitting the maximum number of queued AIO requests (some "error 5" in Oracle, for example).
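
A sketch of how I would check and raise it (the value is only an example, and on AIX 6.1+ this knob lives in ioo instead of the aio0 device):

lsattr -El aio0 -a maxreqs        # current limit of queued AIO requests
chdev -l aio0 -P -a maxreqs=16384 # example value; -P means it takes effect after the next reboot
iostat -A 5                       # watch the maximum-requests columns against maxreqs over time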

Nevertheless, we would first have to find out whether the OP needs AIO at all, which he could and should check himself. Maybe he uses it already, no idea. Maybe he will give us some more info so we can help him.
In our environments I used CIO as a kind of last resort of tuning, when everything else with AIO and ioo tuning didn't help anymore.
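
If it does come to that, this is roughly how I would try it on a single filesystem first (/oradata is just a placeholder and the chfs options may differ on your level):

mount -o cio /oradata              # temporary, for testing only
chfs -a options=rw,cio /oradata    # make it permanent in /etc/filesystems
# with CIO the application has to do its own caching/read-ahead, so large sequential
# scans that relied on JFS2 read-ahead can actually get slower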