AIX 6.1 and Oracle 11

Hi,

We need to upgrade from AIX 5.3 / Oracle 9 to AIX 6.1 / Oracle 11.
Our main interest is the vmo and disk setup.

On AIX 5.3 we set lru_file_repage to 0.
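
For reference, we make that setting persistent with vmo, roughly like this:

    # set lru_file_repage to 0 now and for the next boot (-p also writes it to /etc/tunables/nextboot)
    vmo -p -o lru_file_repage=0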

We split the disk layout (EMC CX4) into three volume groups (a rough command sketch follows the list):

  • one for database files (one filesystem over two LUNs/hdiskpower devices, block size 4096, mounted with CIO)
  • one for redo logs (two filesystems, each over one LUN/hdiskpower device, block size 512, mounted with CIO)
  • one for archive logs (one filesystem over two LUNs/hdiskpower devices, block size 4096, no CIO mount)
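
Roughly, the filesystems are created and mounted like this (names and sizes are only examples):

    # database files: JFS2 with 4k block size, mounted with concurrent I/O
    crfs -v jfs2 -g datavg -m /oradata -a size=200G -a agblksize=4096
    mount -o cio /oradata

    # redo logs: JFS2 with 512-byte block size to match the redo block size, also CIO
    crfs -v jfs2 -g redovg -m /oraredo1 -a size=8G -a agblksize=512
    mount -o cio /oraredo1

    # archive logs: 4k block size, normal (cached) mount
    crfs -v jfs2 -g archvg -m /oraarch -a size=100G -a agblksize=4096
    mount /oraarch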

aio setup is as follows:

  • minservers 100, maxservers 500 per CPU, maxreqs 32k (command sketch below)
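
On 5.3 we set this on the aio0 pseudo-device, roughly like this (maxservers is a per-CPU value there):

    # show the current settings
    lsattr -El aio0
    # change them in the ODM (-P); takes effect after a reboot
    chdev -l aio0 -a minservers=100 -a maxservers=500 -a maxreqs=32768 -P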

So what should we do on AIX 6.1 (EMC DMX)?

  • Several docs say AIX 6.1 is database-ready out of the box. Does aio still need to be changed, and should we still mount with CIO?
  • Is this disk layout correct? Do we need to change anything, e.g. increase the number of LUNs under the database files?

Regards

From 5.3 to 6.1 not that much has changed (in this regard), so everything said about tuning AIX 5.3 is valid for AIX 6.1 too. You might want to consult this link from our list of useful links here.

You certainly will need to configure asynchronous I/O. I don't know Oracle 11 all that well, but I suppose you need to configure both legacy aio and posix_aio.

<the original part dealing with Asynchronous I/O deleted because it was outdated information>

Regarding the volume group layout: I'm not sure this makes sense at all. You don't need to put logs and application data into several volume groups; in fact it would be easier to administer if you had only one VG. You could (and should!) still put logs and data on as many disks as possible simultaneously to enhance performance, but that doesn't mean they should go into different VGs. VGs are logical containers to group logical volumes (roughly: file systems) together because they belong together logically.
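
Just to illustrate what I mean (purely hypothetical names and sizes): one VG for the whole instance, spread over several disks, holding separate filesystems for data, redo and archive logs:

    # one VG spanning several hdiskpower devices, 256 MB PP size
    mkvg -y ora01vg -s 256 hdiskpower0 hdiskpower1 hdiskpower2 hdiskpower3

    # separate filesystems inside the same VG
    crfs -v jfs2 -g ora01vg -m /oradata -a size=200G -a agblksize=4096
    crfs -v jfs2 -g ora01vg -m /oraredo1 -a size=8G -a agblksize=512
    crfs -v jfs2 -g ora01vg -m /oraarch -a size=100G -a agblksize=4096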

There is a case to be made for putting the archive logs in a separate VG: if you regularly use the data on another server (for backup purposes, for instance) you might consider putting them into a separate volume group to handle this. You could then umount the filesystem with the archive logs, export the VG, import it on another server, back up the data and reimport/remount it on the DB server.
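
The procedure would look roughly like this (hypothetical VG and disk names, which may differ between the two servers):

    # on the DB server: release the archive log VG
    umount /oraarch
    varyoffvg archvg
    exportvg archvg

    # on the backup server: take the VG over, back up, release it again
    importvg -y archvg hdiskpower7
    mount /oraarch
    # ... run the backup here ...
    umount /oraarch
    varyoffvg archvg
    exportvg archvg

    # back on the DB server
    importvg -y archvg hdiskpower5
    mount /oraarch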

If you don't do such a thing, then the archive logs should become part of the DB VG as well.

Regarding optimization for disk I/O: as I have already written, optimizing (physical) disk I/O is mostly about distributing the read/write operations over as many physical disks as possible. Further, you should only optimize in one place, because optimizing in several places simultaneously is not likely to enhance anything but runs the (small) risk of having adverse effects (see here). If you still need disk I/O optimization after having employed all possible options at the SAN level, you could - very carefully! - try to optimize further at the LVM level, but it is more likely than not that the SAN will give you all the performance you need without having to resort to other means.

I hope this helps.

bakunin

Thanks for the reply.

Let's say we have 16 disks on the storage and we need one filesystem. Which setup is better performance-wise:

  1. We make one RAID 10 array, create one LUN, present it to AIX as one hdiskpower disk (EMC storage) and make one filesystem on that disk
    or
  2. We make 4 RAID 10 groups (4 disks each), create one LUN on each RAID group, present them to AIX as 4 hdiskpower disks, and then create one filesystem over those 4 hdiskpowers.

In my opinion the second solution is better because AIX sees 4 disks and can read from and write to them in parallel.

Regards

I just found out that my description of configuring asynchronous I/O was simply wrong, as in AIX 6.1 there is no aio pseudo-device any more. Please note that I have deleted the respective part of my earlier post. My apologies for the mistake.
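
As far as I can tell, the AIO subsystem is now part of the kernel and the old minservers/maxservers/maxreqs settings have become ioo tunables, with servers started on demand, so the defaults are usually fine. If you want to look at them (or, carefully, change them) it goes roughly like this:

    # list the AIO-related tunables, legacy and POSIX (-F also shows restricted tunables)
    ioo -aF | grep aio

    # example of raising one persistently; only do this if monitoring shows it is needed
    # (these may be restricted tunables, in which case ioo will warn and ask for confirmation)
    ioo -p -o aio_maxservers=100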

bakunin


Let's take this to the extreme: you could create one LUN and present it to AIX, or you could create 16 LUNs (one for each disk) and use the AIX LVM to stripe over them. In the first case you let the SAN device do the distribution of the I/O load, in the latter the AIX system (namely the LVM) will do this. As it is, a SAN device is a specialized piece of hardware specifically designed to deal with these tasks. I'd bet that the SAN can do that better than AIX, so I'd stick with the SAN device.

The smallest amount of disk space the LVM deals with is the Physical Partition (PP). If you have a database configured you will probably have one or more multi-gigabyte filesystems, and accordingly your disks (and therefore the PP size) will be relatively big, 256 MB being a typical value. "Striping" in 256 MB chunks is probably not all that parallel, considering the size of disk clusters and the size of data packets in the SCSI protocol. In fact your I/O will probably look like this at the device driver level: some (many?) requests to one disk, then some (many?) requests to the next disk, and so on. Not "one request to the first disk, one request to the next disk, ..." at all!
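
If you want to see what that looks like on a given system, something like this shows the PP size of the VG and how the logical partitions of an LV are actually laid out across the disks (names are just examples):

    # PP SIZE is listed in the VG characteristics
    lsvg datavg

    # logical-to-physical partition map: which PP on which PV backs each LP
    lslv -m lv_data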

So the answer is: you will probably be better off leaving the I/O optimization to the SAN device altogether and using the LVM only to logically group filesystems on a per-application (or probably per-Oracle-instance) basis.

I hope this helps.

bakunin

I have to disagree with bakunin ... if you have 4 LUNs coming from the SAN, then AIX will access the disks in parallel, provided you have large tables or many threads / queries running in parallel and the I/O is spread across the VG (I have DBs with tables spanning many hundreds of GB). If you only have one LUN, access will always be serial.
IBM DB performance specialists still recommend poor man's striping for Sybase and Oracle DBs ... even for raw devices, if you have the option. If you have only one LUN, you obviously don't.
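
As far as I understand it, poor man's striping is basically the LVM's maximum inter-disk allocation policy: the logical volume is spread round-robin in PP-sized chunks across all the LUNs. Roughly, with hypothetical names:

    # spread the LV over all 4 hdiskpower devices (-e x = maximum inter-disk allocation policy)
    mklv -y lv_data -t jfs2 -e x datavg 800 hdiskpower0 hdiskpower1 hdiskpower2 hdiskpower3

    # put the filesystem on the pre-created LV
    crfs -v jfs2 -d lv_data -m /oradata -a agblksize=4096
    mount -o cio /oradata
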
Kind regards
zxmaus

My concern with Oracle is always that Oracle writes two sets of logs (origlog and mirrorlog), and these need to be held on separate devices to avoid them being subject to logical corruption. The easiest way to achieve this is to put them on separate LUNs allocated to separate RAID arrays.
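
In practice that can be as simple as pinning the two redo filesystems to different hdiskpower devices that come from different RAID groups, along these lines (purely illustrative names):

    # each redo filesystem on its own LUN from a different RAID group
    mklv -y lv_redo_a -t jfs2 redovg 32 hdiskpower2
    mklv -y lv_redo_b -t jfs2 redovg 32 hdiskpower3
    crfs -v jfs2 -d lv_redo_a -m /oraredo1 -a agblksize=512
    crfs -v jfs2 -d lv_redo_b -m /oraredo2 -a agblksize=512
    mount -o cio /oraredo1
    mount -o cio /oraredo2

The origlog members then go to /oraredo1 and the mirrorlog members to /oraredo2.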