Steps to configure Oracle 10g on VCS

Does anyone have any docs or ideas on how to bring an Oracle mount point under VCS control, how to set up the startup scripts, etc.? The mount points use VxVM. I know the very basic VCS commands (failover, freezing a service group, and so on), but I don't have good in-depth knowledge. It's urgent. Please help. Thanks.

Configuring a cluster without detailed knowledge will turn "HA" into "HAHA", which is German for "LOL".
If you want to do it anyway:

  1. Open your cluster GUI, select your service group, select the Resources tab, and right-click in the window.
  2. Select Add Resource, give it a name, and choose Mount as the resource type.
  3. Fill in at least the bold (required) attribute values.
  4. Deselect Critical, select Enabled, and save. The new resource will be displayed.
  5. Right-click the resource to test bringing it online and offline.
  6. Link its dependencies.
  7. Pray.

Can you provide the command-line steps? I would be more confident using the command line. :)
And can you comment on which points from this blog are relevant? http://ssuen.wordpress.com/

Hi guys, can you help me out? Thanks.

Some general ideas.

Well, firstly, before you cluster anything, make sure your application can run on its own without any problems, i.e. make sure it isn't buggy and doesn't crash often. If it crashes or is otherwise problematic, never cluster it; resolve all the problems first.

Before clustering, test out the sequence manually:

ifconfig ce0:1 plumb
ifconfig ce0:1 xxx.xxx.xxx.xxx netmask 255.255.255.0 up <<<--- usually the service group IP is also the IP the Oracle listener uses

vxdg import <diskgroup>
vxvol -g <diskgroup> startall
mount /dev/vx/dsk/<diskgroup>/<volume> /oracle
mount /dev/vx/dsk/<diskgroup>/<volume> /oracle/u01
mount /dev/vx/dsk/<diskgroup>/<volume> /oracle/u02
mount /dev/vx/dsk/<diskgroup>/<volume> /oracle/u03
<oracle database starts with dbstart>
<listener starts with lsnrctl start>

Stop the application and unmount in reverse order:
<listener stops with lsnrctl stop>
<database stops with dbshut>
umount /oracle/u03
umount /oracle/u02
umount /oracle/u01
umount /oracle
vxvol -g <diskgroup> stopall
vxdg deport <diskgroup>
ifconfig ce0:1 unplumb

Then on the other host, try to bring everything up manually using the same sequence. Make sure everything works without problems before you cluster anything.

I have fairly limited knowledge. Some years back I migrated a 4-node cluster into a 2-node cluster, merging a total of 38 Oracle instances. I did it with a lot of pain, much trial and error, and many change windows (so painful), mostly with vi, editing the main.cf file. You can use the VCS command line, or, more easily, the GUI. If you want to use the command line because you don't have a console, then start checking support.veritas.com. If you have a lot of instances to add, I feel it is easier to spend some time studying main.cf and how you can modify it to suit your needs.
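For illustration, a service group for one instance in main.cf might look roughly like the sketch below. All names here (ora_sg, ora_dg_res, ora_mnt, the oradg disk group, the oravol volume, and the node names) are placeholders, not from any real configuration; check the Bundled Agents Reference for the exact attributes your VCS version supports.

```
group ora_sg (
    SystemList = { nodeA = 0, nodeB = 1 }
    AutoStartList = { nodeA }
    )

    DiskGroup ora_dg_res (
        DiskGroup = oradg
        )

    Mount ora_mnt (
        MountPoint = "/oracle"
        BlockDevice = "/dev/vx/dsk/oradg/oravol"
        FSType = vxfs
        FsckOpt = "-y"
        )

    IP ora_ip (
        Device = ce0
        Address = "xxx.xxx.xxx.xxx"
        NetMask = "255.255.255.0"
        )

    ora_mnt requires ora_dg_res
    ora_ip requires ora_mnt
```

The dependency lines at the bottom encode the manual sequence above: the mount cannot come online before its disk group is imported, and here the IP is brought up last.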

Then run hacf -verify /etc/VRTSvcs/conf/config to check your main.cf (as long as the file verifies without errors after modification, it's fine).

This matters because every time VCS starts, it reads main.cf and automatically converts it into main.cmd, which contains the equivalent VCS commands.

But if you are still interested in the command line, here are some ideas.

How to add a new diskgroup service group to VCS.
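A rough command-line sketch of that, run as root on a live cluster. The group and resource names (ora_sg, ora_dg_res, ora_mnt), the oradg/oravol names, and the node names are placeholders for this example, matching nothing in particular:

```
haconf -makerw                           # open the cluster configuration for writing

hagrp -add ora_sg                        # new service group
hagrp -modify ora_sg SystemList nodeA 0 nodeB 1
hagrp -modify ora_sg AutoStartList nodeA

hares -add ora_dg_res DiskGroup ora_sg   # disk group resource
hares -modify ora_dg_res DiskGroup oradg

hares -add ora_mnt Mount ora_sg          # mount resource
hares -modify ora_mnt MountPoint "/oracle"
hares -modify ora_mnt BlockDevice "/dev/vx/dsk/oradg/oravol"
hares -modify ora_mnt FSType vxfs
hares -modify ora_mnt FsckOpt %-y        # leading % escapes the dash for hares

hares -link ora_mnt ora_dg_res           # Mount depends on DiskGroup

hares -modify ora_dg_res Enabled 1
hares -modify ora_mnt Enabled 1

haconf -dump -makero                     # save and close the configuration

hagrp -online ora_sg -sys nodeA          # test bringing the group online
```

As in the GUI procedure, leave Critical off while testing, then make the resources critical once online/offline works on both nodes.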

best of luck

I recommend using ASM instead of Veritas:

  1. ASM is free with Enterprise Edition.
  2. It's the way Oracle is going. There may come a time when ASM is required for managing disks.

Thanks for your inputs guys.