Veritas Cluster automatic fail-back option on Solaris

Hi - Please help me understand the Veritas Cluster failover capability.
We configured our Oracle database file system on Veritas Cluster File System, and it automatically fails over from node 1 to node 2.

Does the Veritas cluster software have any option to fail back from node 2 to node 1 automatically? We need an automatic fail-back option; right now we are failing back manually.

Solaris version: 5.10

Thanks in advance.

I've never worked with Veritas, but I have some experience with other cluster software.
To me, automatic fail-back sounds dangerous. When node 1 goes down, you most likely need some manual intervention to fix the cause of the failover. If node 1 came back up automatically without anyone checking it, there is a significant chance it would fail again for the same reason it initially failed. This would result in your database filesystem bouncing back and forth between the nodes...

Thanks Cero, that makes perfect sense. Do I have any option to set up, for instance, a fail-back after a certain time (maybe 30 or 40 days), or to execute a ping command and fail back if the response is positive? The only objective is to avoid human intervention in failing back the file system. Thanks.

Why? As already stated, this is probably dangerous. Just pinging something won't tell you how well it's working. Do you really want your failover node taking itself down, unsupervised, just because it thinks your original server might be okay?
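
If you absolutely must take the human out of the loop, VCS itself won't do this for you, but you could script the switch yourself and run it from cron after whatever delay you choose. This is only a rough sketch of the idea, not a recommendation; the group name oradb_sg and node name node1 are placeholders for your own configuration, and it inherits every risk discussed above:

```sh
#!/bin/sh
# Rough sketch of a scripted fail-back, run from cron at a time you
# trust (e.g. a monthly maintenance window). Assumes the VCS binaries
# in /opt/VRTSvcs/bin are on PATH. GROUP and PREFERRED are placeholders.

GROUP=oradb_sg
PREFERRED=node1

# Only proceed if the preferred node is a running cluster member.
hasys -state "$PREFERRED" | grep RUNNING > /dev/null || exit 1

# If the group is already online on the preferred node, nothing to do.
hagrp -state "$GROUP" -sys "$PREFERRED" | grep '|ONLINE|' > /dev/null && exit 0

# Switch the group back -- note this is itself a brief outage.
hagrp -switch "$GROUP" -to "$PREFERRED"
```

Even then, a RUNNING state only tells you the node rejoined the cluster, not that whatever originally broke has been fixed, which is exactly the objection above.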

The automatic option is Oracle RAC.

Other than that, don't use it.

VCS won't do an automatic failback. VCS looks at the cluster as active/active or active/passive; it doesn't care about node 1 versus node 2. If your service groups are online on one node and fail over to the passive node, that node becomes the primary. Your service groups will stay there until they are failed back to the original node, usually manually. One of the main problems when people configure clusters is that they look at the nodes as if service groups "should" run on one node or another. The optimal design is one where it does not matter which node is active. If it does matter, you lose the value of having VCS.

We've never done (nor advocated) automatic failback in VCS. A failback represents another outage, however brief it might be. If you have a preferred node in the cluster to run a given service group (app), you can initiate a group switch at a time that's convenient as opposed to the cluster just deciding to switch it over when the failed node returns to the cluster.
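
For reference, that switch is a one-liner once you've confirmed the original node is healthy. A minimal example, with oradb_sg and node1 standing in for your own group and node names:

```sh
# See where the service group is currently online
hagrp -state oradb_sg

# Switch it back to the preferred node during a quiet window
hagrp -switch oradb_sg -to node1
```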

Automatic failback = bad. Bad. BAD! :)