HACMP and disaster recovery question

Hi Guys,

Is it possible to fail over an HACMP cluster in one datacentre via SRDF to a single node in another datacentre, or do I need a cluster there in any case? This is only meant as a worst-case scenario, and my company doesn't want to spend more money than absolutely necessary.

I know the better solution would be just to include the other side in the cluster, but this is not possible for various reasons.

Kind regards
zxmaus

As far as I know, SRDF is only a data replication facility, not a failover mechanism.

A (HA-)cluster makes *systems* highly available, specifically IP interfaces and applications. Usually data are *not* made highly available by cluster means, because data are shared between cluster nodes. Data are made highly available by mirroring, RAID-5, etc., but not by cluster means.

This means that data replication is not in the scope of HACMP in the first place, and SRDF is therefore not suitable as a substitute for cluster facilities.

Having said this: how about building a "semi-automatic failover" solution as a script, which does the following:

  • monitor the cluster's application IP to see that it is online
  • once it is no longer online:
    • roll forward the replication to the latest possible point in time, then
    • mount the (hopefully complete) data on your spare system
    • start the application on the spare system
    • initialize an unused interface with the cluster's IP address
    • start serving the app via that IP interface.
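The steps above could be sketched as a shell watchdog along the following lines. Everything here is a placeholder - the service IP, the spare interface, the SRDF device group, the application paths - and the actual replication and mount commands depend entirely on the site. With DRY_RUN=1 (the default) the script only prints what it would do:

```shell
#!/bin/sh
# "Clustering for the real poor": watch the cluster's service IP and,
# once it stops answering, take over on the spare node.
# All names below are hypothetical placeholders.

SERVICE_IP=${SERVICE_IP:-10.0.0.100}   # the cluster's application IP
SPARE_IF=${SPARE_IF:-en1}              # unused interface on the spare node
CHECK_INTERVAL=${CHECK_INTERVAL:-30}   # seconds between probes
DRY_RUN=${DRY_RUN:-1}                  # 1 = only print the actions

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

takeover() {
    # site-specific: roll the replication forward / split the SRDF pair
    run symrdf -g app_dg failover          # hypothetical device group name
    run mount /appdata                     # mount the (hopefully complete) data
    run /opt/app/bin/app_start             # start the application
    # alias the cluster's IP onto the spare interface and start serving
    run ifconfig "$SPARE_IF" alias "$SERVICE_IP" netmask 255.255.255.0
}

monitor() {
    # loop while the service IP still answers; on failure, fail over once
    while ping -c 3 "$SERVICE_IP" >/dev/null 2>&1; do
        sleep "$CHECK_INTERVAL"
    done
    takeover
}

# in real use you would start monitor; for a dry run, show the takeover steps:
DRY_RUN=1 takeover
```

The dry-run switch is there on purpose: with a semi-automatic solution like this you want to review every action before ever letting it touch a production IP.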

There will be all kinds of details to cover, some of which might even make this solution impossible - synchronisation problems, a procedure for handing the app back once the original cluster systems come online again, etc. Still, it is worth exploring whether such a setup is possible. It is a kind of "clustering for the real poor", so to say.

I hope this helps.

bakunin

Hi Bakunin,

many thanks for your reply. I was not thinking about an automated DR system at all, since the SAN team needs to fail over the SRDF in this case anyway - on request. I am rather interested in whether I could bring up the storage afterwards on the idle system in a non-clustered scenario (read: a normal single node), or whether the storage expects a cluster on the other site too in order to come up properly and let me manually mount the filesystems and bring up the databases. I cannot see any reason why it should not work, but as stated in another thread, I have no experience with HACMP at all yet - so better safe than sorry :slight_smile: Probably I just have to try it ...

Many thanks and kind regards
zxmaus

It is possible, but not out of the box, and an automated solution would be pretty complicated - and complexity in combination with High Availability is a bad mix. You'd need a lot of scripts to check the environment so that the standalone node integrates into the HACMP cluster in all normal and abnormal situations. Furthermore, you'd hardly get any support. So don't do that. Use a three-node cluster between your two data centres, or a two-node cluster in one.
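As for the standalone-node part of the question: the manual bring-up itself does not require HACMP on the DR side. Once the SAN team has failed over the SRDF pairs, the replicated volume groups can be imported and mounted with plain AIX LVM commands. A sketch - all VG, disk, logical volume, and path names here are hypothetical, and DRY_RUN (the default) only prints the actions instead of executing them:

```shell
#!/bin/sh
# Manual DR bring-up on a plain, non-clustered AIX node -- a sketch.
# Every name below (datavg, hdisk4, lv_data, /data, db_start) is hypothetical.

DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# After the SAN team has failed over the SRDF pairs:
run cfgmgr                          # discover the now-writable replica devices
run importvg -y datavg hdisk4       # import the VG from one of its disks
run varyonvg datavg                 # activate it -- no cluster software involved
run fsck -y /dev/lv_data            # the copy may need a log replay / fsck
run mount /data                     # mount the replicated filesystem
run /opt/db/bin/db_start            # bring up the databases manually
```

Whether the data on the replica is actually consistent at that point is a separate question (database crash recovery, replication mode, etc.) - the LVM side itself does not care that the source was a cluster.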