HACMP: set preferred adapter for service IP

Hi,

Let's say we have two boot interfaces, en0 and en1,

and two resource groups with two service IPs, sip1 and sip2, plus one persistent IP, pers1.
Both persistent and service IPs are applied as IP aliases.

When I start the cluster and bring the resource groups up, it looks like this:

en0: sip1 and sip2

en1: pers1

But I want sip1 and sip2 to be on different adapters if possible. Is there some kind of preferred adapter configuration for HACMP service IPs?
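
For reference, I'm reading the placement above from plain ifconfig output, where the aliases show up as additional inet lines on each interface:

# ifconfig en0
# ifconfig en1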

cheers funksen

Starting with HACMP 5.1 you can configure distribution policies for the placement of Service Labels. The default from then on is Anti-Collocation, which distributes the Service Labels across all non-service IP labels to spread the load. So I would first check which distribution policy your cluster is using. If your Service Label is configured to anything but Anti-Collocation, you just need to change it back to the default.
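
To illustrate, with Anti-Collocation in effect I would expect the two Service Labels in your example to end up on different adapters, e.g. (which label lands on which adapter is up to the cluster):

en0: sip1
en1: sip2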

Hi Shockneck, thank you for the info, but where can I configure the distribution policy?

Perhaps the distribution policy includes the persistent IP too?

The corresponding SMIT panel is in the HACMP Extended RGs Configuration menu - Configure Resource Distribution Preferences. With 5.4 the fastpath is "cm_change_show_service_ip_distribution_preference_select". Furthermore, cllsnw -c and cltopinfo -w display the policy if it is not set to the default.
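
For example, on 5.4 you can jump straight to the panel with the fastpath and then verify the setting on the command line:

# smitty cm_change_show_service_ip_distribution_preference_select
# cllsnw -c
# cltopinfo -w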

The distribution policy includes the persistent IP, but by default the cluster's main focus should be the load distribution of the Service Labels. I.e. if the policy is set to the default (Anti-Collocation) but your cluster behaves as if "Anti-Collocation with Persistent" were configured, this is not what one would expect. Such a problem may be caused by the cluster's topology. In that case please post the output of
# cltopinfo -w
and
# netstat -rn

The policy is set to Anti-Collocation, thank you for this hint!

I've found one mistake in the configuration; perhaps that's the problem. I changed the boot IP addresses a few weeks ago, which worked fine. But I didn't change the "communication path to node" in the node configuration screen in SMIT; it still shows the old boot IP. I don't know the purpose and the impact of this setting; maybe it's just for rsh access. The SMIT help says "This path will be taken to initiate communication with the node."
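
If you want to look at the stored value without SMIT, my understanding is that it ends up in the HACMPnode ODM class; something like this should show it (the COMMUNICATION_PATH object name is an assumption on my part):

# odmget -q "object = COMMUNICATION_PATH" HACMPnode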

Cluster communication and clrsh are both working; I have run several takeover tests.

I'll change the communication path during the next downtime of the system; then I can tell more.

cheers
funksen