IPMP on Private interconnects

I have an Oracle database running in a Solaris 10 cluster with two private interconnects used for communication. Is there any way to tie these two interconnects together using IPMP for redundancy? I've made several attempts with no luck so far and was wondering if anyone had any ideas.

the interconnect IS redundant... that's why there are TWO private interconnects.


What i'm trying to accomplish is described below:

IPMP group provides highly available NIC for the cluster private
interconnect traffic. IPMP offers availability without any performance overhead.
Oracle RAC can be configured to leverage one or more than one NIC for its private network to scale the private network traffic by using active-active IPMP group configuration.

where is this from? I've never used an IPMP group for the cluster private interconnect... only for the public LAN.

What issue did you encounter? I set up IPMP on the private interconnects of our Oracle RAC cluster (test environment) and I believe it worked, since I didn't hear any complaints from our DBAs.

ok, so it's not a Solaris cluster, it's Oracle RAC, and you're trying to set up an IPMP group to use as the interconnect between the RAC nodes...
As Mack1982 already asked, where exactly are you stuck? The paper you quoted gives examples of how to set up IPMP...

The problem I'm having is that I've set up IPMP on the private interconnects, and when I test the setup by pulling one of the Ethernet cables the database stays up, but when I pull the second cable the database loses communication.

after you plugged the first one back in, I guess... post your configuration...

e1000g0 primary public interface setup with ipmp sc_ipmp0
e1000g1 secondary public interface setup with ipmp sc_ipmp0

e1000g2 private interconnect on 180 subnet
e1000g3 private interconnect on 180 subnet
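For reference, a public-side group like sc_ipmp0 on Solaris 10 is usually built from /etc/hostname.* files along these lines (a sketch only -- the hostnames "pubhost", "pubhost-test", and "pubhost2-test" are hypothetical placeholders, not values taken from this thread, and each would need an entry in /etc/hosts):

```shell
# /etc/hostname.e1000g0 -- primary public interface:
# data address plus a deprecated, non-failover test address for probe-based detection
pubhost netmask + broadcast + group sc_ipmp0 up \
addif pubhost-test deprecated -failover netmask + broadcast + up

# /etc/hostname.e1000g1 -- standby member of the same group, test address only
pubhost2-test deprecated -failover netmask + broadcast + group sc_ipmp0 deprecated -failover standby up
```

With link-based failure detection the test addresses can be dropped and only the group membership is needed.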

come on... you have to post the config files, error messages, and so on... if you don't give ALL the information, how can we help?

This is the output from ifconfig -a for the private interconnects that I want to implement IPMP on:

nge0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 180.x.x.x netmask ffffff80 broadcast 180.x.x.x

nge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 180.x.x.x netmask ffffff80 broadcast 180.x.x.x

and this output should tell us what?

This output is from one of my database servers, the one I want to implement IPMP on. The two NICs are the private interconnect between the servers, and I want to use IPMP for redundancy. During testing I've seen that if I disconnect the port that houses nge0, the database stays up and running, but if I disconnect the port that houses nge1, the database craps out. What I'd like is to set up IPMP so that if nge1 goes down, nge0 immediately takes over the load.
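A minimal sketch of what an active-active IPMP group over nge0 and nge1 could look like on Solaris 10 with link-based failure detection. The group name "priv_ipmp0" and the hostnames "priv1"/"priv2" are hypothetical, not taken from this thread, and both would need entries in /etc/hosts on the 180.x.x.x subnet:

```shell
# /etc/hostname.nge0 -- first private interconnect, carries a data address
priv1 netmask + broadcast + group priv_ipmp0 up

# /etc/hostname.nge1 -- second interconnect, also active (active-active group)
priv2 netmask + broadcast + group priv_ipmp0 up
```

If either link fails, in.mpathd should migrate that interface's address to the surviving group member. For probe-based detection you would instead add deprecated -failover test addresses, as in the public sc_ipmp0 setup.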

I used this tutorial (from c0t0d0s0.org) to implement IPMP (see the probe method) for our Oracle database systems. It did not work the way we expected, so we implemented link aggregation instead (which also requires some coordination with your network people).
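For comparison, the link-aggregation route mentioned here would look roughly like this on Solaris 10. The aggregation key (1) and the address/netmask are arbitrary examples, and the switch ports must be configured for aggregation/LACP by the network team:

```shell
# the NICs must not be plumbed before they can join an aggregation
ifconfig nge0 unplumb
ifconfig nge1 unplumb

# create aggregation key 1 over both links (Solaris 10 dladm syntax)
dladm create-aggr -d nge0 -d nge1 1

# plumb and address the aggregated interface (example address only)
ifconfig aggr1 plumb 180.0.0.1 netmask 255.255.255.128 up

# verify the aggregation and its member ports
dladm show-aggr
```

Unlike IPMP, an aggregation presents a single interface to IP and can also spread outbound load across both links.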

Good luck! :)

Okay, it looks as if I was going down the wrong path after all. The issue appears to be tied to Sun Cluster rather than to Oracle RAC or to IPMP on those two private interfaces. For testing purposes on node A, we pulled nge0 and the system (Oracle and Solaris) stayed up and running; when we plugged nge0 back in and unplugged nge1, we got the same result. On the second node in the cluster, node B, we ran the same test and found that pulling nge1 caused the node to evacuate. So there appears to be a configuration issue with Sun Cluster on node B, and the private interconnects are working as they should.

Hi mack1982,

Can you please explain how to achieve a fully redundant private network with 2 dedicated switches (so that neither the ports, nor the cables, nor the switches become a single point of failure)?

We connect eth2 from both nodes to switch1 and eth3 from both nodes to switch2, so there are two completely redundant paths. How should we proceed from here? We configure IPMP on these two interfaces, but it does not look correct.
Should there be a crossover or a straight cable between switch1 and switch2?
Should the IP addresses assigned to the 4 ports (eth2 and eth3 on both nodes) belong to the same LAN or to different LANs? i.e. all 4 from 192.168.2.x, or eth2 from 192.168.2.x and eth3 from 192.168.3.x? If 2 different LANs, will IPMP work across them? If 1 LAN, then we would need a cable between the switches to make it one network, or else the 4 IPs can't ping/see each other. As you can see, I am confused, and it would be great if you could help me understand this.

many thanks.

  • Connect eth2 from both nodes to switch 1
  • Connect eth3 from both nodes to switch 2

Now you have two paths available

  • Enter the IP address of eth2 in the /etc/hosts file
    example:
    10.10.10.10 server-priv1

  • Edit /etc/hostname.eth2 as follows:
    server-priv1 netmask + broadcast + group ipmp0 up

  • Edit /etc/hostname.eth3 as follows:
    group ipmp0 up

Reboot the server.
Note: Do not assign an IP address to the standby interface.
Note: This assumes you are using Solaris 10.
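Before pulling any cables, it may be worth confirming that both interfaces actually joined the group after the reboot. A quick check, assuming the group name ipmp0 from the steps above:

```shell
# each interface's flags line should show "groupname ipmp0"
ifconfig eth2
ifconfig eth3

# watch for in.mpathd failure/repair notices while you run the failover tests
tail -f /var/adm/messages
```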

use the following command to test ipmp via CLI

#if_mpadm -d <interface>
#if_mpadm -r <interface>

  • -d -> detach (offline) the interface; its addresses fail over to the other group member
  • -r -> reattach the interface and fail the addresses back
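A typical test sequence with if_mpadm, assuming the ipmp0 group built above (if_mpadm -d offlines the interface so its addresses fail over; -r reattaches it):

```shell
# force a failover away from eth2 (addresses move to eth3)
if_mpadm -d eth2

# confirm eth3 picked up the failed-over address
ifconfig -a

# reattach eth2 and fail back
if_mpadm -r eth2
```

This exercises the same failover path as pulling a cable, but without leaving the server rack.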