Cluster Suite IP-Aliasing

Hi,

is it normal that the IP alias (the service IP) can't be seen with ifconfig -a, e.g. as eth0:1?

The IP is on the node: you can ping it and open ports on it.

look at this:

[root@uscltest3 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:50:56:95:13:2c brd ff:ff:ff:ff:ff:ff
    inet 10.200.218.82/24 brd 10.200.218.255 scope global eth0
    inet 10.200.218.85/24 scope global secondary eth0
    inet6 fe80::250:56ff:fe95:132c/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:50:56:95:2e:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.82/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::250:56ff:fe95:2ef5/64 scope link 
       valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop 
    link/sit 0.0.0.0 brd 0.0.0.0
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
    inet6 fe80::200:ff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever
[root@uscltest3 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:50:56:95:13:2C  
          inet addr:10.200.218.82  Bcast:10.200.218.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe95:132c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:40389 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21471 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:12202049 (11.6 MiB)  TX bytes:9060670 (8.6 MiB)
          Base address:0x2000 Memory:d8920000-d8940000 

eth1      Link encap:Ethernet  HWaddr 00:50:56:95:2E:F5  
          inet addr:192.168.0.82  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe95:2ef5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:213 errors:0 dropped:0 overruns:0 frame:0
          TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:20831 (20.3 KiB)  TX bytes:10360 (10.1 KiB)
          Base address:0x2040 Memory:d8940000-d8960000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:5927 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5927 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:7709874 (7.3 MiB)  TX bytes:7709874 (7.3 MiB)

sit0      Link encap:IPv6-in-IPv4  
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:10680 (10.4 KiB)

Not really. How did you set up your virtual interface?

with the luci web interface

I defined an IP resource and added it to the service; I think there were only the IP and netmask to configure, nothing more

I'll post my cluster.conf tomorrow

do you have a Red Hat cluster with a service IP where you can see the IP with ifconfig?

---------- Post updated 06-05-10 at 09:55 ---------- Previous update was 05-05-10 at 19:33 ----------

I worked with AIX HACMP and Sun Cluster previously, and I must say the Red Hat Cluster Suite seems very buggy

here is the cluster.conf:

[root@uscltest2 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="uscltest" config_version="38" name="uscltest">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="uscltest3" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="scsi-fench" node="uscltest3"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="uscltest2" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="scsi-fench" node="uscltest2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="uscltest1" nodeid="3" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="scsi-fench" node="uscltest1"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_scsi" name="scsi-fench"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="domain3" nofailback="0" ordered="1" restricted="1">
                                <failoverdomainnode name="uscltest3" priority="1"/>
                                <failoverdomainnode name="uscltest2" priority="2"/>
                                <failoverdomainnode name="uscltest1" priority="3"/>
                        </failoverdomain>
                        <failoverdomain name="domain2" ordered="1" restricted="1">
                                <failoverdomainnode name="uscltest1" priority="1"/>
                                <failoverdomainnode name="uscltest2" priority="1"/>
                                <failoverdomainnode name="uscltest3" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="domain1" ordered="1" restricted="1">
                                <failoverdomainnode name="uscltest1" priority="1"/>
                                <failoverdomainnode name="uscltest3" priority="3"/>
                                <failoverdomainnode name="uscltest2" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="10.200.218.85" monitor_link="1"/>
                        <lvm lv_name="test3lv" name="test3lvm" vg_name="test3vg"/>
                        <fs device="/dev/test3vg/test3lv" force_fsck="0" force_unmount="1" fsid="921" fstype="ext3" mountpoint="/cluster/test3" name="test3fs" self_fence="0"/>
                        <script file="/cluster/test3/httpd/bin/apachectl" name="test3_http"/>
                </resources>
                <service autostart="1" domain="domain3" exclusive="0" name="test3" recovery="relocate">
                        <ip ref="10.200.218.85">
                                <lvm ref="test3lvm">
                                        <fs ref="test3fs">
                                                <script ref="test3_http"/>
                                        </fs>
                                </lvm>
                        </ip>
                </service>
        </rm>
</cluster>

Short answer: this is perfectly normal. The IP address was added to the interface without a differentiating label, so the kernel treats it as a plain secondary address rather than an eth0:1-style alias.

Believe it or not, use of ifconfig is discouraged these days precisely because of situations like this: it only displays one address per interface (plus labeled aliases) and silently hides unlabeled secondary addresses. :P It would probably have been removed years ago if not for legacy support.

Anyway, if you want to understand a bit more about what is happening, take a look at the iproute2 documentation. It explains that although all alias addresses are secondary addresses, the inverse is not necessarily true.

Then try the following:

[root@uscltest2 ~]# ip addr show dev eth0
[root@uscltest2 ~]# ip addr add 10.200.218.86/24 broadcast 10.200.218.255 dev eth0
[root@uscltest2 ~]# ip addr
[root@uscltest2 ~]# ifconfig
[root@uscltest2 ~]# ip addr add 10.200.218.87/24 broadcast 10.200.218.255 label eth0:alias dev eth0
[root@uscltest2 ~]# ip addr
[root@uscltest2 ~]# ifconfig

You will notice that after the first add, the output of "ip addr" shows the address but the output of "ifconfig" does not. After you add a label the second time, "ip addr" still shows the address, and "ifconfig" now also shows the new label. (It only says "eth0:alias" because we told it to; the part after the colon could be almost anything, though the label has to start with the device name.)

After this little demo, reset things to normal:

[root@uscltest2 ~]# ip addr del 10.200.218.86/24 dev eth0
[root@uscltest2 ~]# ip addr del 10.200.218.87/24 dev eth0
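By the way, since the cluster-managed address is just a plain secondary address, you can still find it from a script without ifconfig. A minimal sketch, parsing a captured snippet of "ip addr" output (on a live node you would pipe in the output of "ip -4 addr show dev eth0" instead; newer iproute2 versions even accept a "secondary" filter flag on "ip addr show"):

```shell
# List secondary (service) addresses even though ifconfig hides them.
# $ip_output holds a captured snippet; on a real node, replace the
# printf with:  ip -4 addr show dev eth0
ip_output='    inet 10.200.218.82/24 brd 10.200.218.255 scope global eth0
    inet 10.200.218.85/24 scope global secondary eth0'

printf '%s\n' "$ip_output" | awk '$1 == "inet" && /secondary/ {print $2}'
# prints: 10.200.218.85/24
```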

you are right jjinno, thank you for this hint

I wonder why they don't add a label when applying the alias from the cluster suite; I guess I'm not the only one who is confused by that :)