Bonding IEEE 802.3ad Dynamic link aggregation: Bond showing less than desired throughput

Hi All,
I have configured an IEEE 802.3ad Dynamic link aggregation bond named bond0 with 4 slaves (25 Gb/s each) on CentOS 6.8. The issue I am facing is that the bond throughput is only 50 Gb/s, not 100 Gb/s. Below are the configuration files:

DEVICE=bond0
IPADDR=xx.xx.xx.xx
NETMASK=yyy.yyy.yyy.yyy
GATEWAY=zz.zz.zz.zz
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=1"
MTU=9000
IPV6INIT=yes
IPV6ADDR=xxxx:xxxx:xxxx:xxxx:xxxx
#IPV6_DEFAULTGW=yyyy:yyyy:yyyy:yyyy
NM_CONTROLLED=no

TYPE=Ethernet
BOOTPROTO=static
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
HWADDR=9C:DC:71:47:3C:90
UUID=6e157026-ea38-4299-a183-68159ed86236

TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
HWADDR=9C:DC:71:47:3C:91
UUID=11d180a3-e481-4372-83a3-8fec06fb1425

TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth2
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
HWADDR=9C:DC:71:47:9B:60
UUID=7b9812ac-9926-417a-9825-6c1f3f285117

TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth3
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
HWADDR=9C:DC:71:47:9B:61
UUID=5b0d4d31-0c7e-4fce-b1ab-87111b4f7942

Also, below is the output of /proc/net/bonding/bond0, which shows the number of ports as only 2, not 4:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 9
        Number of ports: 2
        Actor Key: 1
        Partner Key: 65
        Partner Mac Address: 00:d7:8f:7f:5d:8b

Slave Interface: eth0
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 9c:dc:71:47:3c:90
Aggregator ID: 9
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 9c:dc:71:47:3c:91
Aggregator ID: 10
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 9c:dc:71:47:9b:60
Aggregator ID: 9
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 9c:dc:71:47:9b:61
Aggregator ID: 10
Slave queue ID: 0

Also, while bringing the bond up I am getting the error below:

# ifup bond0
Device eth1 has different MAC address than expected, ignoring.
Unable to start slave device ifcfg-eth1 for master bond0.
Device eth2 has different MAC address than expected, ignoring.
Unable to start slave device ifcfg-eth2 for master bond0.
Device eth3 has different MAC address than expected, ignoring.
Unable to start slave device ifcfg-eth3 for master bond0.
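
For reference, here is one way to compare a slave's current MAC with its permanent MAC (a sketch; interface names as above). On RHEL/CentOS 6 the ifup scripts compare HWADDR against the runtime MAC, and the bonding driver normally gives every enslaved interface the bond's MAC, so a mismatch like this can appear even when the HWADDR lines themselves are correct:

# runtime MAC, which is what ifup compares against HWADDR in ifcfg-eth1
cat /sys/class/net/eth1/address
# permanent (burned-in) MAC, also reported as "Permanent HW addr" in /proc/net/bonding/bond0
ethtool -P eth1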

Please suggest how to find the root cause of this.

---------- Post updated at 08:42 AM ---------- Previous update was at 02:10 AM ----------

Any solution? Kindly suggest.

I am not an expert in link aggregation; maybe nobody here is.
It seems eth0 is aggregated with eth2, and eth1 with eth3.
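A quick way to confirm that split from the bonding status (same file as quoted above):

# list each slave next to the aggregator it joined
grep -E "Slave Interface|Aggregator ID" /proc/net/bonding/bond0

In the output you posted, eth0 and eth2 report Aggregator ID 9 while eth1 and eth3 report ID 10, and the active aggregator (ID 9) has only 2 ports, which lines up with seeing roughly 50 Gb/s instead of 100 Gb/s.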
The only unusual setting is BOOTPROTO=static for eth0; the other slaves use BOOTPROTO=none.
You have set xmit_hash_policy=1. You could write it as xmit_hash_policy=layer3+4 for readability; it is the same policy (the bonding status above shows "layer3+4 (1)"). With layer3+4 hashing, two hosts can use more interfaces when more services (ports) are involved, but the hashing in the other direction is controlled by the LAN switch.
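For reference, the options line with the policy spelled out by name would look like this (a sketch; everything else unchanged from your ifcfg-bond0):

BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer3+4"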
And maybe everything works as designed, according to this article.
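
One way to check whether it is working as designed is to measure with several parallel streams rather than a single one, since layer3+4 hashes per flow (a sketch; the peer address is a placeholder and assumes iperf3 is installed on both ends):

# 8 parallel TCP streams for 30 seconds, so the hash has several flows to spread across the slaves
iperf3 -c 192.0.2.10 -P 8 -t 30

A single stream will never exceed one 25 Gb/s link, no matter how many slaves are in the bond.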