Solaris 9 ndd -set issues

Hello forum,
I have a Solaris 9 Sun Fire v240 server and a Sun Fire v440.
Recently we made changes and installed a new switch, a Cisco Catalyst 3750, to which both of them are connected.

Now the Sun Fire v240 is having problems with the network. It is supposed to run at 1000 Mbps.

ndd -get /dev/bge0 link_speed
100

I tried changing the settings with the following commands:

ndd -set /dev/bge0 adv_1000fdx_cap 1
ndd -set /dev/bge0 adv_1000hdx_cap 0
ndd -set /dev/bge0 adv_100fdx_cap 0
ndd -set /dev/bge0 adv_100hdx_cap 0
ndd -set /dev/bge0 adv_10fdx_cap 0
ndd -set /dev/bge0 adv_10hdx_cap 0
ndd -set /dev/bge0 adv_autoneg_cap 0

After that, the link goes down.

ndd -get /dev/bge0 link_status
0

The switch ports are configured for gigabit speeds.
Now I am confused as to why this is happening. The v440 is running at 1000 Mbps and it is connected to the same switch as the v240.

I was wondering if any of you have faced a similar problem before.
Could it be a problem with the Ethernet cable? But we are using the same cable that was used with the old switch, and it ran at 1000 Mbps before.

Thank you very much for your inputs!

see if editing /kernel/drv/ce0.conf helps ... reboot the server as required ...

also, check that the switch port the v240 is attached to is actually set to 1000 Mbps and full duplex ... if yes and the issue is still the same, see if you can move the cable to another port on the switch ... some Catalyst switches had a bug that killed a port when it had issues with negotiation, though i am unable to remember the exact cause ...
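if your network folks want something concrete to look at, here is a rough idea of what the relevant bit of a 3750 port config might look like ... the interface number is made up for illustration, so check the real config on your switch ...

```
! Hypothetical Catalyst 3750 port (interface number is an example).
! Either let both ends autonegotiate:
interface GigabitEthernet1/0/10
 speed auto
 duplex auto
! ... or hard-set BOTH the switch port and the server to 1000/full.
! A mismatch -- one side forced, the other autonegotiating -- is a
! classic cause of a downed link or a fallback to 100 Mbps.
```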

Rebooting the server right now is not an option.
The Solaris machine does not have a ce0.conf file.
I asked someone to look at the Cisco switch and he says everything seems fine.
I do not have access to the site yet; I guess I will have to ask someone to look at the cables.
Any second guesses?

What's the patch level of your GigaSwift driver?
On Solaris 9, this is patch 112817-24 or higher.

showrev -p | grep 112817-
bash-2.05$ showrev -p | grep 112817-
Patch: 112817-29 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWced, SUNWcedx
Patch: 112817-32 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWced, SUNWcedx

I checked my other servers, which are running fine; they have the same output.
Also, here are the results of ndd -get.
I don't know if it helps, but the interface is set to autonegotiate with the 1000 Mbps speeds enabled.

bash-2.05$ sudo ndd -get /dev/bge0 adv_1000fdx_cap
1
bash-2.05$ sudo ndd -get /dev/bge0 adv_1000hdx_cap
1
bash-2.05$ sudo ndd -get /dev/bge0 adv_autoneg_cap
1
bash-2.05$ sudo ndd -get /dev/bge0 autoneg_cap
1
bash-2.05$ sudo ndd -get /dev/bge0 1000fdx_cap
1

Leave the defaults, i.e. all capabilities enabled.
Check (compare with another system) whether there are extra ndd commands:

grep -w ndd /etc/rc?.d/S*
crontab -l | grep -w ndd

If the defaults don't work, change the network cable!
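To put the interface back to defaults without a reboot, something like this should do it (a sketch using the bge parameter names from the earlier posts; verify them against your bge driver revision):

```shell
# Re-enable every advertised capability and autonegotiation on bge0
# (run as root; bge parameter names as used elsewhere in this thread).
ndd -set /dev/bge0 adv_1000fdx_cap 1
ndd -set /dev/bge0 adv_1000hdx_cap 1
ndd -set /dev/bge0 adv_100fdx_cap 1
ndd -set /dev/bge0 adv_100hdx_cap 1
ndd -set /dev/bge0 adv_10fdx_cap 1
ndd -set /dev/bge0 adv_10hdx_cap 1
# Set autonegotiation last so a single renegotiation picks everything up.
ndd -set /dev/bge0 adv_autoneg_cap 1
# Then verify once the link comes back.
ndd -get /dev/bge0 link_status
ndd -get /dev/bge0 link_speed
```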

This was the result of grep -w ndd /etc/rc?.d/S*:

/etc/rc2.d/S69inet:[ -z "$encr" ] || /usr/sbin/ndd -set /dev/tcp tcp_1948_phrase $encr
/etc/rc2.d/S69inet:     /usr/sbin/ndd -set /dev/tcp tcp_strong_iss $TCP_STRONG_ISS
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip_forwarding 1
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip_forwarding 0
/etc/rc2.d/S69inet:     /usr/sbin/ndd -set /dev/ip ip_forwarding 0
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip6_forwarding 1
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip6_send_redirects 1
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip6_ignore_redirect 1
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip6_forwarding 0
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip6_send_redirects 0
/etc/rc2.d/S69inet:             /usr/sbin/ndd -set /dev/ip ip6_ignore_redirect 0
/etc/rc2.d/S69inet:     /usr/sbin/ndd -set /dev/ip ip6_forwarding 0
/etc/rc2.d/S69inet:     /usr/sbin/ndd -set /dev/ip ip6_send_redirects 0
/etc/rc2.d/S69inet:     /usr/sbin/ndd -set /dev/ip ip6_ignore_redirect 0
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_conn_req_max_q0 8192
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_ip_abort_cinterval 60000
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_respond_to_timestamp 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_respond_to_timestamp_broadcast 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_respond_to_address_mask_broadcast 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_ignore_redirect 1
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_send_redirects 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_forward_src_routed 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_forward_directed_broadcasts 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_forwarding 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_strict_dst_multihoming 1
/etc/rc2.d/S69netconfig:ndd -set /dev/arp arp_cleanup_interval 60000
/etc/rc2.d/S69netconfig:#ndd -set /dev/ip ip_ire_flush_interval 60000
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_respond_to_echo_broadcast 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_respond_to_echo_multicast 0
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_rev_src_routes 0
/etc/rc2.d/S69netconfig:ndd -set /dev/ip ip_enable_group_ifs 0
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_xmit_hiwat 65535
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_recv_hiwat 65535
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_cwnd_max 65535
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_strong_iss 2
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_conn_req_max_q 1024
/etc/rc2.d/S69netconfig:ndd -set /dev/tcp tcp_conn_req_max_q0 8912
/etc/rc2.d/S69netconfig:ndd -set /dev/udp udp_max_buf 1048576
/etc/rc2.d/S69netconfig:ndd -set /dev/udp udp_recv_hiwat 65535
/etc/rc2.d/S95IIim:     is_priv_port=`/usr/sbin/ndd /dev/tcp tcp_extra_priv_ports | /usr/bin/grep -w 9010`
/etc/rc2.d/S95IIim:        /usr/sbin/ndd -set /dev/tcp tcp_extra_priv_ports_add 9010

It was similar on the other servers.

crontab -l | grep -w ndd

had no results.
Looks like we should swap the cable and try a different port on the switch?

my earlier reply said ce0, but your initial sample shows you have /dev/bge0 --- edit /kernel/drv/bge.conf and set everything to 0 except for *1000fdx_cap ... if you cannot reboot your server, turn the other capabilities off with ndd -set as listed below ... if that still does not work, have your network folks hard-set the actual switch port the affected server's cable connects to at 1000/full ... changing the cable will not do anything unless somebody actually dropped something on it when they changed the switches ...

ndd -set /dev/bge0 adv_1000fdx_cap 1
ndd -set /dev/bge0 adv_1000hdx_cap 0
ndd -set /dev/bge0 adv_autoneg_cap 0
ndd -set /dev/bge0 autoneg_cap 0
ndd -set /dev/bge0 1000fdx_cap 1
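to make the forced settings survive a reboot, the same capabilities can go into /kernel/drv/bge.conf ... a sketch only --- the property names mirror the ndd parameters, but check your bge driver revision before relying on them ...

```
# /kernel/drv/bge.conf -- force 1000 Mbps full duplex (sketch;
# property names assumed to mirror the bge ndd parameters)
adv_autoneg_cap=0;
adv_1000fdx_cap=1;
adv_1000hdx_cap=0;
adv_100fdx_cap=0;
adv_100hdx_cap=0;
adv_10fdx_cap=0;
adv_10hdx_cap=0;
```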

Yes, that is what I meant: bge0, not ce0.
My question still stands as to why the interface fails to autonegotiate to the expected speed.
The second interface, bge1, is connected to the same switch and is running at 1000 Mbps full duplex.
But on bge0, when I force the speed to 1000 Mbps full duplex, the link goes down. Now I am led to believe that it is because of the switch.
I read somewhere that there is a problem between some Cisco switches and bge interfaces that causes the port to act this way.
Now I am thinking of moving the cable to a different port on the same switch.
I don't know anything about switches, nor do I know what to look for in one.
I guess changing to a different switch port is my best bet.

Identify first whether it is a switch port issue or a server issue. If you have one server (A) running at the correct 1 Gbps and the other server (B) now running at 100 Mbps, swap the cables at the switch: plug server A into server B's port, and server B into the port server A was using. If it is a switch port misconfiguration, the problem will move to the other server. If the same server still has the same issue, then you have at least eliminated the switch port.

The issue has been resolved now. Apparently it was due to the configuration of the switch. I am not familiar with switches, so I had asked someone who was to have a look. He did, and changed the configuration on the switch. Speed is back up to 1000 Mbps full duplex now. Thank you guys for the continuous input.
