I can't resolve the rsh problem

I don't know if this gives useful information, but I wanted to share it:

Server C to Server B: rsh serverBIP df -h

On Server B, /var/log/secure shows:
pam_unix(rsh:session): session opened for user root by (uid=0)
pam_unix(rsh:session): session closed for user root

Server A to Server B: rsh serverBIP df -h

On Server B, nothing appears in /var/log/secure.

Hello,

In that case, the network ports are definitely the main thing to focus on. As mentioned, this error only ever appears when the connection from the server to the client on ports 1016 through 1022 fails. Given that the proper binary is in place, this can only mean that this connection itself is failing. I'd check that there is nothing else running on these ports, and that you are absolutely sure you can connect to them between these two servers.
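
As a rough way to test the second part of that, assuming nc (netcat) is available on both machines, you could temporarily listen on one of those ports on the client and try to reach it from the server. Here serverAIP is just a placeholder for Server A's address, in the same style as the serverBIP placeholder above, and the exact nc flags vary between netcat versions:

# On Server A (the rsh client), listen on one of the call-back ports;
# this needs root, since ports below 1024 are privileged:
nc -l 1020            # some netcat builds want: nc -l -p 1020

# On Server B (the rsh server), check that the port is reachable:
nc -vz serverAIP 1020

If that test connection is refused or times out, then something between (or on) the two hosts is blocking the call-back path.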

Another random thought: do you perhaps have SELinux still enabled on Server A? If so, it might be getting in the way of setting up the callback connections on the privileged ports. That's just an idea off the top of my head; I have no suitable system I could test this on, but it's at least another possible avenue of investigation.
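
For what it's worth, a quick way to check the current SELinux state on each box, assuming the usual SELinux utilities are installed:

getenforce            # prints Enforcing, Permissive or Disabled
sestatus              # more detailed status report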

Hello,
SELinux is disabled on all the servers (I checked again).

And is anything using TCP ports 1016 through 1022 on these servers?

I checked with this command

netstat -tulpn | grep LISTEN

It does not show ports 1016-1022 listening, only two xinetd services.

I think the problem is Server B.
We have activated iptables on this server and it is now closed off; could it be because of that?

I'm thinking of restarting Server B. I don't know if it will help, but...

Ultimately, you'll need to ensure that ports 1016 through 1022 can communicate between both the client and the server, as different ports in this sub-range will be used at different stages of the connection on both sides when using rsh.

For testing purposes, if you are running iptables on Server B, the easiest thing is just to put in a rule allowing all IP traffic from Server A to Server B, and vice versa (being sure to add this rule to the top of the relevant chains, before any DROP rules). Alternatively just temporarily stop iptables or flush out all the loaded rules. Either way, once this is done, re-try your rsh connection.
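
A rough sketch of that test, assuming the classic iptables service on these EL-style boxes (the addresses in angle brackets are placeholders, and none of this is persistent unless you explicitly save it):

# On Server B: temporarily accept everything from Server A, ahead of any DROP rules
iptables -I INPUT 1 -s <serverAIP> -j ACCEPT
# On Server A: the same for traffic coming from Server B
iptables -I INPUT 1 -s <serverBIP> -j ACCEPT

# Or, more bluntly, stop the firewall or flush the rules for the duration of the test:
service iptables stop         # EL6-style
iptables -F                   # flush all rules (if the chain policy is DROP,
                              # also run: iptables -P INPUT ACCEPT)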

If it now works, then iptables is indeed to blame, and you'll have to tweak your iptables rules to allow only the communication you need to let through for rsh to work. If it still doesn't work even with global "allow all IP" firewall rules in place in both directions or with all iptables rules flushed out, then iptables is not to blame, and something else is going on.
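
If iptables does turn out to be the culprit, the targeted rules would look roughly like this, assuming the standard rsh service port 514/tcp plus the 1016-1022 call-back range from the error (again, the addresses are placeholders):

# On Server B (rsh server): allow incoming rsh connections from Server A
iptables -I INPUT -p tcp -s <serverAIP> --dport 514 -j ACCEPT

# On Server A (rsh client): allow Server B's stderr call-back ports
iptables -I INPUT -p tcp -s <serverBIP> --dport 1016:1022 -j ACCEPT

# Once you're happy with the rules, make them persistent (EL6-style):
service iptables save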

As for a reboot: normally you shouldn't really have to reboot a server to resolve an issue such as this, but you could always try it if you like. To me this still seems like a network issue most likely, but all you can do is keep trying until you get to the bottom of it one way or the other.

Thanks for your support,

I uninstalled the rsh RPM and reinstalled it, but that didn't fix it.

I can't think of anything anymore.

Consider using ssh instead.

This is certainly the best answer. Overall the time spent investigating this RSH issue would probably have been better-spent switching over to SSH. Presumably you are using RSH in scripts to remotely execute commands. SSH was originally designed so that it could be a pretty much drop-in replacement for RSH in these usage cases, and the ssh client accepts the same flags and has the same syntax as rsh. You can set up key-based authentication so that it works without needing to prompt for a password, which again will help with scripting. And we are dealing with fairly modern systems here (ultimately the Oracle equivalents of RHEL 6.x and 8.x), so they are certainly capable of switching to SSH. And if the servers are on the same sub-net, as they have been previously established to be, then there's no firewall changes or network changes needed for you to replace RSH with SSH either.
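
Just to illustrate how small the switch is, here is a minimal sketch, assuming the standard openssh-clients tools and reusing the serverBIP placeholder from above:

# One-time key setup on the client side, so scripts never prompt for a password:
ssh-keygen -t rsa                # accept the defaults; empty passphrase for scripting
ssh-copy-id root@serverBIP       # install the public key on Server B

# The existing rsh calls then translate almost one-for-one:
rsh serverBIP df -h              # old
ssh serverBIP df -h              # new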

Alternatively if you have access to Oracle support you could engage them, and see if they can get to the bottom of things. But unfortunately I'm not sure what else to suggest either regarding this particular problem. The only causes I can find for that error are network-related, and you appear to have ruled those out, so I don't really know where to go next. You could perhaps try attaching a strace to the rsh client and a forking strace to xinetd on the server-side and see what you see. But again, ultimately you're spending time here trying to get technology from the 1980s working, when there is a modern and script-and-syntax-compatible replacement that you can swap it out for.
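
If you do try the strace route, it could look something like this (the output file names are just examples):

# On Server A: trace the rsh client and everything it forks
strace -f -o /tmp/rsh-client.trace rsh serverBIP df -h

# On Server B: attach to the running xinetd and follow the children it spawns
strace -f -p $(pidof xinetd) -o /tmp/rsh-server.trace

Failed connect() or bind() calls involving ports 1016-1022 in either trace would point straight back at the network layer.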

If I do think of anything else I'll reply again, but if it were me, I'd certainly be replacing these with SSH, as per my original response to you. It's a modern, actively-maintained and far more secure solution, and is available to be installed via yum on your 6.x and 8.x boxes instantly.

Please read

There really is a call-back for the stderr channel! No firewall likes that, and containers might need an extra parameter...
Maybe your rsh has a -e option for "no stderr"?
