SSH remote command doesn't work from script file

I have 10 application servers in a distributed architecture, each generating its own application logs. Each server has an application utility to continuously tail the log. For example, the following command tails and follows new logfiles as they are generated:

server1$ logutility logtype

When I run the same command using ssh from a terminal window on a remote server, in a loop over all servers, that also works fine:

remoteserver $ for host in server1 server2 ..... server10
do
ssh $host "logutility logtype" >> logfile &
done

The above works fine as long as my terminal is open, but stops when I close my terminal. So I put the above in an executable scriptfile to run with the nohup option:

#!/usr/bin/ksh
for host in server1 server2 ..... server10
do
ssh $host "logutility logtype" >> logfile &
done

and then I run the scriptfile as follows:

remoteserver$ nohup scriptfile &

However, those ssh commands terminate immediately.

This is a strange problem that I am unable to understand. Why does the ssh command continue to run when started from a terminal, but terminate immediately when run from a script? I would like help understanding this behavior, and also what needs to be done so that I can run that command remotely and save the collective output of all 10 servers to a file on remoteserver.

When run in the background, ssh requires the -n option.

ssh -n servername 'command goes here ' &
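For context, -n redirects ssh's stdin from /dev/null so a backgrounded ssh never tries to read from the terminal (which is what stops the job). A minimal local stand-in, using cat in place of ssh, shows the effect of the redirection:

```shell
# 'cat' normally blocks waiting for terminal input; with stdin redirected
# from /dev/null it sees EOF and exits at once -- the same thing that
# ssh -n arranges for the remote session's input.
cat < /dev/null &
wait $!          # returns immediately; the background job did not hang on stdin
echo done
```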

Thank you. I wasn't familiar with the -n option. I modified my scriptfile to use it, but even after that the script doesn't continue to run. I run:

$ nohup scriptfile &
[1]     7160
$ Sending output to nohup.out

and as soon as I press the Enter key I get the following:

[1] +  Done                    nohup scriptfile &

I also checked with ps -eaf | grep and didn't see any ssh command running to those servers.

It is hard to tell what your script is doing. It will get a SIGTTOU or the like if it tries to write to the terminal's stdout or stderr, which would cause what you see.
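One way to rule that out is to detach all three standard streams explicitly when backgrounding the script. A sketch of the pattern (the scriptfile created here is a trivial stand-in; script.out is a made-up filename):

```shell
# Create a trivial stand-in script just to demonstrate the pattern.
printf '#!/bin/sh\necho started\n' > scriptfile
chmod +x scriptfile

# Redirect stdin, stdout, and stderr away from the terminal so neither
# nohup nor the script can touch the tty after logout.
nohup ./scriptfile > script.out 2>&1 < /dev/null &
wait $!
cat script.out
```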

Let's start with ultra-basics:

Execute each code snippet from the command line:
someservername = a remote server

ssh -n someservername 'echo awake && /usr/bin/sleep 5' &

Next set of snippets:

ssh -n someservername 'echo awake && /usr/bin/sleep 5 ' > t.lis 

ssh -n someservername ' echo awake && /usr/bin/sleep 5' > t.lis &
wait

I hope you see what I'm referring to.

I would write your code like this to run everything at the same time (up to, say, 10 at once):

#!/bin/ksh
cnt=0
for sname in server1 server2 server3 server4 .... server99
do
  cnt=$((cnt + 1))
  ssh -n "$sname" 'my commands go here' >> logfile &
  [ $((cnt % 10)) -eq 0 ] && wait   # only run a max of 10 processes at once; you can change this
done
wait   # in case the number of servers is not a multiple of 10

Plus, you have to redirect stderr in the 'command goes here' part to some useful place/file. Also, you should wait for child processes, not simply log off.
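The background-plus-wait shape, with local stand-ins so it can run anywhere (sleep and echo are placeholders for the real ssh invocation; jobs.log and jobs.err are made-up filenames):

```shell
# Launch several background jobs, capturing stdout and stderr separately,
# then block until every child has finished before the script exits.
: > jobs.log
for n in 1 2 3
do
  { sleep 1; echo "job $n done"; } >> jobs.log 2>> jobs.err &
done
wait            # without this, the script could exit while children still run
wc -l < jobs.log
```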

I tested, and those snippets work just fine. I ran ssh with the -v -v -v option to capture debugging information.

Here is the debug output for the ssh that kept running in the background.

debug1: ssh_session2_setup: id 0
debug1: Sending command: my command here
debug1: channel request 0: exec
debug2: callback done
debug1: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel 0: rcvd adjust 131072
debug1: channel 0: read<=0 rfd 5 len 0
debug1: channel 0: read failed
debug1: channel 0: close_read
debug1: channel 0: input open -> drain
debug1: channel 0: ibuf empty
debug1: channel 0: send eof
debug1: channel 0: input drain -> closed

Here is the debug output where the ssh session terminates after connecting.

debug1: ssh_session2_setup: id 0
debug1: Sending command: my command here
debug1: channel request 0: exec
debug2: callback done
debug1: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel 0: rcvd adjust 2097152
debug1: channel 0: read<=0 rfd 5 len 0
debug1: channel 0: read failed
debug1: channel 0: close_read
debug1: channel 0: input open -> drain
debug1: channel 0: ibuf empty
debug1: channel 0: send eof
debug1: channel 0: input drain -> closed
debug1: channel 0: rcvd eof
debug1: channel 0: output open -> drain
debug1: channel 0: obuf empty
debug1: channel 0: close_write
debug1: channel 0: output drain -> closed
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: rcvd close
debug3: channel 0: will not send data after close
debug1: channel 0: almost dead
debug1: channel 0: gc: notify user
debug1: channel 0: gc: user detached
debug1: channel 0: send close
debug1: channel 0: is dead
debug1: channel 0: garbage collecting
debug1: channel_free: channel 0: client-session, nchannels 1
debug3: channel_free: status: The following connections are open:
  #0 client-session (t4 r0 i3/0 o3/0 fd -1/-1)

debug3: channel_close_fds: channel 0: r -1 w -1 e 7
debug1: fd 1 clearing O_NONBLOCK
debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.2 seconds
debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0
debug1: Exit status 0

Please post the exact code that you are trying to background.

Thanks.

Try doing it like this:

for x in server1 server2 .. server10
do
ssh $x "<command>" < /dev/null > "$x".log
done

I would test with a command that won't harm the system.

/home/insta $ cat hostname.master | while read line
> do
> ssh "$line" "echo hi" < /dev/null > $line.log
> done

I just ran this on a few systems and it worked.


The -x and -q options are not required but can speed things up (-q suppresses diagnostic messages; -x disables X11 forwarding).

ssh -qnx ...

Thanks to all who took the time to reply. My script is working now. There was a limit on the number of ssh sessions on the remote server, so the sessions from my script were getting closed.
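For reference, on OpenSSH servers such caps usually live in sshd_config (the settings below assume OpenSSH; check your server's man page for the exact names and defaults):

```
# /etc/ssh/sshd_config (requires an sshd reload to take effect)
MaxStartups 10:30:100   # throttle concurrent unauthenticated connections
MaxSessions 10          # max multiplexed sessions per network connection
```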