Tcpdump on many machines from single script

Hi all, new to the forum and rusty with my scripting. I am trying to put together a quick and dirty script that will kick off a tcpdump on multiple machines, and then a second script that will reach out (at a later time) to stop the processes and retrieve the data. It seems fairly easy conceptually and will most likely consist of repeating lines with different username@<ip_address> entries, so I am trying to get just one to work for now.
Digging around the interweb has simply confused me and I could really use some help.

The accounts on the machines all require passwords (sorry, using keys is not an option), so I was looking at using expect, like:

#!/usr/bin/expect -f
spawn ssh <username>@<ip_address> "/usr/sbin/tcpdump -i any -w filename.dump &"
expect "assword:"
send "<password>\r"

... then repeat for another server, leaving tcpdump running on each box when I'm done

Obviously, this does not work or I wouldn't be here. In this case the script runs, but when I look on the server, tcpdump is not running. I tried using "interact" after the password and removing the &, but that just left my script hanging instead of moving on to kick off the next tcpdump.

Any suggestions/corrections would be appreciated

The problem is not your script but the way UNIX works: when a process starts, it is assigned a controlling terminal - usually the one it was started from. Once this terminal goes away, the process is terminated too (it receives a hangup signal). Now, you log on to a system using ssh; that you do it from a script doesn't matter at all. Inside this session you start a program - tcpdump - and then kill the session. This way the terminal ceases to exist, and therefore the program is terminated too.

You probably thought that you prevented that by sending the process to the background, but that alone is not enough. What you need (in addition to sending it to the background) is the nohup command. "nohup" is short for "no hangup": it makes the process immune to the hangup signal (SIGHUP), so it will not be terminated once the terminal goes away (the session "hangs up" - a term from the time when sessions were serial lines, mostly dial-up).
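Applied to your expect script, a minimal sketch might look like this (the <...> placeholders are as in your post; note the nohup, the output redirections, and the final expect eof, which lets the remote command return cleanly instead of having expect tear the session down the moment the password is sent):

#!/usr/bin/expect -f
# run tcpdump under nohup so it survives the end of the ssh session;
# redirect its output so the backgrounded job doesn't hold the session open
spawn ssh <username>@<ip_address> "nohup /usr/sbin/tcpdump -i any -w filename.dump >/dev/null 2>&1 &"
expect "assword:"
send "<password>\r"
# wait for ssh itself to exit instead of ending the script (and the session) here
expect eof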

With "interact" your script doesn't actually hang: it is still executing the ssh command, but since that never terminates, the script never gets to the next command. Therefore it only seems to hang.

Still, I'd like to suggest that you forgo the whole business with expect. Instead of writing passwords in clear text into your script (it doesn't matter if they come from a file or from the script's text directly - clear text is clear text, and whatever encryption you use, the script will have to decrypt it automatically), you should exchange ssh keys for the system/user combinations you want to process and use those. Then you can just build a text file with the systems and users, and your script becomes a simple loop. Given an input file like this:

# cat /input/file
user1@systemA
user2@systemB
user3@systemC

the script itself is just:

#! /bin/ksh

while read LINE ; do
     ssh "$LINE" "command"
done < /input/file

exit 0
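For your second task - stopping the captures and collecting the dumps later - the same loop pattern works. A sketch, with the caveat that pkill on the targets and the filename.dump name are assumptions carried over from the original post, not tested specifics:

#! /bin/ksh

while read LINE ; do
     # stop the capture; pkill -f matches against the full command line
     ssh "$LINE" "pkill -f 'tcpdump -i any'" < /dev/null
     # fetch the dump; user_system.dump as the local name is just an example
     scp "$LINE:filename.dump" "$(echo "$LINE" | tr '@' '_').dump" < /dev/null
done < /input/file

exit 0

The < /dev/null redirections keep ssh and scp from swallowing the remaining lines of /input/file.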

One last thing: if you use nohup, you should explicitly redirect all possible output of the process, e.g.

nohup /some/command >normal.log 2>/dev/null &

because otherwise nohup appends any output to a file called nohup.out in the directory the command was started from (and in batch contexts like cron or at, unredirected output gets mailed (!) to the user). You don't want either.

I hope this helps.

bakunin


One correction: by default ssh reads from stdin, so inside the while-read loop it competes with the read command for the lines of /input/file.
Quick fix: ssh -n ...
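That is, the loop from above becomes (same sketch, just with the flag added):

while read LINE ; do
     # -n redirects ssh's stdin from /dev/null, so it no longer
     # swallows the remaining lines of /input/file
     ssh -n "$LINE" "command"
done < /input/file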