FTP run from shell script gives slow transfer rates

Hey everybody, this is my first post so be gentle. I have two Sun 5220s running Solaris 10 that are directly connected with a cross-over cable at gigabit speed. One of these boxes is my production Oracle server, which generates a 50GB dump file every evening at 10:50. The other Solaris box is a development server that is identical to production. While they are attached to the network using IPMP across two of the NICs, I have one extra NIC dedicated to this private network between the boxes. I am trying to script the FTP transfer between the servers.

Here is the issue. When I run FTP interactively from Server A (prod) and do a put, I can transfer the file in 35 minutes or so. However, when I run the same FTP commands from a shell script, it takes 5-7 hours to complete. I have thought maybe this is a priority issue and have looked at nice, but I am kind of scared to mess with process priority on my production database server.

Here is my script, with the names changed to protect the innocent. I run this script from Server A (prod). It is pretty straightforward, so I really don't think there is a problem with the code. Something is causing it to run 10 times slower when run from a script. Thanks in advance to anyone who can help me here.

#!/bin/sh
HOST="192.168.1.2"
USER="user"
PASSWD="passwd"

cd /u14/exp

ftp -in $HOST <<END_FTP
quote USER $USER
quote PASS $PASSWD
cd /u11/exp
put file.dmp
quit
END_FTP
EOF

I don't see any problem except the last line... that stray EOF after END_FTP shouldn't be there. If that's not the problem, hopefully someone else can shed some light on this.

Regards

Maybe I should have posted this in the Solaris forum. Like I said I don't think it is the script itself, but rather how Solaris is prioritizing the ftp process when called from a shell script. I have been stuck for two days on this and have worn out the Google in search of an answer. I bet it is something really simple, but it is kicking my tail right now.

Everything looks kosher to me.
You can try adding 'set -x' in your script to see what's actually taking place - like whether your script's "ftp" is the same binary as your command line's - strange things do happen.
Also, you can add 'hash' to your ftp session to see the progress of the transfer. Do the same on the command line, then maybe compare (first just visually) the rates. The rates might be the same, but the script may be hanging at the end somehow - a wild guess...
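To show what 'set -x' tracing actually looks like, here's a tiny local demonstration (the echo is just a stand-in command; nothing here touches ftp):

```shell
#!/bin/sh
# set -x turns on shell tracing: every command is written to stderr,
# prefixed with '+', before it runs. In the ftp script it would show
# exactly which ftp binary gets invoked and with what arguments.
set -x
echo "starting transfer"
# stderr shows:  + echo starting transfer
```

In the original script, 'set -x' would go right after the `#!/bin/sh` line, and 'hash' would go on its own line inside the here-document (after the PASS line) so ftp prints a `#` for each block transferred.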

I assume both things (CLI and script) use the same FTP control port (21)...

Others may chime in as well.

When I run the script or run ftp manually, I have a PuTTY session open to the destination server and directory so that I can see the progress. From the script it takes approximately 6 minutes to transfer 1GB of data. When run interactively 1GB transfers in about 40 seconds.

Everything should be the same. FTP works from the script, it just behaves like it has been throttled. One more thing I thought of, our production server runs a script every 3 minutes during the first 10 days of the month that polls for ftp connections where data files are being submitted by our customers. I wonder if the previous admin (who did not document anything :mad:) added something to the config which is throttling the bandwidth on non-interactive ftp connections?

What exactly does 'set -x' do? I am pretty green, but I am catching on fast. I was a Windoze AD and Exchange admin for 10 years before I got this job and have been in the UNIX world for about six months.

Not sure if it's possible, but then again I have not done much with the ftpd myself.
One thing from 'man ftpd' (on Solaris) kinda caught my eye:

     -X    Write the output from the -i and  -o  options  to  the
           syslogd(1M)  file  instead  of xferlog(4). This allows
           the collection of output from  several  hosts  on  one
           central  loghost.  You  can  override  the  -X  option
           through use of the ftpaccess(4) file.

Maybe there's extensive logging going on with the '-i' option that you have in your script and don't have with the CLI.
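On the throttling hunch: Solaris 10's stock in.ftpd is WU-FTPD based, and its ftpaccess(4) file supports a 'throughput' directive that caps transfer rates per directory. If the previous admin left one behind, something like this would surface it (the path is the Solaris 10 default; adjust if your setup differs):

```shell
#!/bin/sh
# Look for rate-limiting directives in the stock Solaris 10 ftpd config.
CONF=/etc/ftpd/ftpaccess
if [ -f "$CONF" ]; then
    # A line such as:
    #   throughput /u11/exp * 102400 0.5 *
    # would cap matching transfers at roughly 100KB/s.
    grep -i 'throughput' "$CONF" || echo "no throughput directives in $CONF"
else
    echo "$CONF not found on this host"
fi
```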

'set -x' will output ALL the commands from the script as they're about to be executed - this is the shell's debugging mechanism.

As an alternative, you could use scp (secure copy) rather than ftp to transfer the files. SCP does not require USER or PASS arguments, which may be causing part of your problem.

I use this to copy between production and DR. It is secure, and it is encrypted.

There is some background configuration for SSH that you will need to do beforehand, but I believe you can find that fairly easily without 'wearing out' Google :stuck_out_tongue:
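A minimal sketch of what that could look like, assuming the SSH key setup has already been done between the boxes (the host, user, and paths below are just the placeholders from the ftp script, not real values):

```shell
#!/bin/sh
# One-time setup (run once as the copying user on Server A):
#   ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
# then append id_rsa.pub to ~/.ssh/authorized_keys for 'user' on Server B.

HOST="192.168.1.2"
USER="user"

# -p preserves modes/timestamps; -q drops the progress meter for cron use.
scp -pq /u14/exp/file.dmp "${USER}@${HOST}:/u11/exp/"
```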

Thanks for your help. I am about to move the private network from the production server to another 5220 I have and see if I experience the same issue with a different server in the loop. Basically I just want to know: is ftp slower from a shell script than interactively, and is this normal in a Solaris environment? I don't think it is, but I don't know for a fact.

You know, I was going to move to SCP anyway at some point, to keep from having plain-text scripts containing vital passwords. The previous admin did not care for security. I might as well quit beating my head against the wall and see what happens with SCP.