I have a situation where I want to sftp files to a third-party server, and I came up with the script below, which uses the "expect" utility to drive sftp.
-----------------------------------------
#!/usr/local/bin/expect --
#
set timeout 120
set password [lindex $argv 0]
set host [lindex $argv 1]
set username [lindex $argv 2]
set filename [lindex $argv 3]
spawn sftp $username@$host
expect -nocase "password:" {send "$password\r"}
expect "sftp>"
send "mput $filename\r"
expect "sftp>"
sleep 2
send "exit\r"
expect eof
-----------------------------------------
Here's what happens now:
Every time there is a file, my Java code calls this script and sftps it to the remote server. I have realized that the traffic of files going out to the remote server is very large, and it is very expensive to open a new connection, sftp one small file, close the connection, and then open another one again; this is causing lots of performance issues.
Also, I cannot keep the connection active forever...
What I would like to happen is to keep the connection active for, let's say, 1 hour or some defined time, keep using that same connection for sftping the files, and open a new connection whenever there is no active one available.
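One way to get almost exactly that behaviour (keep a connection alive for a defined time and reuse it across calls) is OpenSSH's ControlMaster/ControlPersist connection sharing, configured on the client side. A sketch of the config, where the host alias, hostname, and user are placeholders, not your real values:

```
# ~/.ssh/config (client side) -- hypothetical host alias
Host edi-remote
    HostName remote.example.com
    User edi
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h:%p
    ControlPersist 1h
```

With this, the first `sftp edi-remote` opens a master connection; every later sftp/scp/ssh to the same alias rides over the existing session, and the master stays up for 1 hour after the last client exits, then closes itself. Note this needs a reasonably recent OpenSSH, and connection sharing fits best with key-based auth rather than the expect/password approach.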
Has anyone come across this kind of situation, or can anybody give more input on this one?
My real requirement is to use sftp, since both parties have agreed on it (we are doing EDI transactions). Is there anything I can add to the script to make it continuous?
Even if I used scp I would still need to open a connection and close it, so I need a solution other than using scp instead of sftp.
Actually, I pasted something incorrect earlier: I am sftping the files to the end system, so it is MPUT, and I have control over the files.
What I am thinking is that whenever files are ready to be sftp'ed, they will call this script, check if a connection is active, and if yes do the sftp; otherwise open a new connection and send the files. I agree there would be problems because of multithreading, where the connection would need a semaphore or something similar; however, if I had something I could modify and build on, that would be great.
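For the "check if a connection is active" part: if you go the OpenSSH connection-sharing route, the ssh client can query the local master socket directly with `ssh -O check`. A minimal sketch, where the socket path and host are placeholders:

```shell
# master_alive SOCKET HOST -- true if an ssh ControlMaster is listening
# on SOCKET for HOST. ssh -O check only talks to the local control
# socket; it does not open a network connection.
master_alive() {
    ssh -o ControlPath="$1" -O check "$2" >/dev/null 2>&1
}

# Usage sketch: reuse the shared connection if the master is up;
# otherwise the next sftp call (run with ControlMaster=auto) opens a
# fresh master itself.
#   if master_alive ~/.ssh/ctl-edi edi@remote.example.com; then
#       echo "reuse existing connection"
#   fi
```

This also sidesteps the semaphore question: the ssh master serializes access to the shared connection for you, so concurrent callers just multiplex over it.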
Today I did profiling of this process; what I have seen is that at times I get 40 files a minute to sftp... so it is very bad that I open and close connections 40 times in a single minute.
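At that rate, another angle is to queue files briefly and send each batch over a single connection with `sftp -b`, instead of one connection per file. A sketch, where the queue directory is an assumption about your layout, and key-based auth is assumed (`-b` cannot answer a password prompt, so the expect wrapper doesn't combine with it):

```shell
# build_batch QUEUE_DIR BATCH_FILE -- write one sftp batch file that
# uploads every regular file currently sitting in QUEUE_DIR.
build_batch() {
    qdir=$1
    batch=$2
    : > "$batch"
    for f in "$qdir"/*; do
        [ -f "$f" ] || continue
        printf 'put %s\n' "$f" >> "$batch"
    done
    printf 'bye\n' >> "$batch"
}

# Usage sketch: one connection carries the whole minute's worth of files.
#   build_batch /var/spool/edi/outgoing /tmp/edi.batch
#   sftp -b /tmp/edi.batch edi@remote.example.com
```

Your Java code would then drop files into the queue directory and trigger this, say, once a minute, turning 40 connections into one.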
How are they dealing with the files at the remote end? For example, do they just look in a directory, and if so, how do they know a file is complete?
I can see what you are doing. With ftp/sftp, what I suggest is that you put the file up in one directory, then move it to another directory on the server once it's complete:
put file in-transit/file
move in-transit/file ready/file
And rather than use mput, use put explicitly so you know what you are sending; once you have sent a file, move it to a "sent" directory locally.
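The upload-then-move pattern above maps directly onto sftp batch commands. A sketch, where the helper name and the in-transit/ready directory names just follow the suggestion and are assumed to exist on the server:

```shell
# emit_transfer FILE -- print the sftp commands for one safe transfer:
# upload into in-transit/, then rename into ready/ only after the put
# has finished, so the receiver never sees a half-written file.
emit_transfer() {
    f=$(basename "$1")
    printf 'put %s in-transit/%s\n' "$1" "$f"
    printf 'rename in-transit/%s ready/%s\n' "$f" "$f"
}

# Usage sketch (key-based auth assumed):
#   emit_transfer invoices/20240101.edi | sftp -b - edi@remote.example.com
```

Because sftp aborts a `-b` batch on the first failing command, a dropped link mid-put leaves the partial file stranded in in-transit/ and the rename never runs, which also answers the half-transferred-file question.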
How are you going to handle the case where you start to send a file and the comms link drops halfway through?
How are you going to avoid overwriting files on the server?