FTP connection dies out

Hi,

I need to ftp around 80 files after connecting to an FTP server. But after 2 minutes the connection times out and dies. The server has a 2-minute timeout if the connection is idle. My question: isn't transferring files considered activity? Is the connection considered idle while files are being transferred?

Also, in the code, how can I keep the connection active for more than 2 minutes? I want to keep the connection active until all my files are transferred. How can I do this? Please advise.

---------- Post updated at 10:27 AM ---------- Previous update was at 12:14 AM ----------

Can somebody please respond to this.

The problem is on the remote server. You know those numbers that appear when ftp does things? We need the number and message. I am guessing you are getting something like 426.

See "List of FTP server return codes" on Wikipedia.

If 426 is the case, chat with the admin of the remote box. Or simply make one connection for each file, if the sysadmin cannot change it.
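If the admin cannot raise the timeout, a rough sketch of the one-connection-per-file approach could look like this; the host, login and directory below are placeholders, not details from this thread:

#!/bin/sh
# Open a fresh FTP session for every file so no single session
# lives long enough to hit the server's 2-minute limit.
HOST=ftp.example.com       # placeholder
USER=myuser                # placeholder
PASS=mypassword            # placeholder

for f in /data/outgoing/*; do
    ftp -n "$HOST" <<EOF
user $USER $PASS
binary
put $f $(basename "$f")
bye
EOF
done

Slower than one long session, but each session stays well under the timeout.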

Thanks for the reply.
But I still have a question. Isn't transferring a file to the remote server considered activity on the remote server? We transfer files using the PUT command, so isn't the PUT command activity on the remote server?

Are you able to post the Operating System of the remote server and the "ftpd" command line from the remote server?
If the remote "ftpd" server has parameter "-T 120" rather than "-t 120" this could cause the effect you see.

FTP is so old that it uses two connections, even in passive mode, so you may lose the control connection during the transfer of a long file or command. Consider moving the files in a zip, or using a newer protocol like ssh/scp or rcp, which use one connection. scp has optional gzip compression and encryption for security at the cost of CPU, especially at the sending end. Both scp and rcp have a subtree recursion mode, -r. Sometimes I send big files by multiple parallel scp -C commands, as the compression and encryption leave some net bandwidth unused. Sometimes I send data compressed using the faster 'compress' (LZW, with weaker compression than gzip) over ssh or rsh, like this pull:

$ rsh -n source_host "compress <remote_file" | uncompress >local_file

The network may not be slow enough to justify the slower gzip compression, even though its output is usually about half the size. Similarly, you can collect files using cpio as the archiver writing to stdout, pipe that to a compression tool, then through rsh/ssh to the matching decompression tool and a cpio on the other end to unarchive the stream. I used this to update a whole subtree in Hong Kong over a 56K WAN from NJ, back in the day. cpio also, by default, will not overwrite files that are newer than the incoming copies.
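A sketch of that kind of pipeline, assuming plain filenames (no spaces) and placeholder host and paths:

# Archive a subtree with cpio, compress it, push it through ssh,
# and unpack it on the far side. dest_host and both paths are placeholders.
cd /data/outgoing &&
find . -type f -print |
cpio -o |
compress |
ssh dest_host 'cd /data/incoming && uncompress | cpio -idm'

gzip -c / gunzip can be substituted for compress/uncompress if better compression is worth the extra CPU.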

As the O/P omitted to mention the Operating System(s), any possible refinement is guesswork.
There are established techniques for dealing with a connection break during an ftp transfer.
It would help if the O/P posted what was expected, what was typed, and what happened.
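One such technique, assuming both the client and the server support restarting transfers (the FTP REST command), is the client's "reget" to resume a partially received file. Hostname, login and file name below are placeholders:

# Resume a download that was cut off mid-transfer
ftp -n ftp.example.com <<EOF
user myuser mypassword
binary
reget bigfile.tar bigfile.tar
bye
EOF

Resuming uploads is less uniformly supported and depends on the client.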

So.. here I am on a Lenny box, not my usual fare, mind you, but it has its upsides over my usual CentOS for a very specific application I use a bit, namely it WORKS! (ImageMagick).

Anyway.. that's a digression..

I need to send 20k files spread across a couple of directories..

wput a single file at a time kills the remote server's connection, and command-line ftp dies trying to send 524MB at about 73. The only way I can manage at the moment is a single file at a time, without starting more than one.

Pretty straightforward, but if I fork it into the background and let it fly, all the transfers die. The linear approach, using find to compile the list and a do loop to send them all, doesn't seem to work either; I end up with "skipped" messages and no files on the remote server.

I need an intelligent uploader that's CLI-based so I can fire it from 5 different places to 5 different places.. something that validates every file transfer and retries all night with a varying and maybe long window :wink:

Peter

@Peter
Please start a new thread, ideally with a description of the target server and sample code from the relevant parts of the scripts. The volume of data and the speed of the connection between the two servers would also help.
In general, "ftp" is a last resort, used when copying files between incompatible servers or where the remote server only offers "ftp".