Recursive FTP -- here at last.

Hi Perderabo,
Excellent script.

I found one issue though:
When a directory name contains a space, the recursion fails and the script goes into an endless loop.

eg: "Directory with spaces"

unix directories should NEVER have spaces.

but try passing in a dir w/ spaces surrounded w/ single quotes.

I just replaced the old script with a new one. It supports spaces in directory names as well as spaces in file names. I don't expect that leading or trailing spaces will work. And I only tested cases with a single space between words.
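For anyone wondering why unquoted spaces break a script like this: an unquoted variable expansion splits the name on whitespace, so every command downstream sees several arguments instead of one. A minimal illustration (not HardFeed's actual code):

```shell
#!/bin/sh
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"
mkdir "Directory with spaces"
touch "Directory with spaces/a file"

dir="Directory with spaces"

# Unquoted, $dir splits into three words and the command fails:
#   ls -l $dir    ->  "Directory", "with", "spaces": not found
# Quoted, the name survives as a single argument:
ls "$dir"
```

The same rule applies inside loops: "while read -r d; do something "$d"; done" keeps names with spaces intact.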

Hi, I am trying to use the script and I am getting the following error:

FATAL ERROR: timed out waiting for:
/tmp/HardFeed.ok.2540.1

I am using the -rdvs parameters

Thanks!

Fernando

HardFeed is timing out as it tries to read the remote directory. Is this a very large remote directory? Or a very busy ftp server? You might try adjusting the line:
OPT_MAXWAIT=15
to a larger value. But if the directory has tens of thousands of files, something else could go wrong. I have never tested it on a super large directory.
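The wait that produces that error can be modeled as a sentinel-file loop (a sketch only, not the script's exact code; OPT_MAXWAIT and the /tmp/HardFeed.ok.* name come from the posts above, and the background touch stands in for the ftp transfer finishing):

```shell
#!/bin/sh
# Wait up to OPT_MAXWAIT seconds for a sentinel file to appear.
OPT_MAXWAIT=15
okfile="/tmp/HardFeed.ok.$$"

# Stand-in for the ftp side finishing after a couple of seconds:
( sleep 2; touch "$okfile" ) &

waited=0
while [ ! -f "$okfile" ]; do
    if [ "$waited" -ge "$OPT_MAXWAIT" ]; then
        echo "FATAL ERROR: timed out waiting for: $okfile" >&2
        exit 1
    fi
    sleep 1
    waited=$((waited + 1))
done
echo "got $okfile"
rm -f "$okfile"
```

Raising OPT_MAXWAIT simply gives a slow server or a huge directory listing more iterations before the fatal error fires.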

Could you please explain how these two scripts work, as I am getting the following error when trying to execute HardFeed Part 2:

HardFeed.sh[369]: OPT_CMDS>i: The specified number is not valid for this command.

I am passing the FTP server, the userid for the server, and the directory path on the remote server where the file is located. I appreciate your help. Thanks

They aren't two scripts. It is one long script. It was too long to post in a single post. Put the pieces together and run the script.

Thank you for the immediate response. It worked and copied the files, but it does not retain the date and time of the files on the remote server. Am I missing something? Thanks,

Um.. you're right, it doesn't. *shrug*

I didn't see it stated anywhere, but I get the impression from the script that the FTP connection stays open until you send a "print -p bye" command, and that the only commands that actually go to the FTP process are "print -p" commands. Is this correct?

Thanks, Kevin
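That is how ksh coprocesses generally behave: "cmd |&" starts the coprocess, "print -p" writes to its stdin, "read -p" reads its stdout, and the process runs until something like "print -p bye" makes it exit. A minimal model of the mechanism, written with bash's coproc spelling and with cat standing in for "ftp -n" (the real thing needs a live server):

```shell
#!/bin/bash
# Model of the coprocess plumbing HardFeed relies on. In ksh:
#   ftp -n |&          starts the coprocess
#   print -p "..."     writes to its stdin
#   read -p var        reads its stdout
# bash expresses the same idea with coproc and explicit descriptors.
coproc FTPSIM { cat; }

echo "get somefile" >&"${FTPSIM[1]}"   # ksh: print -p "get somefile"
IFS= read -r reply <&"${FTPSIM[0]}"    # ksh: read -p reply
echo "coprocess saw: $reply"

eval "exec ${FTPSIM[1]}>&-"            # closing its stdin ends the coprocess,
wait "$FTPSIM_PID"                     # much as "print -p bye" ends ftp
```

Because cat just echoes its input back, the read returns the same line that was sent; with a real ftp coprocess you would instead read the server's response lines.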

Hi, this script looks pretty cool, but I am after something to recursively PUT files, rather than to recursively GET them. I have not had a close look at the script yet, but expect it to be pretty complex to understand.

I have just purchased a NAS device for backup purposes. It has an FTP server, and I would like to send all the files from my small Linux server up to it for backup. I cannot use tar to make one large file and send only that, because there is a 4 GB file limit on the NAS device.

I realise that some time has passed since the script was posted, but I hope someone can help.

Thanks in advance, Robert

ncftp can also do recursive get and put

Recursive put is trivial anyway. Read our ftp scripting faq article.
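To sketch why recursive put is considered trivial (an illustration under two assumptions: a standard find, and a plain ftp client that accepts a command stream on stdin; ncftp, mentioned above, wraps the whole job in one "ncftpput -R" call): walk the tree once emitting mkdirs, once emitting puts, then feed the stream to ftp.

```shell
#!/bin/sh
# Emit an ftp command stream that recreates a local tree remotely.
# In real use you would pipe this output into:  ftp -n host
# (after a "user name pass" line, per the usual ftp-scripting approach).
cd "$(mktemp -d)"
mkdir -p demo/sub
touch demo/file1 demo/sub/file2

cmds=$(
    # One mkdir per directory (a remote mkdir of an existing
    # directory fails harmlessly)...
    find demo -type d | while IFS= read -r d; do
        printf 'mkdir "%s"\n' "$d"
    done
    # ...then one put per file, mirroring the local path.
    find demo -type f | while IFS= read -r f; do
        printf 'put "%s" "%s"\n' "$f" "$f"
    done
)
echo "$cmds"
```

Unlike the recursive-get problem HardFeed solves, the local side can be listed reliably with find, which is why the put direction needs so little code.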

I give you credit for what you have done. Good job.
On the other hand, there are people who might not want to "re-invent the wheel", so using tools that already have such functions built in may be their choice for solving the problem... anyway, to each his own.

when trying to use the script, I get this error:

hardfeed.ksh[74]: things: subscript out of range
WARNING: badly formatted line in directory listing for:
directory_name
attempting outside-to-inside scan

hardfeed.ksh[74]: things: subscript out of range
WARNING: badly formatted line in directory listing for:
file_name
attempting outside-to-inside scan

when I run it this way:

hardfeed -drv server username directory

Any idea what I am doing wrong?

Thanks in advance!!

Rippey

rsync is also available for Windows as well as Novell, and it can use ssh instead of rsh.
But sometimes, due to firewall rules, you can only use FTP, as I found with the last task I was given at work, so I wrote my own autoftp script. I will check yours out and might use that instead :)

Excellent work! I'm now migrating a big site, 600 MB or so. I see that it has trouble with directories that contain subdirectories; it gives an error :( but the rest is great. You might want to check on that. Thanks a lot :)

I have tested it extensively including with subdirectories over ten levels deep. The script works fine for me. I have used it many times to move large directory structures around.

hello perderabo,

I'm trying to use your script with -x "ntrans \%3A \:" to convert the hex form of ":" back to normal. Your script seems to do a check on filenames and aborts. Since I'm very inexperienced with scripting: is there a way to use hardfeed with the ntrans option?
I have been looking for a tool like hardfeed for quite a while and would really like to use it.
Could you please help me?

thank you very much
ingo

First of all, ntrans doesn't work like that. "ntrans \%3A \:" will convert % to : and delete any occurrence of 3 and A. That might collapse into what you want if the only 3 and A characters always happen to follow a %, but it seems like a dangerous way to do things.

And no, hardfeed is not going to work with ntrans. hardfeed needs to send a string like "get $file" to the ftp coprocess. Then it waits until the file arrives by testing whether $file exists. Once the file arrives, it may optionally do a "chmod $mode $file" to transfer the mode. Another option is to retrieve the file only if the remote file is newer than the local file. All of this stuff is based on file names.
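A stripped-down model of that get-and-wait pattern (a sketch only; the real script drives a ksh ftp coprocess with "print -p", which is faked here with a plain touch):

```shell
#!/bin/sh
cd "$(mktemp -d)"

# Stand-in for sending the request to the ftp coprocess.
# In the real script this would be:  print -p "get $file"
request_get() {
    touch "$1"      # fake the transfer completing instantly
}

file=demo.txt
mode=644

request_get "$file"

# The name-based wait described above: poll until the local copy exists.
while [ ! -f "$file" ]; do
    sleep 1
done

# Optionally carry the remote mode over once the file has arrived.
chmod "$mode" "$file"
echo "retrieved $file"
```

This is why ntrans breaks the scheme: if the client silently renames the file on the way down, the existence test for "$file" never succeeds and the wait times out.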

You could use hardfeed to retrieve the files, then run another script to rename them. There are many examples of renaming scripts on this site, and you can find them with our search function. But remember that if you rename the files, you won't be able to run hardfeed again to retrieve just the changed files. hardfeed will think that any renamed files are missing.