UNIX - SCP File Transfer

Hi,
How do I know whether the files were transferred successfully when I use scp to transfer files between two servers?

One more thing: I am trying to send all the files in a single shot by using * to save on connection time. How can I tell if the scp breaks in the middle?

scp $sourcepath/* user@\$destserver:\$destpath

Do you have anything so far other than that line?

You can test for scp's successful completion by looking to see if the exit status is 0 once it finishes. Here's an example:

#!/bin/bash
# Capture scp's output (including errors) in a log file, then
# check the exit status once it finishes.
echo "starting transfer"
scp $sourcepath/* user@\$destserver:\$destpath >> /tmp/log.$$ 2>&1
OUT=$?
if [ $OUT -eq 0 ]; then
    echo "transfer successful"
else
    echo "oh no, scp transfer failed somehow. check the log file in /tmp for details"
fi

Are your scp transfers typically failing? How large are the files you are transferring?

You could also have the script check for files before it transfers, parse the log to see if it needs to retry, email the log files elsewhere, etc.
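For instance, a minimal retry wrapper might look something like this (an untested sketch; the retry count and sleep interval are just placeholders, and it reuses the variables from the script above):

#!/bin/sh
# Sketch only: retry the transfer a few times before giving up.
# Assumes $sourcepath, $destserver and $destpath are already set.
tries=0
until scp "$sourcepath"/* "user@$destserver:$destpath" >>/tmp/log.$$ 2>&1
do
  tries=$((tries + 1))
  if [ "$tries" -ge 3 ]; then
    echo "giving up after $tries attempts, see /tmp/log.$$" >&2
    exit 1
  fi
  sleep 30   # give the network a moment before retrying
done
echo "transfer successful"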

Please don't perpetrate this silly "test $?" idiom; if already examines the exit status by design:

if scp $sourcepath/* user@\$destserver:\$destpath >>/tmp/log.$$ 2>&1
then
  echo oh yes >&2
else
  echo "oh no ($?)" >&2
  tail /tmp/log.$$ >&2
fi

What's with the backslashes in \$destserver:\$destpath?

Nothing bash-specific here, by the way, so might as well use good ol' /bin/sh (it's good for you).

Thanks a lot to era and tderscheid.

I used the backslashes to escape the dollar signs.

Can I delete each file when the copy is done?
If I use scp in a loop I know it can be done, but it takes a lot of time when I have many files. If I use the * to transfer multiple files it goes quickly, since it doesn't have to connect to the server for every file.

Well, are you generating all the files once, then moving all the files, then deleting all the files, with nothing touching the directory during the transfer? If so, then if your scp $sourcepath/* has finished with exit code 0, it's reporting success, so the files got to the target machine. As long as nothing has generated more source files, you could rm $sourcepath/* and be happy, or, depending on your available space, tar them and save them for a week.
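As a rough sketch (untested; same illustrative variables as before), that transfer-then-clean-up logic could look like:

#!/bin/sh
# Sketch only: copy everything; only if scp reports success do we
# archive the batch and remove the originals.
if scp "$sourcepath"/* "user@$destserver:$destpath" >>/tmp/log.$$ 2>&1
then
    # either just: rm "$sourcepath"/*
    # or keep a copy around for a week if space allows:
    tar czf "/tmp/sent-$(date +%Y%m%d).tar.gz" -C "$sourcepath" . &&
        rm "$sourcepath"/*
else
    echo "transfer failed, leaving source files in place" >&2
fi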

How much time are you losing during the connection process? Aren't you ultimately going to automate this and just drink coffee while the move script runs from crontab and then sends you an email of the log when it's done? Is the time you lose actually a critical factor?

If you're having to move, daily, a million tiny files from server A to server B, something else is wrong with that picture. Would server B accept a tar.gz of each group of files you want to send?
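If it would, a single tar streamed over ssh (an untested sketch; the remote end only needs tar in its PATH) avoids both the per-file connections and a temporary archive file on disk:

# Stream one tarball of the whole directory over a single ssh
# connection and unpack it on the far side.
tar czf - -C "$sourcepath" . | ssh "user@$destserver" "tar xzf - -C $destpath"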

I am sure Era can find several other angles to improve this. A little more information about the timeline of file creation might be useful.

Thank you for the help...
I may be moving around 5000 files and I cannot tar them... I need to send them individually...

I save around 3 hrs if I send them all at once using the *; if I send them individually it takes 3-4 hrs more for the 5000 files.

The way I would approach this, if the contents of the source directory are likely to change while the copy process is running, is to create a shell function that copies one individual file and then deletes it if the copy was successful. I would then start the process by creating a list of all the current files and passing them one by one to the function. This ensures that only files that were copied correctly get deleted. I know this will take extra time and resources, as each individual file requires a new connection to be set up, but it is, in my opinion, the safest way. Of course you will need to set up key authentication rather than password auth, as typing a password 5000 times would be tedious in the extreme.
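Here's a rough sketch of that approach (untested; it reuses the $sourcepath, $destserver and $destpath variables from the earlier posts and assumes key-based auth is already in place):

#!/bin/sh
# Copy one file at a time and delete the local copy only on success.

copy_one() {
    if scp "$1" "user@$destserver:$destpath/" >>/tmp/log.$$ 2>&1
    then
        rm "$1"
    else
        echo "failed to copy $1, keeping it" >&2
    fi
}

# Snapshot the file list first, so files created while the transfer
# runs are left alone until the next pass.
ls "$sourcepath" > /tmp/filelist.$$
while read -r f
do
    copy_one "$sourcepath/$f"
done < /tmp/filelist.$$
rm /tmp/filelist.$$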

Okay, how about moving 100 files at a time into a /sourcepath/tmp directory and doing scp * on that directory? If that succeeds with status 0, delete/archive the files and go get a new batch. Repeat until the main directory is empty. That way, you only have to connect 50 times instead of 5000.
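Something like this untested sketch (same illustrative variables as before):

#!/bin/sh
# Sketch only: move up to 100 files at a time into a staging
# directory and send each batch over a single connection.
mkdir -p "$sourcepath/tmp"
while ls "$sourcepath" | grep -qv '^tmp$'
do
    ls "$sourcepath" | grep -v '^tmp$' | head -n 100 |
    while read -r f
    do
        mv "$sourcepath/$f" "$sourcepath/tmp/"
    done
    if scp "$sourcepath"/tmp/* "user@$destserver:$destpath" >>/tmp/log.$$ 2>&1
    then
        rm "$sourcepath"/tmp/*
    else
        echo "batch failed, see /tmp/log.$$" >&2
        exit 1
    fi
done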

Are your connections actually failing?

Or simply use rsync instead.
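For example (a sketch with the same illustrative variables; rsync's --remove-source-files option deletes each source file only after it has been transferred successfully):

rsync -av --remove-source-files "$sourcepath"/ "user@$destserver:$destpath/"

That gets you the per-file delete-on-success behaviour over a single connection, instead of paying for 5000 separate ones.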

Which way is better for transferring huge files?
I have some 100 files of 10 MB each.
Should I tar all the files and send one huge archive, or send each file separately?