bash script - sftp batch file - stop on failure

Hello all,

I am currently writing a script to send files to a server over sftp. When the sftp put command succeeds, it will perform a local move from within the sftp shell to another folder (this is done so that no duplicates are sent when the script is rerun).
Therefore I came up with the following solution:

# Build the sftp batch file: for every file, upload it, write STOP to the
# check file, remove the local copy, then overwrite the check file with OK.
for FILE in `ls -tr $tbtsftp`; do
    echo "put "$FILE                                      # progress message on screen
    echo "cd "$remotepath_VAN              >> $commandlist
    echo "put "$FILE                       >> $commandlist
    echo "!echo STOP > "$SFTPCHECK_PUT_RM  >> $commandlist
    echo "!rm -f "$FILE                                   # progress message on screen
    echo "!rm -f "$FILE                    >> $commandlist
    echo "!echo OK > "$SFTPCHECK_PUT_RM    >> $commandlist
done

echo
echo "Executing commandlist over sftp"
echo    
sftp $sftplogon@$sftpserver -b $commandlist 2>&1

So basically, I first put the file over sftp and then create a check file (for a follow-up script), followed by the local remove (after which the check file is overwritten again).
This would work perfectly if the sftp shell (or even this whole script, for that matter) would stop and go into error on a failed !rm or put. That is not the case, however. After the !rm fails (just tested), it simply continues and puts the next file.
The whole idea is that the follow-up script checks the content of the generated check file and, based on that content, sends us a message to check manually because the local remove failed.
Also, I was ordered not to make a separate connection for each file (so the sftp batch really is necessary).

I hope someone can help, because this problem is driving me crazy.
I have also already tried the set -e option at the beginning of the script, but that does not appear to have any effect on an sftp batch.

Thanks in advance.

You could always keep the tests and control logic external. If you have SFTP working, can you also log in over SSH to the same server with that account? Perhaps a controlling local script could put a single file with SFTP, and then an audit and rename could be done over an SSH connection, where you can recognise errors.
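Just to sketch what I mean (untested, and put_one_file.cmnds plus the /incoming paths are made-up names):

# one put over sftp, then the audit/rename over ssh where the exit status is testable
sftp -b put_one_file.cmnds $sftplogon@$sftpserver || exit 1
ssh $sftplogon@$sftpserver "mv /incoming/$FILE /incoming/done/$FILE" || {
   echo "Remote audit/rename failed for $FILE" >&2
   exit 1
}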

Does that idea help?

Robin
Liverpool/Blackburn
UK

sftp's rm should abort on failure, according to the man page.

That would indeed work, except for one problem: the server we send this to is a third-party server, and only sftp connections from our side are allowed.
On top of that, the directory where we put the files (the only one we are allowed to put them in) has some service running on it that picks up the files and puts them somewhere else. This pickup happens as soon as I put a file there. So I really need to be able to perform the check from my side only.

Another idea I have is to use:

echo "!rm -f "$FILE >> $commandlist
echo "!echo "$?" > "$SFTPCHECK_PUT_RM >> $commandlist

but I am currently testing this :slight_smile:

Thanks for the response, but no solution yet.

---------- Post updated at 03:05 PM ---------- Previous update was at 02:50 PM ----------

I know, that is what I would have expected too, but it just continues, as you can see in the output below (after I changed the permissions on the files to force the rm to fail):
Executing commandlist over sftp

sftp> 
sftp> cd /home/JOBANT/VAN_SIM/
sftp> put 5414488000301.5414488000004.A00930185.CONTRL.edi
Uploading 5414488000301.5414488000004.A00930185.CONTRL.edi to  /home/JOBANT/VAN_SIM/5414488000301.5414488000004.A00930185.CONTRL.edi
sftp> !rm -f 5414488000301.5414488000004.A00930185.CONTRL.edi
rm: cannot remove `5414488000301.5414488000004.A00930185.CONTRL.edi': Permission denied
Shell exited with status 1

!rm is executing a shell command though, is it not? As opposed to sftp's built-in rm.

Theory only, haven't tried it myself :).

!rm file

is what you do when you want to remove a local file while in an sftp shell. (I hope I'm saying this right.)

Yes - I was confusing my local and remotes...

EDIT: Although it would work (abort) if you ran it the other way round, i.e. as a pull using sftp's internal rm :).

The manual pages refer to internal SFTP commands only. You are escaping to the shell on your local machine, so SFTP doesn't care what happens there. The !rm will try to delete the local file, which may be what you want, but you would be better off closing the SFTP connection and then removing the local file in your shell script, where you can test the return code.

Something like:-

$ cat sftp.cmnds
put localfile remotefile

$ cat myscript.sh
#!/bin/ksh
sftp a@b -b sftp.cmnds
RC=$?
if [ $RC -ne 0 ]
then
   ## Handle SFTP failure ##
else
   rm localfile
   RC=$?
   if [ $RC -ne 0 ]
   then
      ## Handle local file deletion error ##
   fi
fi

Does that help?

Robin
Liverpool/Blackburn
UK

This is what I first came up with, but there is a problem with doing it that way.
Since there is a large number of files, the script opens a lot of separate sftp connections. After an upgrade of the third-party server we have been having many issues with sftp failures, so I was charged with changing the scripts so that all files get sent over one sftp connection (this way we avoid most of the failures).
So, bottom line: setting up a separate connection for every file is out of the question.

(Sorry I'm being so difficult, but if it were easy I wouldn't have posted it :wink: ) :cool:

As I mentioned in an edit above, can you do a pull rather than a push? Then you could use sftp's internal rm, and it would abort on fail.
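Something like this as the pull-side batch, purely as a sketch (untested, and /outgoing is a made-up path) - both get and rm are sftp internals here, so a failure of either one aborts the whole batch:

cd /outgoing
get 5414488000301.5414488000004.A00930185.CONTRL.edi
rm 5414488000301.5414488000004.A00930185.CONTRL.edi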

Ah. That rather shoots my suggestion out of the water. I suppose you have already exhausted your attempts to find out why the SFTP doesn't always work.

You could have a retry loop to allow up to 5 failed attempts perhaps? Or, could you sleep for a second in between? I suppose that if there really are lots of files then that could add considerable time, so could you sleep for a second every 5 SFTPs? I suppose it depends on how frequently you get a failure.
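Roughly something like this, as a sketch only (untested, and assuming the batch can safely be re-run after a dropped connection):

TRIES=0
until sftp $sftplogon@$sftpserver -b $commandlist 2>&1
do
   TRIES=$((TRIES + 1))
   if [ $TRIES -ge 5 ]
   then
      echo "sftp still failing after $TRIES attempts" >&2
      exit 1
   fi
   sleep 1    # brief pause before the next attempt
done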

Could it be a DNS/IP lookup issue at the remote server perhaps?

Okay, I'm probably waffling on about stuff you've already tried. Never mind.

I will keep thinking.

Robin
Liverpool/Blackburn
UK

That would indeed work, only, as I mentioned before, the third-party server is a black box to us :slight_smile: (no possibility of getting scripts onto it either :p)

---------- Post updated at 03:35 PM ---------- Previous update was at 03:33 PM ----------

Indeed, as you said, we have given up trying to figure out why it fails (not much help from the third party either). Their final answer was: we do not see any problems on our side. :wall::wall::wall:

It would be so much nicer, however you would then need to change firewall settings to allow the other end to open a connection to your server, with all the potential risks that brings. I think you are right to dismiss the option of SSH driving an SFTP back to you to do a get and rm.

I see you have got the usual "We're okay." type of response. There must be logging somewhere to say that the connection was refused, dropped, whatever. This is laziness on their side, especially if they made the change that broke it. I agree with your :wall::wall::wall:

No thoughts yet. Perhaps at 3am I will get an "A-ha!" moment.:eek:

Robin
Liverpool/Blackburn
UK

Just to clarify, sftp -b isn't aborting on a failed put either?

That's the weird thing: it does abort on a failed put, just not on any local command (preceded by !).

I think that's expected behaviour - the built-ins abort, but shell can do whatever.
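A tiny illustration of what I mean, as a batch file (made-up names, untested): the first put is an sftp internal, so a failure there aborts the whole batch; the !false is a local shell escape, and even though it exits non-zero, sftp just carries on to the second put.

put firstfile
!false
put secondfile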

Your issue then is just that you don't know which (local) unlinks failed? Couldn't you create a list of the files transferred and just check which ones still exist afterwards?
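Rough sketch of that idea (untested; listoffiles.txt and rm_failed.txt are made-up names, and it assumes the files live under $tbtsftp):

> rm_failed.txt
while read FILE
do
    [ -e "$tbtsftp/$FILE" ] && echo "$FILE" >> rm_failed.txt    # still here locally, so its rm must have failed
done < listoffiles.txt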

Yes, that was also my thought :slight_smile:
but my boss shot the idea down for the following reason:
how will you know whether the echo file > something has worked or not? :s
The answer is: you won't.

He's kind of a nitpicker :slight_smile:

Do you need to?

You already know which files you're attempting to transfer ( ls -tr $tbtsftp > listoffiles.txt ). Can you check which files actually transferred by increasing sftp's log level and saving the output somewhere? (I'm not sure how much it actually outputs in batch mode.)

Yes, I know that, and I know I could write a list directly after each transfer, like this:

sftp> put $file
sftp> !echo $file >> somecheckfile

but my boss won't allow me to do it like that, since your suggestion is too much manual work when you have a list of 10000+ files,
and my example is not 100% failsafe (this is far-fetched, but if the echo fails in my example, the file will be sent again on the next run, since it won't be in the list of files to be deleted).

For today I am quitting; time to get home.
I'll be back here tomorrow.
Any suggestions are welcome.

I think you missed my point. I was suggesting something like:

ls -tr $tbtsftp > listoffiles.txt

for FILE in `cat listoffiles.txt`; do
    echo "cd "$remotepath_VAN >> $commandlist
    echo "put "$FILE >> $commandlist
done

echo
echo "Executing commandlist over sftp"
echo    
sftp -vvv $sftplogon@$sftpserver -b $commandlist > somelogfile.txt 2>&1   # capture stdout and the -vvv output (stderr) in the log file

And then compare listoffiles.txt (or $commandlist, for that matter) to somelogfile.txt using an awk script or whatever. No "!echo"s in the batch file and nothing manual - somelogfile.txt is just SFTP's own "Uploading /tmp/x.sh to /tmp/nosuchdir/y.sh", "Couldn't get handle: No such file or directory", etc.

BUT - I'm not sure exactly what SFTP itself logs in batch mode, and I don't have any keyed machines to check with at the moment. So it might not be feasible.
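Purely as a sketch of the comparison (not verified - it assumes the log really does contain an "Uploading <local> to <remote>" line for every successful put, and uploaded.txt / not_uploaded.txt are made-up names):

grep '^Uploading ' somelogfile.txt | awk '{n=split($2,a,"/"); print a[n]}' | sort > uploaded.txt   # bare filenames actually uploaded
sort listoffiles.txt > expected.txt
comm -23 expected.txt uploaded.txt > not_uploaded.txt   # anything we tried to send but never saw uploaded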