SFTP Not Working With CRON

Hello -
I have a production stream that is scheduled with cron to run each Monday morning. The jobs in the stream perform tasks including
FTP get, load to a DB table, and report processing.

About a month ago I was directed to begin using sftp in these jobs and since then the jobs scheduled with cron no longer work. Each fails when attempting to connect via sftp to the remote server. If I run the
job from the command line of the owning account it will work fine.

I believe that the problem is basically that the cron owner does not have the privileges necessary to perform the sftp connect.

I have tried the following steps to get this to work:

1) Created a cron schedule file (prodid_crontab.txt) under the job owner's account (prodid).
2) Directed the cron daemon to use the prodid_crontab.txt schedule file by issuing the command:

crontab < prodid_crontab.txt

3) Tested this and verified that the cron is using prodid_crontab.txt

The job still failed to establish an sftp connection. So I tried granting execute on the prodid_crontab.txt file just to make sure, and the sftp job step continues to fail.

All replies are greatly appreciated.
Patrick

Maybe start by showing us what your crontab -l output looks like at the moment?

Regards
zxmaus

Did you use a passphrase when you generated the keys for the account that is running sftp?

Remember, cron runs things with a very minimal PATH. You may want to use absolute paths for any executables you call, or set a better PATH yourself, or perhaps source /etc/profile
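For example, here is a quick sketch of the failure mode. The PATH value /nonexistent is a deliberately broken stand-in for cron's minimal environment, not a real value from your system:

```shell
#!/bin/sh
# Sketch: how a stripped-down PATH (like cron's) hides sftp.
# /nonexistent is a deliberately broken stand-in for cron's minimal PATH.
MSG=$(PATH=/nonexistent /bin/sh -c \
  'command -v sftp >/dev/null 2>&1 && echo "sftp found" || echo "sftp not on PATH"')
echo "$MSG"
```

If sftp only resolves with your full login PATH, either hard-code its absolute path in the job script or extend PATH at the top of the script.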

Hi -

Thank you for the replies. I want to mention that I sudo to the job owner's account (prodid) for everything that I've tried. I do not have the authority to log in directly to the prodid account.

Here is my crontab schedule:

32 11 * * 1 /home/sadeadm/data/LMC/accept/WeeklyLabor/sql_loader/sftp_testing/sftp_testing.job abusmgt /appl/oracle1/product/10.2.0.4 /appl/oracle1

As it is set up, the sftp command does not currently require a passphrase. I don't have a good understanding of how the servers were modified to enable sftp.

I also tried using an absolute path (i.e. crontab < /home/prodid/prodid_crontab.txt), but the sftp command is still unable to connect to the remote system when cron executes it.

Thanks,
Patrick

Can we see the contents of /home/sadeadm/data/LMC/accept/WeeklyLabor/sql_loader/sftp_testing/sftp_testing.job ? Also, are you asked for a password for the remote account when you run the command interactively?

Sorry for the delay in my response. There was a small typo in my last post, so I'll recap where I'm at with this problem.

The account owning the sftp command is prodid. I sudo su - prodid to the prodid account to modify and execute scripts. In the directory /home/prodid I executed:

crontab < prodid_crontab.txt

And I verified that cron is using my schedule file (prodid_crontab.txt). Here is the prodid_crontab.txt file:

32 11 * * 1 /home/prodid/data/LMC/accept/WeeklyLabor/sql_loader/sftp_testing/sftp_testing.job abusmgt /appl/oracle1/product/10.2.0.4 /appl/oracle1
 
And here is the file sftp_testing.job that I want cron to execute:
 
#!/bin/sh
##########################################################################
##
## Declarations.
##
## HOMEDIR=/home/prodid/data/LMC/prod/WeeklyLabor
HOMEDIR=/home/prodid/data/LMC/accept/WeeklyLabor
DATADIR=${HOMEDIR}/sql_loader/sftp_testing/
LOGDIR=${DATADIR}/log_files
LOG=${LOGDIR}/sftp_testing.logfile
TODAY=`date "+DATE: %m/%d/%y TIME: %H:%M:%S %p"`
FILEDATE=`date "+%Y%m%d"`
DFILE=zc00799i_pch.out
##
## Process Passed-in Parameter Values
##
ORACLE_SID=$1
export ORACLE_SID
ORACLE_HOME=$2
export ORACLE_HOME
ORACLE_BASE=$3
export ORACLE_BASE
##
## Directory for sqlldr and sqlplus
##
BINDIR=${ORACLE_HOME}/bin
cd ${DATADIR}
echo SAP_NODE_FTP JOB STARTING.........
echo "SAP_NODE_FTP JOB STARTING........." > ${LOG}
if [ -f ${DFILE} ]
then
  rm ${DFILE}
fi
echo "ORACLE_SID:   " >> ${LOG}
echo ${ORACLE_SID} >> ${LOG}
echo "ORACLE_HOME:  " >> ${LOG}
echo ${ORACLE_HOME} >> ${LOG}
echo "ORACLE_BASE:  " >> ${LOG}
echo ${ORACLE_BASE} >> ${LOG}
echo " " >> ${LOG}
echo "#########" >> ${LOG}
echo "File ${DFILE} FTP Operation starting at: " >> ${LOG}
echo ${TODAY} >> ${LOG}
echo "#########" >> ${LOG}
echo " " >> ${LOG}
echo Files to be retrieved from Remote System: >> ${LOG}
echo ${DFILE} >> ${LOG}
echo Oracle Environment: >> ${LOG}
echo ${ORACLE_SID} >> ${LOG}
echo "Bin directory (BINDIR): " >> ${LOG}
echo ${BINDIR} >> ${LOG}
echo " " >> ${LOG}
##
## Connect to Source System and Retrieve PCNODE file.
HOST="sscprd.testing.com"
USER="ficoftp"
sftp ${USER}@${HOST} << EOF >> ${LOG}
  cd ../
  cd FTPOUT
  get ${DFILE}
  quit
EOF

echo " " >> ${LOG}
echo "#########" >> ${LOG}
echo FTP Operation Complete. File Listing: >> ${LOG}
echo "#########" >> ${LOG}
ls -lrt >> ${LOG}
echo " " >> ${LOG}
echo "###" >> ${LOG}
echo "Number of records in retrieved file: "`wc -l < ${DFILE}` >> ${LOG}
echo "###" >> ${LOG}
echo " " >> ${LOG}
echo " " >> ${LOG}
echo "#########" >> ${LOG}
TODAY=`date "+DATE: %m/%d/%y TIME: %H:%M:%S %p"`
echo "sftp_testing.JOB ending at: " >> ${LOG}
echo ${TODAY} >> ${LOG}
echo "#########" >> ${LOG}
cat ${LOG} > ${LOGDIR}/sftp_testing.shlog
mailx -s "sftp_testing Job: ${ORACLE_SID}" `cat ${HOMEDIR}/patrick_mail_list.txt` < ${LOGDIR}/sftp_testing.shlog
echo sftp_testing JOB COMPLETE.

Again, the problem is that the sftp command in the sftp_testing.job script is not able to connect to the remote server when the script is started by cron. If I start it from the command line of the prodid account it connects to the remote server with no trouble.

Note about the sftp command: I do not have to enter a password for sftp. The command to establish the connection with the remote server is sftp user@remote_server.

Many Thanks,
Patrick

When people say "works in shell, not in cron", 90% of the time it's the PATH, because cron's default path is way more minimal than the shell's default path. As I suggested before, either set a better PATH, or call programs by their absolute paths.

On my system sftp's absolute path is /usr/bin/sftp, I can't speak for yours; and /usr/ things almost surely aren't in cron's default PATH. Things like mailx and date might also need absolute paths. cat hopefully shouldn't on a sane system.
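For instance, a quick sketch to print the absolute paths to hard-code; the command list is just taken from the posted job script, so adjust it for your own:

```shell
#!/bin/sh
# Print the absolute path of each external command the job calls,
# so they can be hard-coded in the script (list taken from the posted job).
for cmd in sftp mailx date cat wc ls; do
  location=$(command -v "$cmd" || echo "not found")
  printf '%s -> %s\n' "$cmd" "$location"
done
```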

You might also try editing your cron line like

32 11 * * 1 /home/prodid/data/LMC/accept/WeeklyLabor/sql_loader/sftp_testing/sftp_testing.job abusmgt /appl/oracle1/product/10.2.0.4 /appl/oracle1 >> /path/to/file.log 2>> /path/to/file.err

so that when things go wrong, it dumps the error messages for you into these logfiles instead of hyperspace.
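Side note, offered only as a suggestion: instead of the here-document, OpenSSH's sftp also takes a batch file via -b, which aborts on the first failed command and returns a nonzero exit status, so errors are much easier to detect from cron. A sketch mirroring the posted job (the file name get_pcnode.sftp is made up):

```
## get_pcnode.sftp -- one sftp command per line
cd ../
cd FTPOUT
get zc00799i_pch.out
quit
```

The job script would then call something like /usr/bin/sftp -b get_pcnode.sftp ${USER}@${HOST} >> ${LOG} and can check $? afterwards.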

What Operating System and version are you running?

Which user owns the cron? Just need to know whether or not it is root.

Does "crontab -l" retrieve the lines from the crontab as you expect?

You need to "export" all three variables. When run from your interactive account maybe they are already exported.
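A tiny sketch of why export matters ("mydb" is a made-up SID for illustration):

```shell
#!/bin/sh
# Sketch: only exported variables reach child processes such as sftp or
# sqlplus started from the script ("mydb" is a made-up SID).
ORACLE_SID=mydb
CHILD_BEFORE=$(/bin/sh -c 'echo "$ORACLE_SID"')   # child does not see it yet
export ORACLE_SID
CHILD_AFTER=$(/bin/sh -c 'echo "$ORACLE_SID"')    # child now sees mydb
echo "before export: '$CHILD_BEFORE'  after export: '$CHILD_AFTER'"
```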

Any unredirected error messages from when the script is run from cron will be in unix mail for the account owning the cron.
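One more diagnostic, if you want to see exactly what environment cron gives you: add a temporary crontab entry like this (the path /tmp/cron_env.txt is arbitrary):

```
* * * * * env > /tmp/cron_env.txt 2>&1
```

Then compare the PATH line in that file against the output of env in your login shell, and remove the entry again once you've seen it.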

Hello All -

It was a PATH issue. Adding the following at the beginning of the script containing the sftp command corrected the problem:

PATH=/app/share/bin:$PATH
export PATH

Thanks to all responders!

Sincerely,
Patrick

I had a similar problem, and I resolved it with this script:

#!/usr/local/bin/expect -f
## adjust the path above to your expect program location
# procedure to attempt connecting; result 0 if OK, 1 otherwise
proc connect {passw} {
  expect {
    -re ".*Are.*.*yes.*no." {
      send "yes\r"
      exp_continue
    }
    -re ".*PASSWORD." {
      send "$passw\r"
      expect {
        "sftp*" {
          return 0
        }
      }
    }
    -re ".*password." {
      send "$passw\r"
      expect {
        "sftp*" {
          return 0
        }
      }
    }
    -re ".*Password." {
      send "$passw\r"
      expect {
        "sftp*" {
          return 0
        }
      }
    }
  }
  # timed out
  return 1
}

# read the input parameters
set user [lindex $argv 0]
set passw [lindex $argv 1]
set host [lindex $argv 2]
set remotepath [lindex $argv 3]
set localpath [lindex $argv 4]
set file1 [lindex $argv 5]
set file2 [lindex $argv 6]

# check that all required parameters were provided
if { $user == "" || $passw == "" || $host == "" || $remotepath == "" || $localpath == "" || $file1 == "" } {
  puts "Usage: <user> <passw> <host> <remote path> <local path> <file to send> \[<file to send>\]\n"
  exit 1
}

# sftp to the specified host and send the files
spawn /usr/local/bin/sftp $user@$host
set rez [connect $passw]
if { $rez == 0 } {
  send "lcd $localpath\r"
  send "cd $remotepath\r"
  set timeout -1
  send "put $file1\r"
  if { $file2 != "" } {
    send "put $file2\r"
  }
  send "quit\r"
  expect eof
  exit 0
}
puts "\nError connecting to server: $host, user: $user and password: $passw!\n"
exit 2

Copy this code into a file, e.g. SFTP.exp, and call it from your shell like this:

SFTPLOG=`${EXEPATH}SFTP.exp ${USER} ${PASS} ${HOST} ${RPATH} ${LPATH} '*'`

Then count the successful transfers (sftp reports "100%" for each completed file) and compare that against the number of files you expected to transfer:

SFTPNUM=`echo "${SFTPLOG}" | grep "100%" | wc -l | awk '{print $1}'`
if [ ${SFTPNUM} -ne ${NFILES} ]    ## NFILES: expected number of files
then
echo "WARNING: wrong number of files transferred......."
exit 1
fi

Your problem wasn't similar at all, and your "solution" is completely unrelated. Whether you use expect or not it still would not have been in the PATH.

Furthermore: sftp is designed to not let you use plaintext passwords. So does any sane authentication system, such as su and sudo. It's a security feature to prevent the plaintext password being stored and transmitted with insecure methods, and to slow down brute-forcing. That you had to bludgeon sftp into doing what you wanted with the external "expect" language is a subtle hint -- in mile-high, flashing neon letters -- that you're not supposed to do that.

There are much better, passwordless, secure, reliable authentication methods that don't require an external tool hack, and you'll find them in 30 seconds if you google "passwordless ssh". The poster of this thread is in fact using them already.
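For reference, the usual key-based setup looks roughly like this, assuming OpenSSH; the temp directory is only for illustration (normally the key lives in ~/.ssh), and the remote user/host are the ones from this thread:

```shell
#!/bin/sh
# Sketch of key-based (passwordless) sftp authentication, assuming OpenSSH.
# Generate a key pair with an empty passphrase for the batch account.
KEYDIR=$(mktemp -d)                      # illustration only; normally ~/.ssh
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q

# The public key then goes into ~/.ssh/authorized_keys on the remote host;
# ssh-copy-id does this in one step (user/host taken from the thread):
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" ficoftp@sscprd.testing.com
# After that, "sftp ficoftp@sscprd.testing.com" connects without a password.
ls "$KEYDIR"
```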

Dear signor Corona,
I may have misunderstood, but my solution was only meant to help...
I don't need your useless lesson