sudo with at is failing.

Hi,

I'm hopin' ya can give me an idea or two here.

I'm writing a file transfer tracking program. Users log in via FTP or HTTPS. These users have NO shell access; I'll get to that in a minute. When they upload or download a file, a script is invoked to log the transfer in a database, send an email to the appropriate people, and schedule the file for deletion. The files are owned by internal users, but the scripts are spawned by external users, hence the need for sudo; otherwise file permissions wouldn't allow deletion.
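For context, the sudo side is set up with something along these lines (a simplified sketch with hypothetical names, not my exact config):

```
# /etc/sudoers.d/ftp-cleanup (hypothetical example)
# Let an external vendor account run only the cleanup commands as root,
# without a password prompt.
vendor1 ALL=(root) NOPASSWD: /bin/rm, /usr/bin/at
```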

Here's the problem. When I schedule the file deletion it doesn't happen. If I delete the file immediately it works. Here's what I have:
Works:

# Delete file NOW
sudo /bin/rm -f "/$rootdir/$vendor/$outgoing/.$line"

Doesn't work:

# Schedule for deletion
echo "sudo /bin/rm -f \"/$rootdir/$vendor/$outgoing/.$line\"" | \
      sudo /usr/bin/at now + 2 minutes 2>/dev/null

The at parameters above are for testing. In reality the files would get deleted at midnight 2 days after download, or 30 days after upload. I've tried a hundred different variations on the at command.
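To make the real schedule concrete, here's a dry-run sketch of how the timing gets picked (the function and variable names are made up for illustration; the real call pipes the rm command into at instead of printing):

```shell
#!/bin/sh
# Dry-run sketch: pick the at(1) time spec for the production schedule.
# Downloads purge at midnight two days later, uploads after 30 days.
schedule_purge() {
    kind=$1 file=$2
    case $kind in
        download) when="midnight + 2 days" ;;
        upload)   when="midnight + 30 days" ;;
        *)        when="now + 2 minutes" ;;   # test mode
    esac
    # The real call would be:
    #   echo "sudo /bin/rm -f \"$file\"" | sudo /usr/bin/at $when
    printf 'at %s : rm -f %s\n' "$when" "$file"
}

schedule_purge download /ftp/vendor1/outgoing/file1.zip
# -> at midnight + 2 days : rm -f /ftp/vendor1/outgoing/file1.zip
```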

Here's an example of the spooled at job:

#!/bin/sh
# atrun uid=0 gid=0
# mail     root 0
umask 22
HOME=/ftp/vendor1; export HOME
SHELL=/etc/ftponly; export SHELL
LOGNAME=root; export LOGNAME
USER=root; export USER
USERNAME=root; export USERNAME
PATH=/usr/bin:/bin; export PATH
SUDO_COMMAND=/usr/bin/at\ now\ +\ 2\ minutes; export SUDO_COMMAND
SUDO_USER=vendor1; export SUDO_USER
SUDO_UID=1010; export SUDO_UID
SUDO_GID=1000; export SUDO_GID
cd /usr/libexec/usermin/updown || {
         echo 'Execution directory inaccessible' >&2
         exit 1
}
${SHELL:-/bin/sh} << `(dd if=/dev/urandom count=200 bs=1 2>/dev/null|LC_ALL=C tr -d -c '[:alnum:]')`

sudo /bin/rm -f /ftp/vendor1/outgoing/file1.zip

Since the users don't have shell access... could that be stopping the at job from working? What doesn't make sense to me is that it works via a straight sudo, but not as a queued job.

Any ideas???

Thanks

Start by typing

which sudo

And see if the path is included in PATH of your script...

Yep, it's definitely in the path. I've tried it with and without the full path on the command line.

In my haste posting yesterday, I forgot to mention that FTP users seem to have no problem here. I'm using vsftpd as the server software. All instances of the at command are spawned as the user ftp-files, an internal user with shell privileges. Users that access the site via HTTPS, however, spawn all processes as the user they log in as, i.e. external users with no shell privileges. Again, the weird thing is that the immediate removal works but at doesn't. Both means of access trigger the same script file; the only difference is the user name.

I've also noticed that the user under which the job is spawned changes depending on whether or not I use sudo for the at command itself.

If nothing else, I've at least got to look forward to the "Brick Wall" until I get this figured out.:rolleyes:

-----Post Update-----

It may not be right, but I've found a work around.

I had to call the script that spawns the at command via sudo. For whatever reason, calling the at command directly wouldn't work, sudo or not. I even tried just echoing text to a file. Nothing... nothing in /var/log/anything. Just nothing.

Backing up one script and making the sudo call there worked. Although I still want to figure out why the direct call didn't work, I'll have to put it aside for now and finish my task.
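In sketch form, the workaround looks like this (the wrapper name is hypothetical, and the at call is stubbed with a printf so the sketch runs standalone):

```shell
#!/bin/sh
# schedule-purge.sh -- sketch of the workaround: the web-facing script
# runs "sudo schedule-purge.sh FILE", and this wrapper (already running
# as the sudo target user) calls at itself, so the queued job inherits
# a sane environment.
wrapper() {
    file=$1
    job="/bin/rm -f \"$file\""
    # Real version:  echo "$job" | /usr/bin/at midnight + 2 days
    printf 'queued: %s\n' "$job"
}

wrapper /ftp/vendor1/outgoing/file1.zip
# -> queued: /bin/rm -f "/ftp/vendor1/outgoing/file1.zip"
```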

Thanks for the input.

Maybe user not in at.allow (or in at.deny...)?
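Easy enough to check. Per at(1): if /etc/at.allow exists, only users listed there may use at; otherwise, if /etc/at.deny exists, everyone except those listed may. A small sketch of that rule (file paths parameterized so it's easy to try; at_access is a made-up helper name):

```shell
#!/bin/sh
# Sketch of the at(1) access rule: the allow-file wins if present,
# otherwise the deny-file is consulted.
at_access() {
    user=$1 allow=$2 deny=$3
    if [ -f "$allow" ]; then
        grep -qx "$user" "$allow" && echo allowed || echo denied
    elif [ -f "$deny" ]; then
        grep -qx "$user" "$deny" && echo denied || echo allowed
    else
        # Neither file exists: behavior varies; many systems allow only root.
        echo allowed
    fi
}

# On a real box:
#   at_access vendor1 /etc/at.allow /etc/at.deny
```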

Now I KNEW there was a reason I asked the experts.

That never even crossed my mind. I can hear Jim Varney rolling over in his grave sayin' "Golly Bob howdy Vern, Why didn't ya think 'o' that"!!! :D:D:D

One thing makes me think that mightn't be it: would the job even be accepted into the queue if the user wasn't allowed to use at? And wouldn't there be some output to a log... somewhere???

I'll definitely test this out if I can this afternoon, if not Monday morning. (Trying not to work (well think too much) on the weekends). I'll be sure to post whatever I come up with.

THANKS!!!

I imagine your "2>/dev/null" is suppressing an error message; try redirecting it to a log file to debug.

# Schedule for deletion
echo "sudo /bin/rm -f \"/$rootdir/$vendor/$outgoing/.$line\"" | \
      sudo /usr/bin/at now + 2 minutes 2>/tmp/at.err

If that's not the problem, perhaps an strace command would help debug what at is up to:

# Schedule for deletion
echo "sudo /bin/rm -f \"/$rootdir/$vendor/$outgoing/.$line\"" | \
      sudo strace /usr/bin/at now + 2 minutes 2>/tmp/at.err

--
qneill

I've run it several times with and without piping to null, piping to a file, etc., both for the actual command and for the at scheduling.

Strace might very well be the best option.

Thanks, I'll give it a whack Monday!

Also see what happens if you change

SHELL=/etc/ftponly; export SHELL

to use a standard shell such as sh, bash or ksh.
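The reasoning: at(1) records the caller's $SHELL at submission time and runs the spooled job body with it (that's the ${SHELL:-/bin/sh} line in your spool file). With SHELL=/etc/ftponly, the job body gets handed to something that isn't a shell at all. One way to test without changing the account is a one-shot override on the at call itself, e.g.:

```shell
# One-shot override when queueing the job, so at records a real shell
# in the spool file instead of /etc/ftponly (caveat: depending on the
# sudoers env_reset/env_keep settings, sudo may rewrite SHELL again):
#
#   echo "sudo /bin/rm -f \"$file\"" | SHELL=/bin/sh /usr/bin/at now + 2 minutes
#
# The VAR=value cmd prefix affects only that one command:
SHELL=/bin/sh sh -c 'echo "queued job would run under: $SHELL"'
# -> queued job would run under: /bin/sh
```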

Sometimes one (I) must step back and see the obvious. That's a fantastic idea. It goes to the top of the list for Monday.

Another one of those "Why didn't I think of that" things. Especially since I even complained that a non-valid shell might be the reason.

Must have been code blind.

Thanks!

Well, giving the user a valid shell works!!! The only problem is... I can't use it. But no problem, the workaround I described before works as well.

Thanks to all for your help! It's been invaluable.