
Hi, I have a job which I need to run every 15 minutes; however, I need to ensure that the previous run (from 15 minutes ago) is done before I can start a new one. How can I do that? THANK YOU SO MUCH!

#!/bin/ksh

PID=$$

if [[ -f /var/run/script.pid ]]
then
  OLDPID=$(cat /var/run/script.pid)
  # ps -p checks that exact pid; a plain "ps -e | grep" would also
  # match other pids that merely contain the same digits
  if ps -p "$OLDPID" > /dev/null
  then
    exit 1
  fi
fi

echo $PID > /var/run/script.pid

# your code here

rm -f /var/run/script.pid

Carl
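
One thing to watch in the script above: there is a small window between the ps check and writing the pid file where two instances can both pass the test. A common atomic alternative (a sketch; the lock path is illustrative) uses mkdir, which either succeeds or fails in a single step:

```shell
#!/bin/ksh
# mkdir creates the directory atomically: if two instances race,
# exactly one mkdir succeeds and the other one fails
LOCKDIR=/tmp/script.lock.$$   # illustrative path

if ! mkdir "$LOCKDIR" 2>/dev/null
then
    exit 1    # another instance holds the lock
fi

# your code here

rmdir "$LOCKDIR"
```

The same atomicity argument is why mkdir-based locks are a long-standing shell idiom: there is no separate "test" step for another process to sneak between.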

Use a temporary lockfile containing the process id, e.g...

#!/usr/bin/ksh

LOCKFILE=/tmp/lockfile

#---At the start of your script, check whether a lockfile exists
if [[ -f $LOCKFILE ]]
then
    #----If the lockfile does exist then check that the process is still running
    #     since it may have aborted and left the lockfile behind
    if ps -p $(<$LOCKFILE) >/dev/null
    then
        echo job is still running
        exit
    fi
fi

#---Must be okay to run, so create the lockfile containing the process id
echo $$ > $LOCKFILE

#----Rest of script goes here
:

#----end
rm $LOCKFILE

Not tested.
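
One refinement worth considering (also untested here): setting a trap removes the lockfile even when the script aborts partway, so the stale-lockfile check above is needed less often. A sketch, with a unique name just to keep the example self-contained:

```shell
#!/bin/ksh
LOCKFILE=/tmp/lockfile.$$   # unique name just for this sketch

echo $$ > $LOCKFILE
# Clean up on interrupt or kill -TERM, and on any normal exit;
# the EXIT trap fires whether the script ends or calls exit
trap 'rm -f "$LOCKFILE"; exit' INT TERM
trap 'rm -f "$LOCKFILE"' EXIT

#----rest of script goes here
:
```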

Just thought I'd include this as a note.

When creating lockfiles to keep a process from spawning while the previous instance is still running, it is better to avoid common names for the lockfile, and to write more than just the process id into it.

First, if a common name is used, there is a high probability that another script will adopt the same naming convention (the same lockfile name) and use that filename for its own purposes.

Second, if only the process id is stored in the lockfile, then on a busy system there is a real possibility that process 'A' running with pid1 finishes its work, the system grants a new process 'B' that same pid1, and we end up blocking on a process we shouldn't be.

Hence it is better to add some more information, such as the parent process id or a timestamp, to guarantee uniqueness.
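
One way to make the check robust against pid reuse, as a sketch (a variation on the suggestion above: instead of storing extra fields, verify that the pid's command line still matches this script; the lockfile name is illustrative):

```shell
#!/bin/ksh
# Store the pid, and when checking it later, confirm that the process
# holding that pid really is an instance of this script, not an
# unrelated process that was later granted the same pid.
LOCKFILE=/tmp/myjob.pid.$$   # illustrative name

if [[ -f $LOCKFILE ]]
then
    OLDPID=$(<$LOCKFILE)
    # ps -o args= prints only that pid's command line (empty if gone)
    if ps -p "$OLDPID" -o args= 2>/dev/null | grep -q "$(basename "$0")"
    then
        exit 1   # a genuine previous instance is still running
    fi
fi

echo $$ > $LOCKFILE
```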

Finally, you could remove the write permission bits from the lockfile once it is written, so processes that try to overwrite it receive an error. This is not truly secure, but it is a step ahead.
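
The permission-bit idea might look like this (a sketch; the owner and root can still chmod the file back, so treat it as a guard rail, not real security):

```shell
#!/bin/ksh
LOCKFILE=/tmp/myjob.pid.$$   # illustrative name

echo $$ > $LOCKFILE
# Drop write permission so a careless overwrite of the lockfile by
# another process fails with "permission denied"
chmod 444 $LOCKFILE

# ... script body ...

# unlink needs write permission on the directory, not the file,
# so rm -f removes the read-only lockfile without prompting
rm -f $LOCKFILE
```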

Thank you so much for all your excellent advice! However, I would like to add one more detail: I actually have 2 jobs (using the same script with different parameters) starting at the same time, so the provided method would not work as-is. Is there a workaround?

Change the name of the lockfile depending on the parameters, e.g...

LOCKFILE=/tmp/$(basename $0).$(echo $*|tr ' ' '.')
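
As a quick illustration (the job names are made up), each parameter set then gets its own lockfile, so the two jobs no longer block each other:

```shell
# Simulate being called as: myscript.ksh jobA daily
set -- jobA daily

LOCKFILE=/tmp/$(basename "$0").$(echo $*|tr ' ' '.')
# e.g. /tmp/myscript.ksh.jobA.daily -- a different parameter set
# (say "jobB daily") yields a different lockfile name
echo "$LOCKFILE"
```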