How to check if the script is already running?

I have a shell script in a shared NAS location. Since the NAS is mounted on many servers, the script can be run from any of those servers.

I want to make sure that if the script is already running, it does not allow anyone else to run it. I googled it and got the idea that I can touch an empty file at the beginning of the script, remove it at the end, and check for that file's existence to find out whether the script is already running.

But in my case it won't always work, because my script runs for around 10 minutes. If I cancel the script (Ctrl+C) in the middle of a run, it aborts the execution and leaves behind the file I touched at the beginning.

Please help me achieve this: if the script is already running, it should refuse to execute again and echo some warning.

For example, the script file name is report.sh.

You can build a trap into your script so that when you press Ctrl-C or Ctrl-\ or kill the script, the file will be removed. Put this at the beginning of your script:

trap "rm -f somefile; exit" SIGINT SIGTERM SIGQUIT
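Putting that trap together with the lock file, a minimal sketch of the whole pattern might look like this (the lock-file path /tmp/report.lock is illustrative, not from the thread; on the NAS it would live on the share):

```shell
#!/bin/sh
# Minimal sketch of the lock-file-plus-trap pattern.
LOCKFILE=/tmp/report.lock   # illustrative path

if [ -e "$LOCKFILE" ]; then
    echo "report.sh is already running -- exiting." >&2
    exit 1
fi
touch "$LOCKFILE"

# Remove the lock on Ctrl-C, kill, or Ctrl-\, then exit
trap 'rm -f "$LOCKFILE"; exit' INT TERM QUIT

# ... the real 10-minute job goes here ...

rm -f "$LOCKFILE"
```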

Thanks, Scrutinizer.

But sometimes people press Ctrl+Z, intending to run it in the background, and forget to resume it. If another person runs the same script in the meantime, I don't get the desired output, because both executions write to the same output file.

Considering this, and for some other reasons, I decided to place the script on the server's local filesystem so it can be run from that server only. I know grep alone is not suitable here: if I use 'ps | grep report.sh', it will also match other occurrences (e.g. less report.sh, vi report.sh, etc.). Given that, is there some other way to check whether the script is already running?

But why are they both writing to the same file? If you could just change that, you wouldn't have to worry about it.

I often use /tmp/$$ as a temp file; $$ is the process's PID, and only one process can hold a particular PID at any given time. If you want a less predictable name for security reasons, see 'mktemp'.
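A small sketch of both approaches (file names and contents are illustrative):

```shell
#!/bin/sh
# Per-process temp file: $$ expands to this shell's PID,
# which no other live process shares.
TMPFILE=/tmp/$$
echo "scratch data" > "$TMPFILE"

# Less predictable alternative for security-sensitive cases:
# mktemp creates the file with a random suffix and prints its name.
SAFE=$(mktemp /tmp/report.XXXXXX)
echo "scratch data" > "$SAFE"

rm -f "$TMPFILE" "$SAFE"
```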

Alternatively, you could use lsof or fuser (depending on your OS), supplying the name of your script/binary and grepping through the output; that's what I usually use.
E.g.:

lsof -t -c nameOfScript
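As a sketch, a guard at the top of report.sh could use that command like this. Note the assumptions: -c matches on the process's command name, and whether a shell script shows up under its own name varies by OS and by how it was invoked; the self-PID filter is my addition so the script doesn't see itself:

```shell
#!/bin/sh
# Sketch: abort if lsof finds another process running this script.
# -t prints bare PIDs, -c matches on command name; behaviour varies by OS.
others=$(lsof -t -c report.sh 2>/dev/null | grep -v "^$$\$")

if [ -n "$others" ]; then
    echo "report.sh already running (PID(s): $others)" >&2
    exit 1
fi
```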

@vgersh99, will it kill the existing processes that are already running the script?

@Corona688, in my script I have to redirect various outputs to different files, so I have given the logfile names explicitly. I will consider using the $$ PID in my logfile names.

Let me see if I understand the request correctly: you have a NAS mounted on a number of servers, sharing a shell script on that NAS. Only a single server is allowed to run one instance of the script at any given point in time; all others should be locked out during that run. The result is server-dependent and should be stored locally on the respective server.
As there are many servers, local lock mechanisms won't work. Unless the OS offers lock mechanisms across all servers, you have to create one of your own.
Reading through your posts, I'm not sure you differentiate between the lock file and the result file. You need both of them. While you can store the results locally, you should create the lock file some place available to all servers. One example, not being everybody's preferred choice, would be the script's path on the NAS, or e.g. a "lock" directory thereunder. You could use a simple file name, or a "personalized" one as Corona688 proposed, and use Scrutinizer's trap to remove it at script exit, not forgetting to remove it at the "normal", healthy script end as well.
No running script should be killed by the mechanism; the nth instance should quit gracefully when it sees another instance running, possibly logging such an event when quitting.

When multiple files are involved I often do /tmp/$$-loggerdata, /tmp/$$-output2, and so forth. Don't forget to delete them when they're no longer needed, or you'll have a mess.
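One trap can clean them all up. A sketch (file names illustrative):

```shell
#!/bin/sh
# Sketch: several per-process work files, removed by a single EXIT trap.
LOG=/tmp/$$-loggerdata
OUT=/tmp/$$-output2

trap 'rm -f "$LOG" "$OUT"' EXIT   # runs however the script ends
trap 'exit 130' INT TERM QUIT     # turn signals into an exit, so the EXIT trap fires

echo "log line"   >> "$LOG"
echo "result row" >> "$OUT"
```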

This would be far better than just enforcing exclusion: you'd be improving the usability of your program and removing a lot of potholes.

For locking across multiple servers with shared storage, I would suggest that mkdir is better than simple file creation because of the race condition the latter would leave. A mkdir will either succeed (return code zero: it did not exist and I created it) or fail (return code 1: it already exists, or I cannot create it for some other reason).

The problem you have of cancelling or suspending a job is going to be a tough one to crack. If someone suspends their job for a long time and then resumes it, you would expect that instance not to get confused by other requests; after all, it was the one able to create the lock (however you did that).

You could, I suppose, disable interrupts with stty -isig (not sure if that's quite correct) and put in the trap suggested above, including the EXIT condition (so it cleans up when the script completes or aborts).
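A hedged sketch of that combination; stty -isig does stop the terminal generating the INTR/QUIT/SUSP signals, but only when the script actually has a controlling terminal, so the sketch guards for that:

```shell
#!/bin/sh
# Sketch: suppress keyboard signals during the run, clean up on EXIT.
LOCKFILE=/tmp/report.lock            # illustrative path

trap 'rm -f "$LOCKFILE"' EXIT        # fires on normal end and after a signal trap exits
trap 'exit 130' INT TERM QUIT

saved=$(stty -g 2>/dev/null || true) # remember terminal settings, if we have a terminal
if [ -n "$saved" ]; then
    stty -isig                       # Ctrl-C / Ctrl-\ / Ctrl-Z no longer send signals
fi

touch "$LOCKFILE"
# ... work ...

if [ -n "$saved" ]; then
    stty "$saved"                    # restore the terminal before leaving
fi
```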

You would have to consider what happens if the server crashes and the lock cannot be cleaned up.

I know it sounds like a daft question, but what is the conflict between the various servers running this at the same time? I just don't understand the need, so please help me understand; there may be a better way to organise this.

Kind regards,
Robin