Running jobs in parallel - server consumption?

I have a requirement where jobs/scripts need to be run in the background. The concern is that there are around 20 scripts which need to run in the background. Does running all 20 scripts/jobs at the same time in the background consume much server utilization? If so, what would be an efficient way to run the jobs in parallel with less server consumption? I'm using ksh. Please throw some light on this.

The bottom line is that it depends on exactly what those 20 scripts are doing. If each of them starts a process which consumes a large amount of memory on the machine, or each does a large file transfer, then yes, it is possible that you will impact the machine by running all 20 at the same time. On the other hand, the scripts might not contribute significantly to the load on the machine, and running them all concurrently will be fine.

If you decide that they must not all be executed concurrently, then you could build a simplistic 'starter' into your driving script that starts the maximum number you deem safe to run concurrently, waits for those to finish, and starts more; this would continue until all scripts have been executed. If you need more sophisticated scheduling (dependencies and/or load average consideration), then you'll need to think about using a scheduler to manage the jobs.
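A minimal sketch of such a starter in ksh could look like this; MAX_JOBS and the script names are assumptions for illustration, not your actual jobs:

#!/bin/ksh
# Run at most MAX_JOBS scripts at a time, wait for the batch
# to finish, then start the next batch.
MAX_JOBS=5
set -A scripts ./job01.sh ./job02.sh ./job03.sh ./job04.sh ./job05.sh ./job06.sh

i=0
while [ $i -lt ${#scripts[*]} ]
do
    n=0
    while [ $n -lt $MAX_JOBS ] && [ $i -lt ${#scripts[*]} ]
    do
        ${scripts[$i]} &      # start one script in the background
        i=$((i + 1))
        n=$((n + 1))
    done
    wait                      # block until the whole batch has finished
done

You can tune MAX_JOBS up or down depending on how heavy the individual scripts turn out to be.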

This may be vague, but without knowing any details about the potential resource consumption of the scripts (CPU, memory, I/O), or the hardware (installed memory, number of cores, etc.), it's difficult to guess.

You can run them with 'nice' to prevent them from lagging the system too much. It won't prevent them from consuming resources, but other things will get priority over them for CPU time.
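For example (myjob.sh is just a placeholder name):

# Run the job at a lower scheduling priority; nice value 19 is the lowest.
nice -n 19 ./myjob.sh &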

Several warnings for you:

  1. Never run a job in the background without storing its PID or waiting for it to end (see the sketch after this list)
  2. Never run a process in the background and quit
  3. If your shell script gets too complicated then it might be a good idea to use a programming language like perl, python, java, etc.
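Regarding point 1, here is a minimal sketch of capturing a background job's PID in ksh and waiting for it; myjob.sh is a placeholder:

#!/bin/ksh
./myjob.sh &            # start the job in the background
pid=$!                  # $! holds the PID of the most recent background job

# ... other work can go here ...

wait $pid               # block until that specific job ends
echo "myjob.sh (PID $pid) finished with status $?"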

I have seen some "framework" that executed:

nohup $0 -magic-argument $* &

As a result, the user called a command and then had a disowned process running on the system. It might never end. It might do whatever it wants. You cannot tell when it has finished.
As a workaround, people use "ps -ef|grep scriptname", but that fails for long paths because ps won't show the full path then.
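If the launcher stores the PID somewhere, you can probe the process directly instead of grepping ps output; a rough sketch, with the PID file path as an assumption:

#!/bin/ksh
pidfile=/var/run/myjob.pid     # assumed location where the launcher wrote $!

# kill -0 sends no signal; it only checks that the process exists
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
then
    echo "job is still running (PID $(cat "$pidfile"))"
else
    echo "job is not running"
fi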

Now the performance: if there are only 20 processes, then you shouldn't worry about it unless your server is very limited (like 4 MiB of RAM and a 50 MHz CPU). Everything depends on how often it happens, what extra operations are performed for every process (e.g. loading a complex environment) and how many resources the processes consume. It might be that your server is not capable of running even a single process like that (e.g. the process might try to back up the whole internet :wink: ).

If you run the jobs in parallel, then the usual purpose is precisely that they can consume more server resources, so that more work gets done in less time. Just test the effects first on a test machine, as you would with anything before running it on a production server. You can tune it by running fewer or more processes in parallel, or by scheduling the processes for a time period when there is little interactive use of the server...

Thanks, all, for the suggestions given.