background processing in BASH

I have 3 scripts: 1 parent (p1) and 2 children (child1 and child2).

In the code below the 2 child processes fire almost instantaneously in
the background. Is it possible to know the pass/fail status of each
process "as it happens"?

In the present scenario, although Child2 fails first (exit 1), its status is not displayed until Child1 is complete.

I would really appreciate your help.

Mother Process:

#!/bin/bash

echo -e " Parent continued process 1"
echo -e " Parent continued process 2"
echo -e " ** Kicking off a child process C1** "
./child1 &
t1=$!
echo -e " Parent continued process 3"
echo -e " Parent continued process 4"
echo -e " ** Kicking off a child process C2** "
./child2 &
t2=$!

wait $t1
if [ $? -ne 0 ]
then
    echo " Child Process C1 failed !!! "
fi

wait $t2
if [ $? -ne 0 ]
then
    echo " Child Process C2 failed !!! "
fi

exit 0


Child1
------
#!/bin/bash
echo -e " in child process 1 "
sleep 2
echo -e " in child process 2 "
sleep 2
echo -e " in child process 3 "
sleep 7
exit 0 # success

Child2
------
#!/bin/bash
echo -e " in child process 4"
echo -e " in child process 5"
echo -e " in child process 6"
exit 1 # failed

Thanks,
SSR.:b:

Why not create a file, pipe the child process output to the file, and read the results FROM the file after the child exits? The basic technique is ancient, easy, and portable over many operating systems.
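In skeleton form, the technique looks something like this. It is only a sketch: the two subshells stand in for your real child1/child2 scripts, and the `*.status` filenames are made up for the example. Each child writes its exit status through a temp file and an `mv` (so the parent never reads a half-written file), and the parent polls until it has collected them all.

```shell
#!/bin/bash
# Sketch of the status-file idea. The subshells stand in for
# ./child1 and ./child2; each writes its exit status to its own
# file, renamed into place so the parent never sees a partial write.

dir=$(mktemp -d)

( sleep 2; echo 0 > "$dir/child1.tmp"; mv "$dir/child1.tmp" "$dir/child1.status" ) &
( echo 1 > "$dir/child2.tmp"; mv "$dir/child2.tmp" "$dir/child2.status" ) &

results=""
seen=0
while [ "$seen" -lt 2 ]; do
    for f in "$dir"/*.status; do
        [ -e "$f" ] || continue          # glob matched nothing yet
        status=$(cat "$f")
        results="$results $(basename "$f" .status):$status"
        rm -f "$f"
        seen=$((seen + 1))
    done
    sleep 1                              # poll, don't spin
done
rm -rf "$dir"

echo "completion order and statuses:$results"
```

Note that the parent sees child2's failure as soon as its status file appears, long before child1 finishes.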

Is there a way besides that?

This is bash we are talking about, there are ALWAYS other ways!

One might be to write a pipe for each child process, let the child feed its pipe, and let the parent read them. This is WAY overkill for your example, and there are opportunities for failure, but I have used it successfully. Google for examples.
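One possible shape of the pipe idea, again just a sketch: a single named pipe shared by the children, where each child writes "<name> <exit-status>" as it finishes and the parent reads the lines back in completion order. The subshells stand in for your real scripts.

```shell
#!/bin/bash
# Sketch of the named-pipe idea: children report their name and
# exit status down a shared FIFO; the parent reads them back in
# the order they complete. Subshells stand in for ./child1 and ./child2.

fifo=$(mktemp -u)
mkfifo "$fifo"

( sleep 2; echo "child1 0" > "$fifo" ) &    # slow success
( echo "child2 1" > "$fifo" ) &             # fast failure

exec 3<> "$fifo"    # hold the pipe open read/write so writers never block
order=""
for i in 1 2; do
    read -r name status <&3
    order="$order $name"
    if [ "$status" -ne 0 ]; then
        echo " Child Process $name failed !!! "
    fi
done
exec 3<&-
rm -f "$fifo"
echo "completion order:$order"
```

The `exec 3<>` trick keeps one read-write descriptor on the FIFO so a child's open-for-write never hangs waiting for a reader.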

You might consider defining the child processes as functions instead of scripts, then you do not have the extra I/O.

This is just a prototype; in the real world these are different scripts that are running. How feasible is it to define them as functions?

That depends upon what the final scripts will hold and use, how much data and/or environment they need to have in common with the parent, etc. If they are short and simple extensions of what you already have, then they make good functions. If they are extensive or do many things, you may be better off leaving them as scripts.

Is there some reason that you do not want to use files? I have used the technique for recursive scripts that ran dozens of copies for very short periods of time to gather lots of information very quickly: it works!

The only problem I see with using files is I have to add lines to the child scripts to write a status to a log which my main process keeps reading.

If you need the child processes to calculate or acquire data and exit, leaving the data for the parent, then files are appropriate. If you need the child processes to continue running, gathering data and STREAMING it to the parent, then files are NOT appropriate. For persistent processes: pipes and IPC features are your friends.

The example scripts you provided may have misled me as to the nature, scope, and conditions of your problem. Can you illuminate the issue better, that I might understand your constraints and conditions?

Someone once said: "every problem has an elegant, simple, straightforward and easy to understand wrong solution". (Paraphrased, here, because my memory for quotes resembles a tuna net: it loses all the small stuff!)

Okay, here is the task

Presently I have more than 2 scripts running independently. They run okay and return a code of 0 for success and more than 0 for failure.
The task is to create a main script and run these independent scripts in parallel.
I am able to achieve this by running them in the background and waiting on each PID to get the status of each script. The only problem is that the status of a script (success or fail) is not known until its "wait" command is reached, and the waits are processed sequentially.
I want to know the status of each process in real time.

Is that possible?

I have an example using one of your recommendations, log files, which is working. But I am wondering if there is a nifty way to achieve it some other way.

Thanks,
SSR

Well, if you want real-time response do not wait upon 'wait'!
Again, there are several solutions.

One easy one is to spawn all of your sub-processes, capturing each PID into a list and the output into an individual results file for that sub-process. Loop through the list, gathering and acting upon the results if they have completed. As part of your processing, if they have completed: then remove them from the list and clean up their results file. When your list is empty, exit the loop and take your closing actions.

Again, you could use a FIFO (named pipe) to accomplish the same thing, but that would be overkill if all they will do is succeed or fail and send back that result.
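One more option, if your bash is 4.3 or newer: the `wait -n` builtin returns as soon as ANY background job finishes, carrying that job's exit status, which is exactly the "status as it happens" behavior you asked for. A sketch, with subshells standing in for your child scripts:

```shell
#!/bin/bash
# wait -n (bash 4.3+) returns as soon as ANY background job exits,
# passing along that job's status, so failures are seen in the
# order they happen. Subshells stand in for ./child1 and ./child2.

( sleep 2; exit 0 ) &   # child1: slow success
( exit 1 ) &            # child2: fails first

fails=0
for i in 1 2; do
    if wait -n; then
        echo " a child finished OK "
    else
        echo " a child process failed !!! "
        fails=$((fails + 1))
    fi
done
echo "$fails of 2 children failed"
```

The catch: plain `wait -n` tells you a job's status, not WHICH job it was. Bash 5.1 added `wait -n -p varname` to report the finished PID as well; on older shells you would fall back on the file or pipe techniques above to identify the child.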

Is it possible for you to demonstrate the loop process with a simple example, or use the one that I have provided?

This is what I got using a log file; I get what I am expecting to achieve.
Although Child1 is fired first, the error is detected in the order it happened.

Mother Script
-------------
#!/bin/bash
echo -e " Parent continued process 1"
echo -e " Parent continued process 2"
echo -e " ** Kicking off a child process C1** "
./child1 &
echo -e " Parent continued process 3"
echo -e " Parent continued process 4"
echo -e " ** Kicking off a child process C2** "
./child2 &

while true
do
    if grep "Error:1" ch1.log 2>/dev/null
    then
        echo " error in child 1 detected "
        break
    fi
    sleep 1   # poll once a second instead of spinning; ch1.log may not exist yet
done &
t1=$!

while true
do
    if grep "Error:1" ch2.log 2>/dev/null
    then
        echo " error in child 2 detected "
        break
    fi
    sleep 1   # poll once a second instead of spinning; ch2.log may not exist yet
done &
t2=$!

wait $t1
wait $t2
exit 0

child1
-------
#!/bin/bash

echo " " > ch1.log
echo -e " in child process 1 child1 "
sleep 2
echo -e " in child process 2 child1 "
sleep 2
echo -e " in child process 3 child1 "
sleep 7
echo " Error:1" >> ch1.log
exit 1

child2
------
#!/bin/bash
echo " " > ch2.log
echo -e " in child process 4 child2 "
echo -e " in child process 5 child2 "
echo -e " in child process 6 child2 "
echo " Pretend this script crapped first Error:1" >> ch2.log
exit 123

Output :
-------
Parent continued process 1
Parent continued process 2
** Kicking off a child process C1**
Parent continued process 3
Parent continued process 4
** Kicking off a child process C2**
in child process 1 child1
in child process 4 child2
in child process 5 child2
in child process 6 child2
Pretend this script crapped first Error:1
error in child 2 detected in child process 2 child1
in child process 3 child1
Error:1
error in child 1 detected

This idea works. Note the comments; many are lines I used to determine what was going on when I had a typo. If you think FOREGROUND typos are fun, wait until you have some in the BACKGROUND!
Note also that pasting here takes away all of my careful indenting that would make the script more human readable. (Sigh)

#!/bin/bash
#
# background processing test

LLIST="case1 case2 case3"
ID=$$

mkdir /tmp/${ID}
LOGDIR=/tmp/${ID}
LLOG=${LOGDIR}/looptest.tmp
RANDOM=16552
# echo "LOG DIRECTORY = ${LOGDIR}"

function mytest {
    # function to represent your external script(s)
    VAL=$1
    WHEN=$2

    sleep $WHEN

    if [ "$VAL" == "case2" ] ; then
        echo "0"
    else
        echo "1"
    fi
    echo "DONE"
} ; # end of function or external script

OLIST="$LLIST"
for foo in ${LLIST} ; do
    # shall we start them all
    LOG=${LOGDIR}/${foo}.tmp
    (( j = 5+$RANDOM*10/32767 ))
    # RANDOM sleep just to make things wait a bit
    echo "start ${foo} with wait $j"
    mytest ${foo} ${j} > ${LOG} &
done

# then start a loop to look through the results files
# for completion
TLIST="${OLIST}"
while [ -n "${TLIST}" ] ; do
    TLIST=''
    # sleep 1 # just to slow things down.
    for foo in ${OLIST} ; do
        LOG=${LOGDIR}/${foo}.tmp
        if [ -e ${LOG} ] ; then
            # echo "${LOG} exists"
            if grep DONE ${LOG} >/dev/null 2>&1
            then
                MYRETURN=`grep -v DONE ${LOG}`
                echo "${foo}: ${MYRETURN}" | tee -a $LLOG
            else
                # echo "${LOG} exists but is not DONE yet"
                TLIST="${TLIST} ${foo}"
            fi
        else
            # echo "${LOG} does not exist"
            TLIST="${TLIST} ${foo}"
        fi
    done
    # echo "TLIST = ${TLIST}"
    OLIST="${TLIST}"
done
echo "

RESULTS SUMMARY
"
cat $LLOG

rm -rf ${LOGDIR}