Background processes in a pipeline don't run asynchronously?

I'm wondering what causes this behavior. Mainly, given the first case below, why does the second change the behavior of `&` so that the jobs run sequentially, merely because the command group containing the calls to f() is part of a pipeline? The 4th example bizarrely works like the first again, even with the piped output. The behavior of the 5th example I suspect has a different cause, but it's interesting nevertheless.

Each f() reports its identity ($1) on stderr each time through its read loop, so we know whether they're executing concurrently, then prints its total after its stdin is exhausted. In each example, the two f's share a stream of 21 zeros (the output of printf '%.s0' {0..20}).

Bash 4.2:

#!/usr/bin/env bash

set +m -f           # Job control off, globbing off.
shopt -s lastpipe   # Run a pipeline's last element in the current shell (needs job control off).

f() {
    local -i x y                   # Store each character in $x. $y is just a counter.
    while read -rN1 "x[y++]"; do
        printf '%d ' "${1}" >&2    # keep track of which job this is.
    done
    printf "${#x[@]} "             # Print the total number of reads by each job.
}

g() {
    f 1 <${1} &
    f 2 <${1}
}

declare -i ex=1

echo "example $((ex++)):"
# This works as I expect: f 1 is backgrounded, and the two readers of one pipe each get about half the input.

read -ra x < <({ f 1 & f 2; } < <(printf '%.s0' {0..20}))
printf '%b\n' "\n${x[@]}\n"

echo "example $((ex++)):"
# In this equivalent version, f 1 is effectively not backgrounded, and one reader consumes all of the input; I can only assume it's because the group is part of a pipeline:

{ f 1 & f 2; } < <(printf "%.s0" {0..20}) | {
    read -ra x
    printf '%b\n' "\n${x[@]}\n"
}

echo "example $((ex++)):"
# Same as above. Unsafe wordsplitting for brevity.

printf '%s\n' $'\n'$(printf '%.s0' {0..20} | { f 1 & f 2; })

printf '\n%s\n' "example $((ex++)):"
# Identical to the above two examples, except that rather than using the command group's stdin, the FD is saved to $x and each f redirects from it individually. Now it behaves as the first example again. (WTF???)

{ f 1 <&${x} & f 2 <&${x}; } {x}< <(printf "%.s0" {0..20}) | {
    read -ra x
    printf '%b\n' "\n${x[@]}\n"
}

echo "example $((ex++)):"
# In this version, the name of the pipe is passed to g, which redirects it individually for each f, but it doesn't work. No matter how much data it gets, the pipe is closed before the second call to f sees it:

read -ra x < <(g <(printf "%.s0" {0..20}))
printf '%b\n' "\n${x[@]}\n"
wait

# vim: set fenc=utf-8 ff=unix ts=4 sts=4 sw=4 ft=sh nowrap et:

output:

 $ ./pipefork 
example 1:
2 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 2 1 2 2 1 
13
10

example 2:
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 
1
22

example 3:
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
1
22

example 4:
2 1 2 1 2 1 2 1 2 1 1 2 1 1 2 1 1 2 1 2 1 
10
13

example 5:
1 1 ./pipefork: line 16: /dev/fd/63: No such file or directory
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 
22

I think that in g(), when the fd from ${1} goes to the backgrounded f(), that first f() consumes the process substitution at once, and after that the pipe behind the created fd 63 is gone. Therefore the `/dev/fd/63` fd can no longer be found, so g() can't call both f()'s via the same fd at the same time.
Maybe you can try it like this :wink:

g() {
    exec 4<${1}
    f 1 <${1} &
    f 2 <&4
}

regards
ygemici

You wouldn't want exec there. Better would be to put the redirect after the function definition, as sketched below. Yours is essentially equivalent to the example before it.
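
Something like this is what I mean (an untested sketch; fd 3 is an arbitrary choice). The fd is opened once per call to g and shared by both readers, like example 4:

g() {
    f 1 <&3 &
    f 2 <&3
} 3<${1}    # Performed each time g runs; ${1} here is g's first argument.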

Anyway my question was mostly answered on the Bash mailing list.

lists.gnu.org/archive/html/bug-bash/2011-10/msg00019.html

POSIX says that when job control is disabled, a background command's stdin is implicitly redirected from /dev/null unless it's explicitly redirected. Bash for some reason only sometimes follows this, and the pipe somehow affects whether that happens. The background processes should never be reading the shared input unless their stdin is explicitly redirected. The 5th example from an updated version of the above shows another strange behavior, where the stdin of a list is determined by any redirects in the list.
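
Here's a minimal way to see the POSIX rule in isolation (a sketch; whether bash honors it here is exactly what's in question above):

#!/usr/bin/env bash
set +m                               # Job control off, so the POSIX rule applies.
printf 'data\n' | { cat & wait; }
# Per POSIX, the backgrounded cat's stdin is implicitly /dev/null, so
# nothing should print. Whether bash actually does that, and whether an
# enclosing pipeline or an explicit redirect changes it, is the
# inconsistency described above.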