How can Unix tee send pipeline output to 2 pipes?

Hi,

I would like to process and filter the same asynchronous ASCII live data stream in more than one pipeline.

So one pipeline should filter out some records with grep on a keyword,
and each of the other pipelines
should grep for different keywords, with the keyword set defined separately for each pipeline.

There are a number of good examples on the net showing how to send pipe output to the terminal as well as save it to a file,
but I need to process the same live data stream with a number of pipelines in parallel and write the results to separate files on the fly.

It would work like a fork.
How can I define a number of parallel processes reading the same
data stream in parallel, as in the example below?

Jack

--------

Since tee can read standard input and write to multiple files, we can leverage this feature so that it writes to multiple processes (instead of files).
tee >(process1) >(process2) >(process3) | process4

Here's a simple example of how to do this. Run the following command to get a directory listing on your terminal, while also redirecting the output to a file named poop.out:
ls -al | tee poop.out

echo "hello world" | tee test.txt
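For the filtering case asked about above, a minimal sketch could look like the line below (some_live_source, KEY1 and KEY2 are placeholders for the real stream source and the grep keywords). Each process substitution runs its own grep and writes its matches to its own file while the stream is live, and tee's own copy is discarded:

some_live_source | tee >(grep 'KEY1' > key1.out) >(grep 'KEY2' > key2.out) > /dev/null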

follow-up

Thanks to the visitors,
any chance to discuss the issue?

Jack

Hi.

The bash shell (and possibly others) can set up processes like that.

Or are you telling us that this is a technique that you are now using? ... cheers, drl

Hi,

I am looking for any technique to process data streams on the fly,
without creating a cache file or saving the data stream to a file for post-processing.
Frankly speaking, I could save each record as a string and process it with another set of instructions.
As the same data stream is used for two-way asynchronous transmission,
I need to learn how to process live data streams made up of a number of two-way substreams.

First flow charting, then the algorithm, and finally the code (I chose shell script so it is easy to share and discuss with my friends).

Jack

Hi.

Here is an example using things you mentioned:

#!/usr/bin/env bash

# @(#) s1       Demonstrate bash connecting output to processes.

echo
set +o nounset
LC_ALL=C ; LANG=C ; export LC_ALL LANG
echo "Environment: LC_ALL = $LC_ALL, LANG = $LANG"
echo "(Versions displayed with local utility \"version\")"
version >/dev/null 2>&1 && version "=o" $(_eat $0 $1) tee
set -o nounset
echo

FILE=${1-data1}

echo " Data file $FILE:"
cat $FILE

# Remove files from previous runs.

rm -f vowels four

echo
echo " Results:"
# Feed the file through tee; bash connects each process substitution
# to its own grep, and each grep writes its matches to its own file.
cat $FILE |
tee >( grep '^[aeiouy]' > vowels ) >( grep '^....$' > four )

sleep 1         # give the process substitutions time to finish writing
echo
echo " Auxiliary files created:"
wc -l vowels four

exit 0

Producing:

% ./s1

Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 2.6.11-x1, i686
Distribution        : Xandros Desktop 3.0.3 Business
GNU bash 2.05b.0
tee (coreutils) 5.2.1

 Data file data1:
red
orange
yellow
green
blue
indigo
violet

 Results:
red
orange
yellow
green
blue
indigo
violet

 Auxiliary files created:
 3 vowels
 1 four
 4 total

The main data file is run through tee, whose output bash connects to a few grep processes. The count at the end verifies the greps: one looks for lines beginning with a vowel, the other for lines that are 4 characters long. You can replace the wc with something that looks at the content rather than simply counting.

The sleep is there because the processes that tee was writing to had not quite finished. The first few times I ran it, there was no data in the files.
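One way to avoid depending on a fixed sleep is to read through named pipes and background the greps explicitly, so the script can wait for them. This is only a sketch of that variant, with data1 standing in for the input file:

# Sketch: the same filtering via named pipes, so "wait" replaces the sleep.
mkfifo vowels.fifo four.fifo
grep '^[aeiouy]' < vowels.fifo > vowels &       # reader 1 in the background
grep '^....$'    < four.fifo   > four   &       # reader 2 in the background
tee vowels.fifo four.fifo < data1 > /dev/null   # feed both pipes
wait                                            # both greps are finished here
wc -l vowels four
rm -f vowels.fifo four.fifo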

See man pages for details ... cheers, drl

Thanks, a really excellent solution, and it really works.
Now I have to replace the data files with data streams (virtual devices or the like)
and build pipelines.
I don't know how to create the output pipeline.
If I redirect the output with >> to append to a file, I risk generating an oversized file.
So the output should work like an output device, such as a monitor, line printer, serial device
or TCP/IP port.
Does the shell support writing directly to a TCP/IP port, or to a serial or virtual port created and defined by the shell itself?

Jack
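On the last question: bash, when built with network redirection enabled (as it is on most Linux distributions), understands the special paths /dev/tcp/HOST/PORT, and a serial or virtual device can be written to with an ordinary redirection to its device file. A minimal sketch, with some_live_source, somehost, 9999 and /dev/ttyS0 as placeholders:

exec 3>/dev/tcp/somehost/9999          # open a TCP connection on file descriptor 3
some_live_source | grep 'KEY1' >&3     # send matching records to the socket
exec 3>&-                              # close the connection

# A serial or virtual device is written to like any file:
some_live_source | grep 'KEY2' > /dev/ttyS0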