Redirecting STDERR to file and screen, STDOUT only to file

I have to redirect STDERR messages both to the screen and capture them in a file, but STDOUT only to that same file.
I have searched this forum for a solution, but something like

script 3>&1 >&2 2>&3 3>&- | tee errs

doesn't work for me...

Has anyone an idea???

I never use this kind of redirection...

You might be missing 1 - not sure...
Try this:

Don't know if I understand your question correctly, but I think this works:

script 2>&1 1> errs | tee errs

If there is no error, the result is only in the file "errs"; if there is an error, the result is both on screen and in "errs".

Let's start with some general information to make the problem understandable:

A Unix process is like a (Y-shaped) water hose: you fill something in (via <stdin>), something comes out one way (via <stdout>), and something else comes out the other way (via <stderr>). You can - to continue the analogy - put a bucket under each of the outlets, even the same bucket under several outlets, but they will still remain different outlets of data.

By default, when a process is born, its 3 default I/O-channels are directed to:

stdin: keyboard
stdout: display
stderr: display
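
You can see the latter two for yourself; both lines land on the same display, but they travel over different channels:

$ echo "to stdout"        # written via channel 1
to stdout
$ echo "to stderr" >&2    # written via channel 2
to stderr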

As <stdout> and <stderr> are both pointing to the display, why is it that

process_1 | process_2

picks up the output from <stdout> but not from <stderr>? The answer is that "|" is a special form of connector, not just another bucket like ">". "|" means: redirect <stdout> of process_1 to <stdin> of process_2.
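
You can watch this happen (the order of the two output lines may vary, since only one of them passes through the pipe and tr's buffering):

$ { echo "via stdout"; echo "via stderr" >&2; } | tr 'a-z' 'A-Z'
via stderr
VIA STDOUT

Only the <stdout> line reached tr and was upcased; the <stderr> line bypassed the pipe and went straight to the display.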

Now there is another redirection device, which is:

process_1 2>&1

This means: redirect output channel 2 (=<stderr>) to where output channel 1 (=<stdout>) points to right now. All redirections are read and carried out from left to right.
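
Because of this left-to-right rule, the following two commands look almost identical but behave differently (/tmp/out is just an example file name):

$ ls /nonexistent > /tmp/out 2>&1    # stdout -> file first, then stderr -> same file
$ ls /nonexistent 2>&1 > /tmp/out    # stderr -> display (where stdout points NOW), then stdout -> file

In the second command the error message still appears on the display, because at the moment "2>&1" was processed, <stdout> was still pointing there.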

Having understood this, let us try to solve your problem:

script 2>&1

will redirect <stderr> to <stdout>, so the next "|" will catch <stderr> output now too. Closer!

script 2>&1 | tee -a /some/file

This will pick up everything coming out of <stderr> and <stdout> of script and display it as well as appending it to "/some/file". Closer again, but <stdout> should not be displayed, so we will have to direct it away before the pipe picks up its input:

script 2>&1 1>/some/file | tee -a /some/file

This finally does what we want: output to <stdout> is put into "/some/file", and output to <stderr> is displayed before being appended to "/some/file" too.
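
A quick way to test this yourself, with a toy function standing in for the real script (the function name is just for illustration):

toytest() { echo "this goes to stdout"; echo "this goes to stderr" >&2; }
toytest 2>&1 1>/some/file | tee -a /some/file

On screen you should see only the stderr line; "/some/file" should end up containing both lines.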

The only uncertainty left is that I am not sure whether the exact sequence of the messages will be preserved, especially under high load with many messages. You will have to try that. I'd be thankful if you could post a follow-up telling us.

I hope this helps.

bakunin

1 Like

The y-shaped hose and bucket analogy paints a memorable image. Nicely done. The only thing I'd add, explicitly (it's implied in your explanation), is that pipe redirection occurs before other, left-to-right redirections.

Even without the pipe and redirections, you can't depend on your average script/executable to emit messages in the exact order that they're generated, since typically a mix of unbuffered (stderr) and buffered (stdout) streams are used.

Of much greater importance is that script 2>&1 1>/some/file | tee -a /some/file involves multiple processes writing to the same file without any form of communication. Writes from script will clobber writes from tee, or vice versa. If they at least shared a file descriptor, while one message could still split another in two, there would never be any overwriting.

Regards,
Alister

1 Like

Thank you for this, as well as the additional info. I was not aware that <stderr> is unbuffered while <stdout> is not, so I have learned more here than I have explained. Nice gain. ;-))

It is probably a good idea to do what I have always done in my scripts (by luck - but now you have given me a reason after all): prefix standard and error output with respective markers:

#! /bin/ksh
...
print -u1 "MSG: $(timestamp) , starting the action"
action
if [ $? -gt 0 ] ; then
     print -u2 "ERR: $(timestamp) , action did not work"
else
     print -u1 "MSG: $(timestamp) , action worked out well"
fi
...

bakunin

Bakunin, it appears that I was editing my previous post just as you were responding. I apologize for that inconvenience.

Your suggestion won't work at all. As I mentioned in my post (probably after you read it but before you posted), script and tee will clobber each other since they're using file descriptors backed by independent file descriptions each with their own offset.

That will not work for the same reason as bakunin's suggestion.

That approach cannot be made to work because, from the tee-side of the pipe, the distinction between stdout and stderr has been lost; they've been merged into one stream.

The following should work:

{
    exec 3>&1    # Save the current stdout (the cat pipe)
    script 2>&1 1>&3 | tee -a /dev/tty
} | cat > logfile

That may look like a useless use of cat (Hi, Corona ;)), but writing to a pipe guarantees atomicity for writes up to PIPE_BUF bytes inclusive.
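
You can query that limit (it is a pathconf value, so getconf wants a path; 4096 bytes is typical on Linux, but the exact value is implementation-defined):

$ getconf PIPE_BUF /
4096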

If you would rather not have that second pipe, you can simply delete the "| cat" stage, leaving only the logfile redirection. However, if you do that, you may see interleaving of messages even for small writes.
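
In other words, the stripped-down variant would be (the same commands as above, just without the cat stage):

{
    exec 3>&1
    script 2>&1 1>&3 | tee -a /dev/tty
} > logfile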

Regards,
Alister

1 Like

I've seen valid uses of cat on a single file before, but this is the first valid use I've seen for no files. Interesting. :smiley: This is, of course, assuming that these programs output lines atomically, but still.

I hadn't realized that files wouldn't necessarily be atomic, for that matter.

Uhm...
After a few tries:

$ { exec 3>&1; ls file none 2>&1 1>&3 | tee -a /dev/tty; } |cat >logfile
ls: cannot access none: No such file or directory

$ cat logfile
ls: cannot access nonefile
: No such file or directory

:frowning:

BTW: reading the thread, at first I was a bit confused about what you all meant by "script": a shell script or the command /usr/bin/script? LOL.
--
Bye

Try ( ) instead of { } .

It's the same: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)". So you have a subshell even with { } | ... . :slight_smile:
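
A quick demonstration that the left-hand side of a pipeline runs in a subshell even with { }: the assignment below never reaches the parent shell (bash behavior; ksh93 runs only the last pipeline component in the current shell, so the result is the same there):

$ x=0; { x=1; } | cat; echo "$x"
0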

However:

$ ( exec 3>&1; ls file none 2>&1 1>&3 | tee -a /dev/tty; ) |cat >logfile; cat logfile
ls: cannot access none: No such file or directory
ls: cannot access nonefile
: No such file or directory

--
Bye

@alister:

So from now on we can say "alister told us so" whenever we indulge in a useless use of cat :slight_smile:
No, seriously: in my suggestion I was thinking of either STDOUT or STDERR, and that was wrong (I was testing with a simple command, not a script). Now that I'm testing with a script that produces output, I see that output to STDOUT overwrites some characters of STDERR in the resulting "logfile".

But in your suggestion, I don't understand what is happening, see comments:

{
    # ok, this is saving STDOUT to &3(?)
    exec 3>&1    # Save the current stdout (the cat pipe)
    # here STDOUT is set to &3, which is the STDOUT that was saved above???
    # Why is that necessary?
    script 2>&1 1>&3 | tee -a /dev/tty
} | cat > logfile

The blame for that behavior almost certainly lies with your ls implementation. Which ls are you using?

I can reproduce that behavior with GNU ls. I cannot reproduce it with busybox ls. A closer look using strace (I booted a Linux system just for this ;)) confirmed my suspicions: GNU ls isn't even trying to write a line at a time.

This is how GNU ls and Busybox ls attempt to write an error message for a nonexistent file named idont:

# GNU ls
write(2, "ls: ", 4)
write(2, "cannot access idont", 19)
# In Lem's example, interleaving occurred here
write(2, ": No such file or directory", 27)
write(2, "\n", 1)

# Busybox ls
write(2, "ls: idont: No such file or directory\n", 37)

There is no way to fix or work around that, short of fixing GNU ls (or whatever code it depends on for generating its error messages). Perhaps other ls implementations (and other utilities, for that matter) suffer from the same problem, but there's nothing that can be done about it at the shell level.

With that many writes for a single, relatively short error message, there's a good chance that another process will be given the chance to write to the shared pipe. Each write is still atomic, it's just the message that's broken.

Regards,
Alister

---------- Post updated at 04:31 PM ---------- Previous update was at 04:26 PM ----------

You need to save stdout while it's still pointing to the cat-pipe because, later, when the shell builds the script|tee pipeline and redirects script's stdout to the tee-pipe, fd 3 is the only remaining way to refer to the cat-pipe (which script needs so that its stdout can bypass tee).
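
Annotated, the same code as above:

{
    exec 3>&1    # fd 3 := current stdout, i.e. the cat-pipe
    # the pipe is set up first: script's stdout -> tee-pipe
    # then 2>&1: script's stderr -> tee-pipe (so tee sees the errors)
    # then 1>&3: script's stdout -> the saved cat-pipe, bypassing tee
    script 2>&1 1>&3 | tee -a /dev/tty
} | cat > logfile    # tee's output and fd 3 both drain into the cat-pipe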

Regards,
Alister

4 Likes

Yep, alister, my ls is ls (GNU coreutils) 8.5.

Just for the sake of curiosity, I was wondering whether we could try an ugly workaround (I know it's ugly at the very least): if GNU ls splits its error messages into too many writes, what about a little pause between ls and tee?

It seems to work. Here below, the count of "nonefile"s gives us the number of broken logs:

$ for i in {1..1000}; do { exec 3>&1; ls file none 2>&1 1>&3 | tee -a /dev/null; } |grep nonefile; done |wc -l
56

so 56 broken logs out of 1000 tries, versus:

$ for i in {1..100}; do { exec 3>&1; ls file none 2>&1 1>&3 | { sleep 1; tee -a /dev/null; }; } |grep nonefile; done |wc -l
0

0 broken out of 100 tries.

BTW: of course, if one doesn't care at all about having all the errors at the beginning or at the end of the log, a solution is obvious:

command 2>&1 >logfile | tee -a error; cat error >>logfile

or

command 2>&1 >output | tee -a logfile; cat output >>logfile

But... that goes without saying. :slight_smile:
--
Bye

I realize that you are aware that your suggestion isn't elegant, but aside from that its utility is very limited. It may work for that small sample, but it won't scale. In this case, the delay is long enough for the system to copy the ls stderr writes to a kernel buffer and for ls to move on and flush its stdout stream before tee gets a chance to write to its stdout. But, if there were a lot of error messages -- e.g. from many non-existent files -- the delay wouldn't be long enough. Also, you couldn't simply extend the delay without limit because eventually an ls stderr write() would block and no matter how long you waited, the ls stdout stream would not be flushed before ls stderr messages flow downstream through tee to grep.

The following is a much better workaround:

{
    exec 3>&1
    ls iexist idont 2>&1 1>&3 | tee -a /dev/null | perl -lpe 'BEGIN {$|=1;}'
} | cat > logfile

The trace confirms that it's not just luck or timing; the writes are indeed coalesced into full lines:

# 4271: tee (since tee does not buffer, it appears to mimic GNU ls's write behavior)
# 4272: perl
4271  write(1, "ls: ", 4) = 4
4271  write(1, "cannot access idont", 19) = 19
4271  write(1, ": No such file or directory", 27) = 27
4271  write(1, "\n", 1)                 = 1
4272  write(1, "ls: cannot access idont: No such"..., 51) = 51

There are undoubtedly small utilities out there to absorb a stream and increase the level of buffering, but I have never bothered to search for them.
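
For what it's worth, an awk one-liner can stand in for the perl filter, assuming an awk that implements fflush() (gawk, mawk and BWK awk all do):

{
    exec 3>&1
    ls iexist idont 2>&1 1>&3 | tee -a /dev/null | awk '{ print; fflush() }'
} | cat > logfile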

From the ugly-solution-dept: The following works, but it depends on too many unreliable implementation details to be a reassuring solution:

# ls (GNU coreutils) 8.13
# GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu)

{
    exec 3>&1
    ls iexist idont 2>&1 1>&3 | tee -a /dev/null |
    {
        while IFS= read -r linebuf; do
            printf '%s\n' "$linebuf"
        done
    }
} | cat > logfile

I definitely recommend the perl solution over a sh/bash hack, because there's nothing preventing the printf builtin from being fully-buffered when writing to a pipe. The correct solution to this problem requires line-oriented buffering (of course, of lengths <= PIPE_BUF).

Regards,
Alister

2 Likes

Of course, I saw this. But I thought: "you work more, you just need to sleep a bit more...". Ha ha, LOL. :slight_smile:

This is what I didn't think of. I see it now. Thanks.
--
Bye

Fantastic! This is one of the threads that keep me coming here again and again.

When I wrote my first post I had the following script as data source for tests (AIX 6.1, latest TL):

#! /bin/ksh

print -u1 "This goes to stdout."
print -u2 "This goes to stderr."

exit 0

I understand now why that worked with my redirections, but might fail in other, more real-life situations.

My focus is on writing scripts to accomplish tasks, and I want my own messages (error and warning/info) to be as tolerant as possible, so here is what I suggest as a solution for writing custom scripts:

  • Put a time stamp in every line of output. Even if the lines in the output file become disorganized, a simple "sort" will put them back in order. I use a certain "output format" for my lines, which is consistent across scripts:

[PID TIMESTAMP MSGCLASS message]

where MSGCLASS is either "Info", "Warning" or "Error".

  • Work similar to "syslog": send all (error/warning/info) messages to an info log, and error messages also to a separate error log. This way you can avoid the elaborate redirection gymnastics for the usual things you want to achieve (see the sketch below).
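
As a minimal sketch of both points in ksh (the log locations, timestamp format and function name are just my ad-hoc choices for illustration):

#! /bin/ksh

INFOLOG=/var/log/myscript.info      # receives all messages
ERRLOG=/var/log/myscript.err        # receives errors only

f_msg ()    # usage: f_msg Info|Warning|Error "message text"
{
    typeset class="$1" ; shift
    typeset line="[$$ $(date '+%Y.%m.%d-%H:%M:%S') ${class} $*]"

    print "$line" >> "$INFOLOG"         # everything goes to the info log
    if [ "$class" = "Error" ] ; then
        print -u2 "$line"               # errors are shown on <stderr> ...
        print "$line" >> "$ERRLOG"      # ... and kept in the error log too
    fi
}

f_msg Info "starting the action"
f_msg Error "action did not work"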

I hope this helps.

bakunin

Hi bakunin,
thank you for your notes! I tried your adjustments, but STDOUT is always displayed on the screen.

I need this for a cron job. Currently everything (STDOUT & STDERR) is written to a log, but I want to be informed when an error occurs.

As it is, alister did most of the work and you should thank him.

Hmm, that is unexpected. With the example script I gave (see earlier in this thread) I was able to get the result you wanted, although we learned from alister that this was more by chance and won't work at larger scales.

A cron job has no terminal attached to it at all, and output to <stdout> is usually mailed to the owner of the cron job. This is the reason why <stderr> and <stdout> in cron jobs are always redirected - you don't want to get all these mails.
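
For example (the paths here are made up): if you redirect only <stdout> in the crontab entry and leave <stderr> alone, your log gets the normal output and cron mails you exactly the error messages:

# crontab entry: stdout appended to the log, stderr mailed by cron to the owner
0 2 * * * /path/to/script >> /var/log/script.log

That gives you the "inform me when an error occurs" behavior without any redirection gymnastics, at the price of the errors not also landing in the log file.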

If you want output to go to the "system console" (don't confuse this with a terminal - the console can be any terminal, but not every terminal is the console), use syslog's facilities instead of simple output. Syslog messages can be configured to go either to the system console or to every terminal. An example of this would be the "shutdown" command, which usually prints a "The system is about to go down" message on every terminal. This is done via a syslog facility.

I hope this helps.

bakunin

1 Like