Confused about redirecting stderr

I know that

mmmmm 2> error.txt

will send the error message to the specified file instead of the screen. However, I have seen

>&2

in some scripts, and I can't get it to do anything. A source said it sends stdout and stderr to a file. What file?

Ubuntu 18.04.2; Xfce 4.12.3; kernel 4.15.0-45-generic; bash 4.4.19(1); Dell Inspiron-518

mmmmm 2> error.txt >&2

will send both stderr and stdout to the file error.txt
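A quick way to convince yourself, as a runnable sketch ( noisy is just a made-up stand-in for any command that writes to both streams):

```shell
# Stand-in for a command that writes one line to each stream.
noisy() {
    echo "normal output"          # goes to stdout (fd 1)
    echo "error output" >&2       # goes to stderr (fd 2)
}

# 2> error.txt : stderr now points to error.txt
# >&2          : stdout now points where stderr points, i.e. error.txt
noisy 2> error.txt >&2

# error.txt now contains both lines.
```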

1 Like

That's called duplicating a file descriptor ( man bash )

You need to be careful about the order of the redirections; done wrongly, the result might not be what you expect (see man bash , again).
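To see why the order matters, here is a small sketch (the noisy function and file names are made up for illustration):

```shell
# Stand-in for a command that writes one line to each stream.
noisy() {
    echo "normal"        # stdout (fd 1)
    echo "oops" >&2      # stderr (fd 2)
}

# Intended order: stdout is pointed at the file first, then stderr is
# pointed at wherever stdout points now -- both lines land in both.txt.
noisy > both.txt 2>&1

# Wrong order: 2>&1 is processed first, while stdout still points at the
# terminal, so stderr goes to the terminal and only stdout reaches the file.
noisy 2>&1 > only_stdout.txt
```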

1 Like

A command sends either fd1 or fd2, correct? That's why neither history 2> test.txt nor history 1> test.txt works very well?
I'm still trying to understand the duplicating part of all this.
I'm still trying to understand the duplicating part of all this.

Not quite. Picture a UNIX process as a sort-of garden hose for data: you pour something in at the top and something runs out at the bottom. So far, it should be quite clear. Now, where the UNIX process differs from the garden hose is that it doesn't have only one outlet but several of them. So you pour something in above and something comes out in several places at the bottom.

The places where something gets in (=> input) or out (=> output) of the process are called "I/O descriptors", because such an I/O descriptor really is just a - very generalised - intake/outlet for data. To really get input or generate output, the I/O descriptor has to point to some place the data can come from or go to. Taking the garden-hose analogy again: the I/O descriptors are just the openings in the hose. Without anything going in or out they are useless. You have to connect something to the input (i.e. the water tap); only then will the hose do something. Equally, if you do not point the bottom opening at something, the water just spills out and is lost. If you point it into a bucket, it can be collected there and hence used.

The same is true for these I/O descriptors. Per default there are three of them: <stdin>, <stdout> and <stderr>, and per default all of them point to the terminal the process was started at. That means <stdin> is connected to the keyboard (the input device of the terminal) and the other two are connected to the display (the output device of the terminal). Two things are worth mentioning: in principle it is possible to use any output descriptor for any kind of data, but there is a convention (hence the names) that "normal data" goes to <stdout> and diagnostic messages go to <stderr>. Also, per default three such I/O descriptors are open: 0 is <stdin>, 1 is <stdout> and 2 is <stderr>. You can create more (IIRC up to 9) on purpose. For instance:

# ls -l /home /does/not/exist 
ls: cannot access '/does/not/exist': No such file or directory
/home:
total 20
drwx------  2 root    root    16384 Jul 22  2018 lost+found
drwxr-xr-x 42 bakunin bakunin  4096 Feb 17 00:55 bakunin

The line "cannot...." is a diagnostic message and went to <stderr>, the rest is normal output of the ls command. Either of these channels can be redirected to somewhere else - in this case to /dev/null , a file that just devours unwanted output:

# ls -l /home /does/not/exist >/dev/null
ls: cannot access '/does/not/exist': No such file or directory

# ls -l /home /does/not/exist 2>/dev/null
/home:
total 20
drwx------  2 root    root    16384 Jul 22  2018 lost+found
drwxr-xr-x 42 bakunin bakunin  4096 Feb 17 00:55 bakunin

# ls -l /home /does/not/exist >/dev/null 2>/dev/null

Do you see a pattern? When I wanted to redirect <stdout> (I/O-descriptor 1) I wrote:

# ls -l /home /does/not/exist >/dev/null

which is a shortcut for the equally correct:

# ls -l /home /does/not/exist 1>/dev/null

When I wanted to redirect <stderr> (I/O-descriptor 2) I wrote:

# ls -l /home /does/not/exist 2>/dev/null

Now, suppose I want to redirect both these output channels to the same place. I could write e.g.:

# ls -l /home /does/not/exist >/some/file 2>/some/file

But this is prone to typos, as I could mistype one of the two filenames as they become longer and longer. Therefore I could also write:

# ls -l /home /does/not/exist >/some/file 2>&1

The first part you know already: > /some/file means redirect <stdout> to /some/file . The second part 2>&1 means: redirect <stderr> to where <stdout> already points (wherever that is).

You can redirect other output channels the same way, e.g. 4>&6 means: redirect I/O-descriptor 4 to where I/O-descriptor 6 is pointing right now. These redirections are read from left to right. The respective output stream doesn't even have to transport any data to be redirected. It can be used to just "store" the redirection of another I/O-descriptor that is being redirected.
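For instance, a spare descriptor can be used to swap stdout and stderr (a sketch; swap_demo is a made-up stand-in):

```shell
# Stand-in command that writes one line to each stream.
swap_demo() {
    echo "to stdout"
    echo "to stderr" >&2
}

# Read left to right:
#   3>&1 : fd 3 now points where stdout points (it "stores" that place)
#   1>&2 : stdout now points where stderr points
#   2>&3 : stderr now points where fd 3 points, i.e. the old stdout
#   3>&- : close the helper descriptor again
swap_demo 3>&1 1>&2 2>&3 3>&-
```

After the swap, capturing stdout of the whole construct captures what the command originally wrote to stderr.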

You can also give general redirections for complete code blocks with the exec command:

command1                # nothing redirected
exec 3>/some/file       # everything leaving via I/O-descriptor 3 is now landing in /some/file
command2
command3
exec 3>&-               # close this redirection
command4                # nothing redirected again

Suppose some I/O-descriptor is redirected to a location you do not know. You want to temporarily redirect it somewhere else and then redirect it back to where it was. It is not possible to directly find out where it points, but you can use another I/O-descriptor to "store" the location by letting it point to where this I/O-descriptor already points, and later restore the original from it. Watch I/O-descriptor 3 being manipulated in the example:

exec 9>&3
command 3>/some/place
exec 3>&9

I let 9 point to where 3 points, then redirect 3, and finally redirect 3 back to where 9 points, because that is where 3 pointed before.

Also notice that up to now we have always used > for our redirections. If the redirection is to a file, the file is truncated (its previous content is lost) and then written from the start. You might want to preserve what is in the file, though; in that case use >> instead, which appends.

Here is a way to make sure a file exists but is completely empty:

: >/some/file

: is the "null command" and does (and outputs) nothing. Redirecting this to the file truncates it to zero length, or creates it empty if it doesn't exist already.
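A runnable sketch of that trick ( demo.txt is just an example name):

```shell
echo "old content" > demo.txt   # demo.txt now has some content
: > demo.txt                    # null command, redirected: truncates the file

# demo.txt still exists, but is now zero bytes long.
ls -l demo.txt
```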

I hope this helps.

bakunin

2 Likes

I think the following example makes more sense.

# Save the stderr destination in descriptor 9 then let stderr point to errorlog file
exec 9>&2 2>errorlog
command1
command2
# Restore stderr
exec 2>&9

The same can be achieved with a command group (code block)

#Group the following
{
command1
command2
# The group has stderr redirected to errorlog file
} 2>errorlog
# Outside the group the original stderr remains.

The form

: >file

is the most correct one, though most shells also accept a plain

>file

1 Like

Wow! Thanks much. 🙂
I believe I found a use for >&2 , uncommon though it may be: redirecting stdout to where stderr was previously redirected to.

{ ecko "Hello" || echo "World"; } 2>error.txt >&2

resulting in

Command 'ecko' not found, did you mean:
  command 'echo' from deb coreutils
Try: sudo apt install <deb name>

World

Well, the most common use of redirection is rather the reverse: redirecting STDERR to STDOUT:

 ls -al  . >ls.out 2>&1 

I tend to keep both separate, especially when using cron. If you are in a rush and have redirected everything to error.log, you have to read the whole file just to look for a possible issue; but if you keep them separate:

my_cron_job >>my_cron_job.log 2>my_cron_job.err

Now my expectation is to see a my_cron_job.err file of size 0, meaning all is well... and there is no point looking at my_cron_job.log if I am busy, as all I will find there are the results of the job that executed correctly, since the .err is empty...
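That "empty .err means all is well" check can be scripted, too. A sketch, with a shell function standing in for the hypothetical my_cron_job:

```shell
# Stand-in for the cron job: normal output only, no errors this run.
my_cron_job() { echo "processed 3 items"; }

# Append normal output to the log, let errors overwrite the .err file.
my_cron_job >> my_cron_job.log 2> my_cron_job.err

# -s tests for "exists and has a size greater than zero".
if [ -s my_cron_job.err ]; then
    echo "something went wrong -- check my_cron_job.err" >&2
else
    echo "clean run, .err is empty"
fi
```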

1 Like

I use another form in a way too:-

echo "This is my debug message" >&2

This way, I can write my own messages to the standard error file (or indeed any other file descriptor I choose to define) which are easier to ignore if called as a function or from another script, e.g.:-

#!/bin/bash

# Call so_and_so ignoring my errors
so_and_so 3>/dev/null

# Call so_and_so collecting std output & my debug as a variable
my_var=$(so_and_so 3>&1)

These tricks can be useful to put loads of debug info into scripts then ignore them when you are happy without removing them. Of course, you can then see all the useful information you have previously set up quite easily too.
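A minimal sketch of that pattern (the debug and work names are made up for illustration):

```shell
# Debug messages go to fd 3; the *caller* decides where that points.
debug() { echo "DEBUG: $*" >&3; }

work() {
    debug "starting"
    echo "real result"      # normal output on stdout
    debug "finished"
}

# Ignore the debug channel entirely:
work 3>/dev/null

# Or merge debug into stdout to see everything:
work 3>&1
```

Note that if the caller does not redirect fd 3 at all, the debug writes fail with "Bad file descriptor", so the channel has to be pointed somewhere (even /dev/null).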

Just my thoughts,
Robin

1 Like

Isn't >&2 the same as 1>&2 , both of which say "send <stdout> to where <stderr> points (the terminal)"?
Or, stated another way, "send 1 to where 2 is going, which by default goes where 1 goes". If so, I can't see the point of >&2 .

It does not. You need to differentiate between "by default" and "by habit" here. In your interactive sessions you have both pointing to your terminal by habit. But running a command or script may change that immediately - it may have a different opinion of where its errors should go. Even more so for system programs, e.g. daemons, which off the shelf log to log files and print errors to their stderr. And this is what rbatte1 shows: a simple "error log" command for use inside (standalone?) scripts, assuming stdout and stderr have been split when or before the script was called.

echo prints to stdout, so you need to redirect stdout to the stderr file descriptor for the "error log command".

EDIT: LOL - trying to prove my point, I thought I'd take Xorg as an example, but it has both redirected to /var/log/.../x-0.log . Still, many dbus programs have fd 1 pointing to /dev/null and fd 2 to ~/.xsession-errors

1 Like

Yes, by default stdout and stderr go to the terminal, but in different streams.
Each of them can be redirected to an individual target; then they are separate.
Say you run a script with redirecting stderr to errorfile

/path/to/script 2> errorfile

Then in the script

echo "This is my debug message" >&2

goes to errorfile.

You can do all that in the script:

exec 2>errorfile
echo "error 1" >&2
echo "error 2" >&2

Note that only the "exec" opens the file.
Then each >&2 write goes to the stream, i.e. appends to the file.
Or you can redirect a code block:

{
echo "error 1" >&2
echo "error 2" >&2
} 2>errorfile
# The code block has ended. The following goes to the original stderr again.
echo "error 3" >&2

Further you can nest code blocks

{
{
echo "error 1"
echo "error 2"
} >&2
echo "not an error"
echo "error 3" >&2
} 2>errorfile

Last but not least, an if-then-fi, while-do-done, for-do-done or case-in-esac is a code block as well.
This often makes sense with a while-do-done loop.

while read line <&5
do
  echo "$line"
done 5<inputfile

You do not need an explicit block around the while-do-done loop, i.e. you do not need to write

{
while read line <&5
do
  echo "$line"
done
} 5<inputfile
2 Likes