Lost redirecting stderr & stdout to 3 files - one for stderr plus two combined

Hi folks

I need/want to redirect output (stdout, stderr) from an exec call to separate files: one for stderr only and two(!) different(!) ones for the combined output of stderr and stdout.

After some research and testing I got this far:

(( exec ${command} ${command_parameters} 3>&1 1>&2 2>&3 ) | tee -a ${FILE_LOG_TEMPORARY}.stderr ) >> ${FILE_LOG_TEMPORARY} 2>&1 

So I've got my stderr log and one stdout+stderr log file, and only need a "copy" of FILE_LOG_TEMPORARY, sort of:

(( exec ${command} ${command_parameters} 3>&1 1>&2  2>&3 ) | tee -a ${FILE_LOG_TEMPORARY}.stderr ) >>  ${FILE_LOG_TEMPORARY} 2>&1 | tee -a ${FILE_LOG_GLOBAL}

But lots of monochrome '&'s, '>'s and 'tee's engaged in some sort of high-speed rain dance in front of my eyes are keeping me from seeing the solution right now. :sunglasses:

And yes, simply copying the file after exec finishes is Plan B...
Plan A is to have (many) ${command}s log into a global file (for tail -f'ing) and single files for run control/archiving.

To make matters worse, I need to run the script on Solaris (ksh88, which won't accept my "solution" above so far), SuSE (ksh93) and Red Hat (ksh93).

Cheers and thanks for all hints!

Michael

Pipelines only connect stdout to stdin of the next command; they do nothing for stderr or other file descriptors. So tee will only receive stdout of the first command, and fiddling with file descriptors is only useful for merging other streams into stdout if you want their content to reach the other side of the pipe.
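
A minimal demonstration (capture.log is just a placeholder name):

( echo out; echo oops >&2 ) | tee capture.log          # "oops" bypasses the pipe and appears only on the terminal
( echo out; echo oops >&2 ) 2>&1 | tee capture.log     # merging stderr into stdout first lets tee see both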

Shell Command Language

For Solaris 10 onward you have bash as part of the standard distribution. ksh88 is a good shell, but bash has some better features for this kind of project. SuSE and RH both use bash by default. ksh on those two machines is really dash - dash is supposed to be 98% ksh compliant - but in fact you are likely introducing another level of unneeded complexity. Please seriously consider doing the whole project in a single shell.

Following on Scrutinizer's good answer:

What you really should consider is some sort of job scheduler that can track a bunch of processes and their outputs for you. IMO, you are going to have to tinker with this setup from now on into the future. So, if you need an ongoing hobby, this is one way to create one.

If you let us know your OS versions, maybe we can make some suggestions. Also, software like Nagios can keep track of many logs simultaneously, without all the extra file fiddling you are working on.

1 Like

The >> redirects descriptor 1 to the file, so there is nothing left for the following | tee.
I think you want:

(( exec ${command} ${command_parameters} 3>&1 1>&2  2>&3 ) | tee -a ${FILE_LOG_TEMPORARY}.stderr ) 2>&1 | tee -a  ${FILE_LOG_TEMPORARY} >> ${FILE_LOG_GLOBAL}
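
Traced step by step (my annotations of how the descriptors flow):

# 3>&1 : fd 3 := the pipe feeding the first tee
# 1>&2 : parks ${command}'s stdout on the subshell's stderr, so it bypasses that pipe
# 2>&3 : sends ${command}'s stderr into the pipe, so the first tee logs stderr only
# the outer 2>&1 merges the parked stdout back into the stream
# the second tee writes the combined stream to FILE_LOG_TEMPORARY and passes it on to FILE_LOG_GLOBAL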

BTW you do not need the total swap of descriptors 1 and 2; you can use (and define) descriptor 3 outside the ( ) brackets. Left as an exercise.
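
For instance, a sketch of that variant (untested; this kind of descriptor juggling is POSIX, so ksh88 should accept it too) - stdout is parked on descriptor 3, stderr runs through the first tee, and the two streams meet again in front of the second tee:

( ${command} ${command_parameters} 2>&1 1>&3 | tee -a ${FILE_LOG_TEMPORARY}.stderr 1>&3 ) 3>&1 | tee -a ${FILE_LOG_TEMPORARY} >> ${FILE_LOG_GLOBAL}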

For instance, maybe a simple example helps:

exec ${command} ${command_parameters} 1>> my._$(date +%s)_$$.log 2>> my._$(date +%s)_$$.err

Can you specify your problem in more detail?
What do you want to achieve by redirecting all the output to log file(s)?
Is exec really required?

Hope this helps.
Regards,
Peasant.

Do I get it right: you want three files, one for stdout, one for stderr, and one to capture both in a single file? Would this do what you want:

{ { ls -la file fie | tee log1; } 2>&1 >&3 | tee log2 >&3; } 3>log3
cat log*
log1:
-rw-rw-r-- 1 user group 108 Dez 27 15:26 file
log2:
ls: cannot access 'fie': No such file or directory
log3:
ls: cannot access 'fie': No such file or directory
-rw-rw-r-- 1 user group 108 Dez 27 15:26 file

Hi Folks

Thanks all for the quick replies, and sorry for my late response. Health-wise, 2018 isn't exactly my year...

Decided to follow RudiC's suggestion and switch to bash, which works fine.

What we originally wanted to do: put cronjobs into a wrapper script and use the redirected outputs for centralized logging, archiving and alerting.

Details:

  1. log: stderr only
    Icinga (Nagios) will check for stderr log files. One log file per individual cronjob.

  2. log: stdout & stderr
    Additionally, we'll create a local central log file - independent of what the individual cronjob logs or rather does not log.
    All cronjobs of a server will log into that file, simultaneously.
    As we've got lots of different cronjobs that (may) produce, or rather **** into stdout/stderr, that file will certainly become messy.
    It's rather just for tail -f'ing if you're on the system and want to look at what's going on.

  3. log: stdout & stderr
    We'll also archive stdout & stderr - per individual cronjob.

The wrapped crontab entry will look like this:

  • 1 * * * * WrapperScript.bash UniqueJobID Timeout OriginalCommand OriginalParameters
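
For reference, a minimal sketch of what such a wrapper could look like, built around the redirection from above - the paths, variable names and the skipped Timeout handling are illustrative assumptions, not our production script:

#!/bin/bash
# WrapperScript.bash - illustrative sketch only
JOB_ID=$1; TIMEOUT=$2; shift 2                        # remaining args: OriginalCommand OriginalParameters
FILE_LOG_GLOBAL=/var/log/cronwrap/global.log          # assumed shared per-server file (for tail -f'ing)
FILE_LOG_TEMPORARY=/var/log/cronwrap/${JOB_ID}.log    # assumed per-job archive file
# Timeout enforcement is omitted here for brevity.
( ( "$@" 3>&1 1>&2 2>&3 ) | tee -a "${FILE_LOG_TEMPORARY}.stderr" ) 2>&1 |
    tee -a "${FILE_LOG_TEMPORARY}" >> "${FILE_LOG_GLOBAL}"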

And yes, we looked at existing solutions but didn't find anything that wasn't over the top in size, complexity or cost for our limited purpose.

However, if you plan on doing something similar start here :sunglasses: :
Job scheduler - Wikipedia

Thanks again all for your help!
