Redirecting log files to null writes junk into log files


I have log files which are created by the command below:

 exec <processname> >$logfile 

but when a file reaches a certain size, I "redirect it to null" (truncate it) while the process is still running, like this:

 >$logfile  

I do this manually, but afterwards the first few lines written to the log file are junk. How can I avoid that?

I'm not sure I quite understand. Can you show the code block this is in, along with some sample input and the output you get?

When you exec a process, it replaces the current shell, so (depending on your OS and version) the redirect you have might be ignored. Can you post the output from uname -a to show your OS, and the output from ps to show your shell?

Thanks in advance,
Robin

Output of uname -a:

 HP-UX phxlevht B.11.31 U ia64 4230347391 unlimited-user license

Output of ps:

    PID TTY       TIME COMMAND
   1402 pts/11    0:00 ps
    347 pts/11    0:00 ksh
My script is:

 exec sedec >logfilepath

The process is still running, but the file has grown rather large, so I manually "redirected it to null" as below:

 >logfilepath

After that, the log shows some garbage first and then the proper text. I don't think it is the process itself writing the garbage.

Oh, now I see. Sorry, I was confused by your phrase "redirecting to null". You have tried to truncate the file separately. One problem you might have is that the file is still open in your running process, so truncating it might not actually free up space in the filesystem, although the file should appear to become zero bytes.
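In ksh, that bare redirection does truncate the file in place; a tiny sketch (demo.log is just a stand-in name):

 > demo.log       # open with O_TRUNC, write nothing: length becomes 0
 ls -l demo.log   # shows 0 bytes (until whoever holds it open writes again)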

What happens to the standard error from exec sedec >$logfile ? By default, that will be sent to the screen, or to wherever standard error pointed when the process began; your redirect only catches standard output. Could this be the output of your process complaining that the file has been reset and/or replaced?
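If stray standard-error output turns out to be the culprit, you could capture it in the same log; a sketch, assuming your original invocation:

 # send both standard output and standard error of the process to the log
 exec sedec > "$logfile" 2>&1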

What else do you have running from the same session? I would expect that you can issue >otherfile without a problem.

I hope that this helps,
Robin


WHAT is the garbage? And what is the correct text after it?
If I understand you correctly, the truncation works, and the old log file contents are lost.
Does the process keep the old file open? Check with e.g. lsof. Try something like echo "Marker 1" > $logfile .
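Spelled out as commands, those checks could look like this (lsof is a separate install on many systems; fuser is a common alternative):

 fuser "$logfile"               # list PIDs that still hold the log open
 lsof "$logfile"                # same, if lsof is available
 echo "Marker 1" > "$logfile"   # truncate and leave a known first line
 head -1 "$logfile"             # check whether "Marker 1" is still intact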


The garbage is junk characters like

 @@@@@@@@@@@@@

By "correct" I mean that it then starts printing the proper logs from the process.

And yes, I think the process keeps running, so the file is still open.

I think you have already gotten very good advice about how to solve it, but it might help to understand what is going on:

When a process "opens" a file it calls an OS function (open(), usually reached through a library routine like fopen()), and part of this "opening" is that the OS sets up an environment through which the process can access the file. Part of this is finding out how big (= how many bytes) the file is. The process also gets a "place" where it currently "stands": the file offset. This "place" can be moved forward, backwards, etc., but only within the limits of the length of the file.

Say a program opens a file and is told that the file is 10 bytes long. Right now it "stands" on byte 1 and can read it, which moves the place it stands on forward to byte 2, and so on. It can also do things like "go forward 3 bytes and then read (or write) 2 bytes from there". It can also append to the file, which increases the size, so that it can then position its place at byte 11. But if it tries to read from beyond the current end of the file, it simply gets nothing back (end of file), because the OS "knows" how long the file really is.
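You can watch this positioning behaviour from the shell with dd; a small sketch using a made-up 10-byte file:

 printf '0123456789' > ten.bytes                   # a file of exactly 10 bytes
 dd if=ten.bytes bs=1 skip=3 count=2 2>/dev/null   # prints "34"
 dd if=ten.bytes bs=1 skip=20 count=2 2>/dev/null  # prints nothing: past end of file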

All this works well as long as only one process accesses a file. But in your case a process opened a file and wrote lots of bytes into it, so its "place" moved to some big offset. Now a second process (your shell command) truncated the file, but the first program does not notice that: its write position is still the old, large offset. The next time it writes, the OS extends the file out to that offset again and fills the gap between the new, empty start of the file and the write position with NUL bytes (a "hole" in the file). Those NUL bytes are what your terminal or pager displays as junk characters like @; the program itself never knows that anything happened.
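Here is a self-contained reproduction of exactly that, with a shell loop standing in for your sedec process (all names are made up):

 # the writer opens demo.log once and keeps writing through the same descriptor
 ( exec > demo.log
   while :; do echo "a normal log line"; sleep 1; done ) &
 writer=$!
 sleep 3                 # let the file grow a bit
 > demo.log              # truncate it, as you did
 sleep 2                 # the writer writes again, at its old large offset
 kill $writer
 od -c demo.log | head   # the leading \0 (NUL) bytes are your "junk"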

Log-writing processes should therefore NOT keep a log file open and write into it continuously, but should open and close the log for every write action separately.
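If you can influence how the writer opens the log, another common remedy is append mode (>>, i.e. O_APPEND): every single write then goes to the current end of the file, so a truncation in between does no harm. A sketch of both styles, with a loop again standing in for the real process:

 # style 1: open and close the log for every write, as described above
 while :; do
     echo "a log line" >> "$logfile"   # open, append one line, close
     sleep 1
 done

 # style 2: keep the log open the whole time, but in append mode:
 #   exec sedec >> "$logfile"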

I hope this helps.

bakunin


Got it, thanks.