I have a file which is written to by an ongoing process (actually it is a logfile). I want to trim this logfile with a script and save the trimmed portion to another file or write it to <stdout> (it gets picked up there by another process).
Fortunately my logfile has a known format (XML) and I can find the end of a logical paragraph (a certain closing tag), so I know up to which line I have to trim. The problem I am facing is that the logfile is written more or less permanently (at a rate of ~10k lines per day) and I want to reduce the portion which might be lost during the trimming to the absolute minimum. I am well aware that I cannot achieve the optimum of losing no output at all without job control (which I don't have), but I want to get as close as possible. This is what I have come up with so far ("</af>" is the closing tag I am anchoring at):
# get nr of lines in log
iNumLines=$(wc -l "$fLog" | sed 's/^ *//' | cut -d' ' -f1)
chActLine=""
(( iNumLines += 1 ))
while [ "$chActLine" != "</af>" ] ; do
    (( iNumLines -= 1 ))
    chActLine="$(sed -n "${iNumLines}p" "$fLog")"
done
sed -n "1,${iNumLines}p" "$fLog"                 # output to <stdout>
sed "1,${iNumLines}d" "$fLog" > "$chTmpDir/log"  # remove printed lines
cat "$chTmpDir/log" > "$fLog"                    # overwrite with shortened version
While this is generally doing what I want, I'd like to ask if there might be a way to further reduce the risk of losing lines, which I perceive to exist between the two final sed calls of the snippet (the one printing the head of the file and the one deleting it).
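As an aside, the backward loop above runs one sed per inspected line, re-reading the file each time; the same line number can be found in a single pass. A minimal sketch with awk (demo file name and data are made up; it assumes "</af>" sits on a line of its own, as the loop does):

```shell
# Sketch: find the line number of the last "</af>" in one pass,
# instead of running one sed per line backwards through the file.
fLog=af.log

# demo data: two complete <af>...</af> paragraphs plus a partial third
printf '%s\n' '<af>' 'a' '</af>' '<af>' 'b' '</af>' '<af>' 'c' > "$fLog"

# awk remembers the number of the last matching line it saw
iNumLines=$(awk '$0 == "</af>" { n = NR } END { print n }' "$fLog")

echo "$iNumLines"    # the trim point: line 6 in this demo
```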
I don't know how fast your machine is, but you are dealing with several processes here, so those will definitely take up some time. If I understand this correctly, we want to minimize wall-clock time to avoid loss of data. You didn't mention how large the file is; if it's really large, this might not work.
Memory is obviously faster than disk, so I suggest creating a perl script to slurp in the file, perhaps have several subroutines (if you like modularity) to take the place of the seds, etc., and write out the results. Even if you copy the perl lists a few times internally, that's still a real-time savings over disk access.
Even if you did this and it turned out not to be the final answer, there might be some parts of the perl script that would be useful, and just doing the script might suggest other alternatives ... cheers, drl
The machine is an LPAR in an IBM p570 with 2 physical CPUs (4 logical CPUs). The quantity structure is as follows:
~10k lines per day
~4MB per day
The file is the garbage collector log of a JVM (the machine is running some WebSphere 6.1 application servers) and the logfile is in XML format. That means the lines are not written at constant intervals, but always a bunch of lines (one "paragraph", so to say) at a time. The information units I want to separate each start with an "<af>" tag and end with an "</af>" tag.
Not at all! perl is definitely way slower than sed, by about a factor of 10. I came to this conclusion while working on my last project, where I had to change database dumps (frighteningly huge files) and replaced the perl programs doing it with sed - that sped up the process greatly.
As I see it, the critical part is only between the last two sed calls of the code snippet (printing the head of the file and deleting it). All the previous operations work from line 1 up to some predetermined line x of the file, and it won't hurt if additional lines come in during that time.
As an additional requirement I have to preserve the inode of the file, because the process which writes to it (the garbage collector of the JVM) will continue writing into it. This is why I used "cat > ..." instead of "mv ...".
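The difference is easy to demonstrate: redirecting into the existing file truncates and rewrites it in place, while mv would swap in a new inode. A small sketch (file names are made up for the demo):

```shell
# Sketch: why "cat tmp > log" keeps the inode while "mv tmp log" would not.
# Redirecting into the existing file truncates and rewrites it in place,
# so a writer holding the file open keeps writing to the same file.
fLog=inode-demo.log
fTmp=inode-demo.tmp

printf 'old\n' > "$fLog"
printf 'new\n' > "$fTmp"

iBefore=$(ls -i "$fLog" | awk '{ print $1 }')   # inode before

cat "$fTmp" > "$fLog"                           # overwrite in place

iAfter=$(ls -i "$fLog" | awk '{ print $1 }')    # inode after

[ "$iBefore" = "$iAfter" ] && echo "inode preserved"
```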
Interesting problem. I think it will be quite difficult to handle additional lines in such a fast-updating file using sed - after all, sed effectively reads the lines of your file into a buffer and operates on them using the pattern and hold spaces. How could it keep track of new stuff coming in?
You will have to work with something that can seek to the point up to which you want to archive, and remove that part DIRECTLY from the ever-changing logfile.
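To illustrate the seek idea: if you remember a byte offset into the live file, you can later read only what lies beyond it, so lines appended in the meantime are never part of the region you cut away. A minimal sketch using tail -c (demo file name and data are made up):

```shell
# Sketch of the "seek" idea: remember a byte offset into the live log,
# then read only what lies beyond it. Lines appended after the offset
# was taken are untouched by the archiving step, so nothing is lost.
fLog=seek-demo.log
printf '%s\n' '<af>' 'a' '</af>' > "$fLog"

iOffset=$(wc -c < "$fLog")                 # bytes up to the trim point

printf '%s\n' '<af>' 'b' >> "$fLog"        # the writer appends meanwhile

tail -c +$((iOffset + 1)) "$fLog"          # only the newly appended data
```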
What you can do is reduce the window between these two:
sed -n "1,${iNumLines}p" $fLog # output to <stdout>
sed "1,${iNumLines}d" $fLog > $chTmpDir/log # remove printed lines
And do it in one shot:
$ cat data
1 file:10:no:1011
2 file:10:file:1011
3 data:10:say:1011
4 data:10:data:1011
5 file:10:file:1011
6 file:10:file:1011
7 file:10:file:1011
8 file:10:file:1011
9 file:10:file:1011
10 file:10:file:1011
11 file:10:file:1011
12 file:10:file:1011
13 file:10:file:1011
14 file:10:file:1011
15 file:10:file:1011
16 file:10:file:1011
17 file:10:file:1011
18 file:10:file:1011
19 file:10:file:1011
20 file:10:file:1011
21 data:10:say:1011
$ cat sedscr
#!/usr/bin/ksh
iNumLines=10
sed -n "
# Get the lines to be archived and put on stdout
1,$iNumLines p
# Write the rest of (trimmed) data to temporary file. This file can be used to overwrite data.
$((iNumLines+1)),\$ w data.trimmed
" data
$ sedscr
1 file:10:no:1011
2 file:10:file:1011
3 data:10:say:1011
4 data:10:data:1011
5 file:10:file:1011
6 file:10:file:1011
7 file:10:file:1011
8 file:10:file:1011
9 file:10:file:1011
10 file:10:file:1011
$ cat data.trimmed
11 file:10:file:1011
12 file:10:file:1011
13 file:10:file:1011
14 file:10:file:1011
15 file:10:file:1011
16 file:10:file:1011
17 file:10:file:1011
18 file:10:file:1011
19 file:10:file:1011
20 file:10:file:1011
21 data:10:say:1011
I toyed with the idea of writing directly to data instead of data.trimmed, but that obviously doesn't help, since any additional lines that came in after sed loaded the file into its buffer would be lost. Basically you can't use sed, ed, etc., which operate on a "copy" of the file.
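For completeness, here is roughly how the one-shot sed could be combined with the inode-preserving "cat > ..." step from the original snippet (a sketch with made-up demo data; in the real script iNumLines would come from locating the last "</af>"):

```shell
# Sketch: the one-shot trim combined with the inode-preserving
# copy-back from the original snippet.
fLog=trim-demo.log
chTmpDir=.                                  # demo temp dir

# demo data: one complete <af> paragraph plus the start of the next
printf '%s\n' '<af>' 'a' '</af>' '<af>' 'b' > "$fLog"
iNumLines=3                                 # line of the last "</af>"

# one sed pass: archive lines 1..iNumLines, write the rest to a temp file
sed -n "1,${iNumLines} p
$((iNumLines + 1)),\$ w $chTmpDir/trim-demo.rest" "$fLog" > "$fLog.archived"

cat "$chTmpDir/trim-demo.rest" > "$fLog"    # overwrite in place, same inode
```

This still leaves the (now much smaller) window between the single sed pass and the cat, which is the residual risk discussed above.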