As part of a quiz in my Unix class I was asked to write a program that asks for a file name, prints read errors, and "reverses the elements in a list."
I used the 'tac' command in my solution, but I was then lectured for five minutes about the "limitations" of 'tac' and how a 'for' loop would have been more robust. I was also told such a command wouldn't work on a "one million" line text file. Is this true? And if so, is it really a relevant issue?
$
$ # create a million-line file
$ perl -le 'for (1..1_000_000){print "this is line $_"}' >file_1mil.txt
$
$ wc file_1mil.txt
1000000 4000000 20888896 file_1mil.txt
$
$
$ time tac file_1mil.txt
this is line 1000000
this is line 999999
this is line 999998
this is line 999997
this is line 999996
this is line 999995
this is line 999994
this is line 999993
this is line 999992
this is line 999991
this is line 999990
...
...
...
this is line 10
this is line 9
this is line 8
this is line 7
this is line 6
this is line 5
this is line 4
this is line 3
this is line 2
this is line 1
real 4m9.095s
user 0m7.186s
sys 0m21.328s
$
$
It does work with a million lines, and I think it's pretty fast. (Note that most of the wall-clock time above is likely the terminal rendering a million lines; the user and sys times are only about 28 seconds combined.)
My main concern would be that tac would read the whole file into memory before reversing it, which has obvious limits. I tried it with strace and found that GNU tac, at least, does no such thing. It seeks to EOF-8K and reads 8K chunks, seeking upwards in 8K jumps. It might have a line-size limit of 8K, but it can handle files of arbitrary size.
Of course, doing so requires seeking, which it can't do if the input is a pipe. In that case it creates a temporary file, copies the input into it, and then reads backward through that temp file as described above. So for piped streams it may be limited by free disk space in /tmp.
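You can exercise both paths from the shell. This is just a sketch assuming GNU tac and the million-line file created above:

```shell
# Seekable input: tac reads the file backwards in chunks,
# so memory use stays small regardless of file size.
tac file_1mil.txt > /dev/null

# Piped input: tac can't seek a pipe, so it first spools the
# stream to a temporary file (usually under /tmp) and then
# reverses that -- bounded by free disk space, not RAM.
seq 1 1000000 | tac > /dev/null

# Either way, the line order comes out reversed:
printf 'a\nb\nc\n' | tac
```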
I don't think the real issue is tac's limitations; rather, you were asked to write a program and show what logic you would use (instead of reaching for the out-of-the-box tac).
Don't you think so?