That could be wasteful if the file is much larger than 100k lines, since it would still read the file in its entirety. Downstream, the pipeline won't see EOF until awk eventually exits.
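For completeness, awk can be made to stop early too. A sketch on a small generated file (seq stands in for the big file, and the cutoff 3 stands in for the real 100000):

```shell
# Generate a stand-in file; in the real case this is the huge input.
seq 1 10 > /tmp/demo_head.txt

# Print the first N lines, then exit so the rest of the file is never
# read -- behaves like head -nN without scanning to EOF.
awk 'NR>3{exit} {print}' /tmp/demo_head.txt
```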
Thank you very much! head -n100000 works like a charm.
---------- Post updated at 03:06 PM ---------- Previous update was at 09:13 AM ----------
OK, new problem.
I decided that I would like to perform an action between lines 100k and 200k (for example). The easy way I tried,
head -n100 | tail -n200
did not work, and I'm guessing that tail reads the whole file first, so that would be a bad idea (the file is really big). The next thing I'm thinking of is using sed,
something like
sed -n '100,200 p' /filelocation/file | grep "string im searching for"
but it's kinda slow when the grep is added. Any help would be greatly appreciated.
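One tweak that usually helps: append a q command at the end of the range so sed quits instead of scanning the rest of the file. A sketch on a small generated file (the range 100,200 stands in for 100000,200000, and "15" is a placeholder search string):

```shell
# Stand-in for the really big file.
seq 1 1000 > /tmp/range_demo.txt

# Print lines 100-200, then quit at line 200 -- lines 201..EOF are
# never read, which matters on a multi-GB file.
sed -n '100,200p;200q' /tmp/range_demo.txt | grep "15"
```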
Thanks. I played a bit with this, and this is the part that bugs me: for lines up to, say, the first million it works just fine, even well above that, but when I tried to print a single pair of lines
sed -n '177998637,177998638{p;177998638q;}' /testdir/testfile
it took a lot of time (actually I didn't even wait for the result). Is that normal behaviour?
Yes. That command reads your file sequentially up to the 177998637th line, prints that line and the next, and then quits. So you see, to print just those 2 lines, the command still has to read the first 177.99 million lines (have to say, a huge huge file).
For an improvement in the time taken, you could replace that sed command with awk:
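Something along these lines (a sketch; awk still reads sequentially, but its plain numeric test on NR plus an explicit exit tends to be lighter than sed's line addressing). Demonstrated on a small generated file, with lines 7-8 standing in for 177998637-177998638:

```shell
# Stand-in for the huge test file.
seq 1 100 > /tmp/awk_demo.txt

# Print the two wanted lines, then bail out at the second one so the
# rest of the file is never read.
awk 'NR>=7 && NR<=8{print} NR==8{exit}' /tmp/awk_demo.txt
```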