Deleting lines from a file

How can I delete 100 lines anywhere in a file without opening the file and without renaming it?

I could think of this:

sed -n "${from},${to}p" file_to_cut_lines_from | cat > file_to_cut_lines_from

You can use this to get a block of lines from a file, deleting the rest of the lines (because of the >).
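As a small illustration of the range-print itself (written safely to a second file, so it works under any shell; the file names here are made up for the demo):

```shell
# Demo: extract a block of lines with sed's range-print (-n suppresses
# the default output, p prints only the addressed lines).
printf '%s\n' one two three four five > demo_in.txt
from=2
to=4
sed -n "${from},${to}p" demo_in.txt > demo_out.txt
cat demo_out.txt    # prints: two three four (one per line)
```

Redirecting back onto demo_in.txt itself is the part whose behaviour differs between shells, which is why this demo uses a separate output file.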

This works in ksh, but is not portable to sh or bash. This has to do with the way ksh handles the '>'. There is a post where Perderabo has given an explanation for this - but I can't seem to remember where that post is...

Actually, I have an 8GB file which contains 200,000 records, and I want to delete the first 50,000 records without opening the file. At the same time, I don't want to create another file once I delete the records, because of a space problem on the server.

Is there any way I can delete 50,000 lines from 'abc.dat' and redirect the output into 'abc.dat' itself?

ed can do this, but it does create a temp file behind the scenes.

Records are potentially very different from lines. You could have your own delimiters and have a record spanning several lines. If each line is a single record, then you could still run the following command under ksh:

sed -n "50001,200000p" file_to_cut_lines_from | cat > file_to_cut_lines_from

Though I really don't know if you can do this with such a large file. Even if it works, it will take a really long time to get through the file.
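The arithmetic is worth spelling out: to drop the first N lines you print from line N+1 to the end. Here is a small-scale sketch of the same idea, with N=3 standing in for 50,000 and a throwaway file standing in for abc.dat:

```shell
# Stand-in for the 200,000-record abc.dat: 8 numbered lines.
seq 1 8 > records.txt
# Drop the first 3 lines by printing from line 4 to end-of-file ($).
sed -n '4,$p' records.txt > kept.txt
cat kept.txt    # prints: 4 5 6 7 8 (one per line)
```

This sketch writes to a second file; piping back onto records.txt itself only behaves as intended under ksh, as discussed above.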

It's not working... it's deleting the whole file.

Are you running this under ksh? It will *only* work under ksh and *not* under bash and sh.

No, I am running this under sh. I will try it out with ksh... thanks for your help. Still, please suggest a better way to do this if you can think of one... truly, thanks in advance.

perl -pi -e '$_ = "" if ($. >= <start> && $. <= <end>);' <file>

Where <start> is the first line you want to delete and <end> is the last line you want to delete.

eg:

perl -pi -e '$_ = "" if ($. >= 2 && $. <= 4 );' file

deletes lines 2 to 4.

Thank you very much, reborg. I wonder if we can do this using awk... I really don't know.

Yes, but not if you have very long lines, or without opening another file. Also, perl will be MUCH faster.
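For completeness, an awk sketch of the same deletion (file names made up for the demo). awk has no in-place mode, so it needs a second file followed by a mv - which is exactly the "opening another file" caveat above:

```shell
printf '%s\n' a b c d e > awkfile.txt
# Keep every line whose number (NR) falls outside the range 2..4;
# the default action prints the line.
awk 'NR < 2 || NR > 4' awkfile.txt > awkfile.tmp && mv awkfile.tmp awkfile.txt
cat awkfile.txt    # prints: a e (one per line)
```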

We can also delete records using:

sed 'n,xd' inputfile > outputfile

where n is the starting line number and x is the ending line number of the range to delete.
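A quick demonstration of the d form, again on a throwaway file:

```shell
seq 1 6 > inputfile
# The d command deletes the addressed range; everything else
# passes through untouched.
sed '2,4d' inputfile > outputfile
cat outputfile    # prints: 1 5 6 (one per line)
```

Unlike the ksh pipe trick earlier in the thread, this version does write a second file, so it trades disk space for portability.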