deleting lines from multiple text files

I have a directory full of text data files.

Unfortunately I need to get rid of the 7th and 8th lines from them all so that I can input them into a GIS application.

I've used an awk script to do one file at a time, but due to the sheer number of files I need some kind of loop mechanism to automate it.

The awk script used:
awk 'BEGIN{getline f;getline t}FNR==f,FNR==t{next}1' numbers.txt inputfile > outputfile

where numbers.txt is merely a file containing the numbers 7 and 8 on separate lines (the two getline calls in BEGIN consume them before inputfile is read).
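
For what it's worth, since the two line numbers never change, the same deletion can be written without the numbers.txt indirection; this is just a sketch using the same placeholder file names as above:

awk 'FNR==7 || FNR==8 {next} 1' inputfile > outputfile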

My guess is that I need a way of feeding the output of ls (on the directory) into the inputfile position, with a counter to loop through until the end.

Any suggestions will be welcome (an awk suggestion would be preferable to Perl).

Thanks all

sed or awk are good choices - here is sed:

for file in `ls *.txtdatafile`
do
      sed '7,8d' $file > tmp.tmp
      mv tmp.tmp $file
done

Thanks for the reply, Jim.

Unfortunately this sends them all into one big tmp.file. I need to create a new text file for every single text file that is in the directory.
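
For the record, one way to get a separate new file for every input while leaving the originals untouched is to let awk do the looping itself; this is only a sketch, and the .out suffix is an arbitrary choice:

awk 'FNR==1 {close(out); out = FILENAME ".out"}  # new input file: switch output
     FNR==7 || FNR==8 {next}                     # drop lines 7 and 8
     {print > out}' *.txtdatafile

awk resets FNR at the start of each file named on the command line, so no shell loop is needed; the close() keeps the number of simultaneously open output files down.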

Gack, please don't post the same question multiple times. I already replied once.
http://www.unix.com/unix-dummies-questions-answers/62666-deleting-specific-lines-all-files-directory.html#post302189269

The second line in Jim's script (and mine, too) moves the tmp file back over the original file. You can't simply redirect back onto the input file, because the shell truncates the output file before sed or awk ever gets to read it, so you'd end up with an empty file. There are various ways to avoid using temporary files, but in this particular case it's probably not worth the hassle.
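
As a sketch of one such way: if the sed on your system happens to be GNU sed, the -i option manages the temporary file behind the scenes. It's a GNU extension rather than standard sed (BSD sed spells it -i ''), so check before relying on it:

sed -i '7,8d' *.txtdatafile    # GNU sed only: edits each file in place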

The ls in backticks is an antipattern; see "An example of dealing with file names with spaces in them" for a discussion of the drawbacks. Simply use "for file in *.txtdatafile" instead.
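
Putting the pieces together, a minimal sketch of the corrected loop, with the file name quoted so spaces survive (the .tmp suffix is arbitrary):

for file in *.txtdatafile
do
      sed '7,8d' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
done

If you want to keep the originals, as asked above, redirect to a new name (e.g. "$file.out") instead of moving the tmp file back over the input.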