How to extract a range of lines from a file

I am reading a file that contains over 5000 lines, and I want to assign it to a shell array variable (which is limited to 1024 elements). My idea: if I could grab 1000-record chunks of the file and pipe the records out, I could loop until I reached the end, processing 1000 records at a time (rough sketch after the ranges below). For example:

lines 1-1000
lines 1001-2000
lines 2001-3000
etc...
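
Roughly what I have in mind (a sketch in ksh; process_chunk is a made-up stand-in for my real per-chunk work, and the line-range command is the piece I'm missing):

total=$(wc -l < RAGEFF.lst)
start=1
while [ "$start" -le "$total" ]; do
    end=$((start + 999))
    # <command that prints lines $start..$end of RAGEFF.lst> | process_chunk
    start=$((end + 1))
done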

Does anyone know of a Unix command that will return a range of lines from a file on standard output?
The command I was trying is:

set -A CustNo `cut -f1-19 -d',' -s RAGEFF.lst | sed 's/,/ /g' | sed 's/\L//g' | nawk '{ if (NF == 19) { print $1 } }'`
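
Broken out over lines, that pipeline is:

cut -f1-19 -d',' -s RAGEFF.lst |   # keep fields 1-19; -s drops lines with no delimiter
sed 's/,/ /g' |                    # turn the remaining commas into spaces
sed 's/\L//g' |                    # \L is not a standard escape; most seds treat this as deleting literal L's
nawk '{ if (NF == 19) { print $1 } }'   # only complete 19-field records; print the first field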

Any help would be appreciated.

Never mind. With different search parameters, I found the answer: a combination of head and tail will return any range you want. Sorry to bug the forum; I should have looked harder first.

For example, to get lines 1001-2000 you would issue the following command:

set -A CustNo `head -2000 RAGEFF.lst | tail -1000 | cut -f1-19 -d',' -s | sed 's/,/ /g' | sed 's/\L//g' | nawk '{ if (NF == 19) { print $1 } }'`
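
To work through the whole file, that drops straight into the loop from the first post. A rough, untested sketch in ksh (the clamp keeps the last, shorter chunk from re-reading earlier lines; the commented line is where each chunk's processing would go):

total=$(wc -l < RAGEFF.lst)
start=1
while [ "$start" -le "$total" ]; do
    end=$((start + 999))
    [ "$end" -gt "$total" ] && end=$total   # clamp the final chunk
    set -A CustNo $(head -$end RAGEFF.lst | tail -$((end - start + 1)) |
        cut -f1-19 -d',' -s | sed 's/,/ /g' | sed 's/\L//g' |
        nawk '{ if (NF == 19) { print $1 } }')
    # ... process ${CustNo[@]} for this chunk ...
    start=$((end + 1))
done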

You could also pipe through sed to extract the range, e.g.

blah | sed -n '1,1000p' | blah

to extract the first 1000 lines of the file
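
The same form handles a middle range, e.g. lines 1001-2000:

sed -n '1001,2000p' RAGEFF.lst

and because sed would otherwise keep reading to end-of-file, a second expression can tell it to quit once the range has printed:

sed -n -e '1001,2000p' -e '2000q' RAGEFF.lst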

cheers
ZB

I like the sed solution better than head and tail.

Thanks.

Since you are already using awk, you can use it to specify a range....

nawk -F, 'NR>1000 && NR<=2000 && NF==19 {print $1}'
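
If the bounds need to move through the file chunk by chunk, nawk's -v option passes them in as variables (lo and hi are just names I've picked), and an early exit saves scanning the rest of the file:

start=1001 end=2000
nawk -F, -v lo="$start" -v hi="$end" '
    NR > hi { exit }                    # stop reading once past the range
    NR >= lo && NF == 19 { print $1 }   # same 19-field test, print the first field
' RAGEFF.lst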

Even better!!

Thank you very much