Playing with a large volume of data

Quick problem statement:
How to read/extract data from a very big file.

Details:
We have a big problem in the work we are doing. We are on a Solaris E25 platform.
There is a very big file of somewhere around 200 million records, and each record is more than 1000 columns long. The data in these columns is separated by semicolons.

Sample File:
01;1;;0001;123;;;;ZBCA10;;;;;;;;;20060116;99991
 ;/;/;/;123;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;
 ;/;/;/;123;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;123;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
01;1;;0001;421876;;;;BCA030;BCA010;;;;;;;;20060502
 ;/;/;/;421876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;421876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;421876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
01;1;;0001;42187;;;;BCA030;BCA010;;;;;;;;20060502
01;1;;0001;4216;;;;BCA030;BCA010;;;;;;;;20060502
01;1;;0001;4876;;;;BCA030;BCA010;;;;;;;;20060502
01;1;;0001;21876;;;;BCA030;BCA010;;;;;;;;20060502
01;1;;0001;876;;;;BCA030;BCA010;;;;;;;;20060502
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/
 ;/;/;/;876;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/;/

About the file:
Every line starts with either 01 or a " " space. The records follow a header-detail pattern: a row starting with 01 is a header line, and the rows below it are its details. Data columns are separated by semicolons. Column 5 stays the same between a header and its detail rows.
What I need to do:
1) Extract batches of header-detail records. For example, I need to write every 50,000 header-detail rows into a separate file.

I tried the sed command:
sed -n '100,200002p' fileabc.txt
However, the performance is not up to the mark. The problem with sed is that even when it is instructed to copy only rows 100 to 200002, it still scans the entire file.
When I tried sed on the entire file it took a couple of days to run. That's too much. I need better options.
Is there a way to make this operation run in parallel?
Is there a shell command which copies only the specific rows and does not scan the entire file?

2) I will post the second question later.

Thanks,
darshanw

The q command in sed quits early, so sed stops reading as soon as it passes the last line you want.
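For example, telling sed to quit at line 200002 means it never reads the remaining ~200 million rows. A sketch against the fileabc.txt from the question (two -e expressions, to stay friendly to the old Solaris sed):

sed -n -e '100,200002p' -e '200002q' fileabc.txt > smallerfile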
awk may be easier:

awk 'FNR > 2 && FNR < 200003 { print }
     FNR > 200002 { exit }' bigfile > smallerfile
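If the 50,000-record batches also need to respect the header-detail grouping, so that a header line starting with 01 is never split away from its detail rows, here is a sketch that counts headers instead of raw lines and starts a new output file at every 50,000th header. The batch_NNNN.txt naming is just illustrative, and it assumes the file begins with a header line, as in the sample:

awk '
    /^01;/ && ++headers % 50000 == 1 {
        # every 50,000th header opens a new output file
        if (out != "") close(out)
        out = sprintf("batch_%04d.txt", ++batch)
    }
    { print > out }
' bigfile

On Solaris, run this with nawk or /usr/xpg4/bin/awk rather than the old /usr/bin/awk.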

Maybe a dumb question:
The script that uses sed is inside another shell script. How do I pass the "q" so that sed stops after the relevant line?

Secondly, I tried the awk as well, but it doesn't print anything. Any clues why?
regards,

Stumbled and bumbled, but I found a solution:
# Use awk to create the file: rows $1 through $2 of input $3, appended to $4, with a log line to $5
awk 'NR >= '"$1"' && NR <= '"$2"' { print } NR > '"$2"' { exit }' "$3" >> "$4"
echo "$(date '+%F_%T') DD-File created --> $4" >> "$5"
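Called, for example, like this (the script name and arguments are illustrative: first row, last row, input file, output file, log file):

sh extract.sh 100 200002 fileabc.txt smallerfile.txt extract.log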

Now I can pass my regular shell script parameters to the awk script as well...