Can I split a 10 GB file into 1 GB pieces using my repeating data pattern?

I'm not a unix guy, so excuse my ignorance... I'm the database ETL guy.

I'm trying to be proactive and devise a plan B for an ETL process where I expect a file 10x larger than what I process daily, for a recast job. The ETL tool may handle it, but I just don't know.

This file may need to be split, and we don't want to break up related data. I assume it would be easier to do it at the unix level rather than in the ETL tool, provided there are no file-size limitations with the unix commands.

The file will most likely be 10 GB, plus or minus a few GB; the exact size is unknown at this time.

The basic file format is as follows, with the first 3 characters being the record type (100, 401, 404, 410, 411).

The file must be split into segments roughly equal to a daily run, approximately 1 GB in size, and each split has to occur just before a 100 record, as all the rows that follow a 100 belong together.

1001104vvbvnbvd
4011104ghghghgh
404111kjdkfjkdf
404111kjdkfjkdf
404111kjdkfjkdf
404111kjdkfjkdf
4103445kkjkljlk
4103445kkjkljlk
4113445kkjkljlk
4043445kkjkljlk
10011ffgfgg1250
4011104fffhghgh
404111kjddfjkdf
404111kjdkrtrdf
etc...

Thanks in advance. I think we use HP-UX.

When posting code, data or logs, use CODE tags for better readability and to keep formatting (indentation) etc. Ty.

$> awk '/^100/ {z++; print $0 >> "file_"z; next} {print >> "file_"z}' z=0 infile
$> cat file_1
1001104vvbvnbvd
4011104ghghghgh
404111kjdkfjkdf
404111kjdkfjkdf
404111kjdkfjkdf
404111kjdkfjkdf
4103445kkjkljlk
4103445kkjkljlk
4113445kkjkljlk
4043445kkjkljlk
$> cat file_2
10011ffgfgg1250
4011104fffhghgh
404111kjddfjkdf
404111kjdkrtrdf
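
Here is the same logic spelled out with comments (equivalent output; the redirection target is parenthesized to avoid ambiguity in some awk versions):

$> awk '
     /^100/ { z++ }                # each 100 record starts a new group -> switch to the next output file
     { print > ("file_" z) }       # every line, including the 100 itself, goes to the current file
   ' infile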

Generally, for splitting files just by size, you can use the "split" command if it is available on your OS.
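
For example (just a sketch; "infile" and the "chunk_" prefix are placeholders, and this splits blindly by line or byte count, so it will not respect the 100-record boundaries):

$> split -l 1000000 infile chunk_   # 1,000,000 lines per piece -> chunk_aa, chunk_ab, ...
$> split -b 1024m infile chunk_     # or roughly 1 GB per piece by byte count, if your split supports -b with an m suffix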

Thanks for your reply.

That would work if I wanted a million tiny files, one for each record segment.

I would like to take the first million rows and cut there just like your script did, building 10 files from 1 giant file.

I could easily split a file into equal portions, but the split cannot occur in the middle of a transaction.

Could I spool 1 million rows, then split... spool the next million... split... etc., etc.?

nawk '
   FNR==1          { out = FILENAME "_" ++cnt }                        # open the first chunk
   !(FNR % chunk)  { limit=1 }                                         # chunk size reached: cut at the next 100 record
   limit && /^100/ { close(out); out = FILENAME "_" ++cnt; limit=0 }   # start a new chunk just before this 100 record
   { print > out }' chunk=100000 myHugeFile
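
The chunk names (myHugeFile_1, myHugeFile_2, ...) come from the FILENAME "_" ++cnt naming above. As a quick sanity check afterwards, every chunk should begin with a 100 record:

$> head -1 myHugeFile_*          # first line of each chunk should be a 100 record
$> grep -c '^100' myHugeFile_*   # number of transactions per chunk
$> wc -l myHugeFile_*            # line counts per chunk (every chunk except the last should have at least "chunk" lines)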