I'm not a Unix guy, so excuse my ignorance... I'm the database ETL guy.
I'm trying to be proactive and devise a plan B for an ETL process: for a recast job I expect a file roughly 10X larger than what I process daily. The ETL tool may handle it, but I just don't know.
This file may need to be split, and we don't want to separate related data. I assume it would be easier to do this at the Unix level rather than in the ETL tool, provided the Unix commands have no file-size limitations.
The file will most likely be 10 GB, plus or minus a few GB; the exact size is unknown at this time.
The basic file format is as follows, with the first 3 characters of each row being the record type (100, 401, 404, 410, 411).
The file must be split into segments roughly the size of a daily run (approximately 1 GB each), and each split has to occur just before a 100 record, since all the rows that follow a 100 belong with it.
1001104vvbvnbvd
4011104ghghghgh
404111kjdkfjkdf
404111kjdkfjkdf
404111kjdkfjkdf
404111kjdkfjkdf
4103445kkjkljlk
4103445kkjkljlk
4113445kkjkljlk
4043445kkjkljlk
10011ffgfgg1250
4011104fffhghgh
404111kjddfjkdf
404111kjdkrtrdf
etc...
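
To show what I'm picturing, here's a rough Python sketch of the splitting logic. Names like bigfile.dat, the chunk naming, and the exact 1 GB target are just placeholders, and I don't even know if Python is on the box; it's only meant to illustrate "start a new output file at the first 100 record after the current chunk passes ~1 GB."

#!/usr/bin/env python
# Sketch: split a large record file into ~1 GB chunks, breaking only
# immediately before a "100" header record so that a 100 record and
# all of its following detail rows stay together in the same chunk.
# bigfile.dat, chunk_NNN.dat and the 1 GB target are placeholders.

CHUNK_TARGET = 1 * 1024 ** 3    # approximate size of one daily run (~1 GB)
INFILE = "bigfile.dat"          # hypothetical input file name

chunk_no = 1
written = 0
out = open("chunk_%03d.dat" % chunk_no, "w")

with open(INFILE, "r") as src:
    for line in src:
        # Only break just before a 100 record, and only once the
        # current chunk has reached the target size.
        if line.startswith("100") and written >= CHUNK_TARGET:
            out.close()
            chunk_no += 1
            written = 0
            out = open("chunk_%03d.dat" % chunk_no, "w")
        out.write(line)
        written += len(line)

out.close()

Each chunk ends up slightly over the target because the split waits for the next 100 record, which I think matches the "approximately 1 GB" requirement.
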
Thanks in advance. I think we use HP-UX.