efficiently split a 2 GB text file into two

Can an expert kindly write an efficient Linux ksh script that will split a large 2 GB text file into two?
Here are a couple of sample records from that text file:
"field1","field2","field3",11,22,33,44
"TG","field2b","field3b",1,2,3,4

The fields in the above rows are delimited by commas.

The script should check whether the first field of each row is "TG". If it is, the row should be written to a TG.txt file; otherwise, the row should be written to a NoTG.txt file.

So the result is a new TG.txt with the following row:
"TG","field2b","field3b",1,2,3,4

and a new NoTG.txt with the following row:
"field1","field2","field3",11,22,33,44

Thanks in advance. This forum rocks - with lots of helpful heroes!!

awk '{ if (index($0,"\"TG\",")==1) {print > "TG.txt" } else {print > "NoTG.txt"} }' bigfile
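
The index($0, "\"TG\",") == 1 test checks whether the line begins with the literal text "TG", (including the quotes and the trailing comma), so awk never has to split the line into fields. Both output files stay open while awk streams through the input, so the 2 GB file is read exactly once.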

Thanks a million!! That was so simple and elegant. Sorry to take up time from your busy schedule. I pay it back by answering questions in other VBA/Essbase forums where I have greater expertise.

Pushing my luck with one last question:
Can you expand this script to also exclude rows that have blanks instead of numbers? For example, rows like these should be excluded:

"TG","field2c","field3c",,,,
"TG","field2b","field3b",,,,

where the last four fields are blank.
Thanks in advance.
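
A minimal sketch of one way to extend the awk command, assuming a row should be dropped entirely when all four numeric fields are blank, i.e. the line ends in four consecutive commas as in the samples above:

awk '
  /,,,,$/ { next }                                      # skip rows whose last four fields are all blank
  index($0, "\"TG\",") == 1 { print > "TG.txt"; next }  # first field is "TG"
  { print > "NoTG.txt" }                                # everything else
' bigfile

If instead a row should be excluded when any of the four numeric fields is blank, a field-based test such as awk -F, '$4 == "" || $5 == "" || $6 == "" || $7 == "" { next } ...' could replace the regex, with the caveat that splitting on commas only works as long as the quoted fields never contain embedded commas.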