Loop through file and write out lines to file(s)

Hi,

I am new to ksh scripting, so your help will be much appreciated.

I have a file called file.txt which looks like this:

Header 20050702 20050703
ABC
DEF
Header 20050703 20050704
123
456
Header 20050704 20050705
XXX
YYY

What I am trying to do is write out each of the record sets to its own file. For example, file1.txt will contain:
Header 20050702 20050703
ABC
DEF

and file2.txt will contain:

Header 20050703 20050704
123
456

etc.

What is the best way to approach this?

Thank you in advance.

nawk -f Jtrinh.awk file.txt

here's Jtrinh.awk:

# every record set is 3 lines, so lines 1, 4, 7, ... start a new set
FNR % 3 == 1 {
  if ( i ) close(out)       # close the previous output file, if any
  out = "file" ++i ".txt"   # file1.txt, file2.txt, ...
}
{
  print >> out              # write the current line to the current file
}
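Run against your sample file.txt, it should produce file1.txt through file3.txt, e.g.:

$ nawk -f Jtrinh.awk file.txt
$ cat file2.txt
Header 20050703 20050704
123
456

Note it keys on line position (FNR % 3 == 1), so it assumes every record set is exactly three lines long.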

Thanks vgersh99,

It works well. However, I left out an important piece of information: I don't know for sure how many sets of records there are in the file. How would I handle that?

Thanks again.

wouldn't the posted code work for any number of record sets?
or do you mean something else?
a [non-working] sample would help.

awk 'begin{of="file0.txt"}/^Header/{close(of);of="file"++i".txt";}{print>of}' file.txt

GNU Awk 3.1.3
Copyright (C) 1989, 1991-2003 Free Software Foundation.

  1. begin != BEGIN
  2. '>' != '>>'
  3. depending on the initial assumptions: is it the string 'Header' that segregates the records, or is there some other record segregation logic?
awk 'BEGIN{of="file0.txt"}/^Header/{close(of);of="file"++i".txt";}{print>of}' file.txt
  1. begin != BEGIN --- typo, Thank you!
  2. '>' != '>>' --- right, they are different, but '>' is the better choice in this thread: awk truncates the file only on the first write and keeps it open after that, so later prints append, and a re-run won't append to files left over from an earlier run (see the quick demo after this list).
  3. depending on the initial assumptions: is it the string 'Header' that segregates the records or is there some other record segregation logic --- agree; hope Jtrinh can say more about how the file is structured.
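A quick demo of that '>' behavior (a minimal sketch; demo.txt is just a scratch file name):

awk 'BEGIN { print "one" > "demo.txt"; print "two" > "demo.txt" }'

demo.txt ends up with both lines: the file is opened (and truncated) once, on the first print, then stays open, so the second print appends to it.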

Thanks to you both,

Both BEGIN and begin seem to work the same, but which one should I use?

'Header' does segregate the records. So the rule is: every time the string 'Header' is encountered, create a new file.

Thank you again, your help makes my day.
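On BEGIN vs begin: use BEGIN. Lowercase begin is not a keyword in awk; it is just an uninitialized variable used as a pattern, which is always false, so that block never runs. It only seemed to work because the first line of file.txt is a Header line, so 'of' gets set before the first print.

For the record, here's the working one-liner expanded into a commented script (a sketch; split.awk is just a name for it, and the file1.txt, file2.txt naming is as before):

BEGIN { of = "file0.txt" }   # catch-all in case any lines precede the first Header
/^Header/ {                  # each 'Header' line starts a new record set
  close(of)                  # close the previous output file
  of = "file" ++i ".txt"     # file1.txt, file2.txt, ...
}
{ print > of }               # write the current line to the current output file

Run it as: awk -f split.awk file.txt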