split files by specifying a string (bash shell)

Hi all,

I have a file of around 300 lines in which the string "SERVER" occurs around 32 times.

I need to split this file at each occurrence of "SERVER", so that each SERVER section ends up in its own file.

I am using this code
awk '/SERVER/{n++}{print > f n}' f=/vikas/list /vikas/final

But the problem is that it creates a maximum of 10 files, and I need more than 30.
I have tried using nawk, but it didn't work.
I am using bash scripting on SunOS.

Is there any other way of splitting this data?

Please help!

Thanks in advance.
Regards,
Vikas

awk '/SERVER/{n++}{output = f n; print > output; close(output) }' f=/vikas/list /vikas/final
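(The close(output) is the point of the change: plain awk keeps every output file open at the same time and gives up after roughly ten of them, which matches the 10-file ceiling you hit. Same command, just spread out with comments:)

awk '/SERVER/ { n++ }              # count SERVER lines; n selects the output file
	{ output = f n             # builds the name /vikas/list1, /vikas/list2, ...
	  print > output           # write the current line there
	  close(output) }          # ...and close it again, so awk never holds many files open at once
' f=/vikas/list /vikas/final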

Thanks vgersh99,
I will get back to you after trying this command.

Hi,
this command is making as many blank files as there are occurrences of the string SERVER in the final file. :frowning:

Please help.

awk '/SERVER/{if (n) close(output); output= f ++n} n {print >> output }' f=/vikas/list /vikas/final

Hey,
this didn't make any file, not even a blank one.
Thanks.

Hey, this command worked.
Thanks a ton, friend. It's running smoothly on my Linux machine.
Now I have to check it on the Solaris machine; I will do that tomorrow.
Thanks again.

Sorry about that - some awks' pre-increment ops don't seem to work the way they are supposed to.

awk '/SERVER/{if (n++) close(output); output= f n} n {print >> output }' f=/vikas/list /vikas/final
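(Spelled out with comments - the only change from the previous version is n++ instead of ++n in the condition, since, as noted above, some awks handle the pre-increment there unpredictably. The '>>' plus closing a file only when the next SERVER section starts means each list file is opened once and appended to, rather than being reopened - and re-truncated by '>' - for every single line:)

awk '/SERVER/ {
	if (n++) close(output)     # from the second SERVER on, close the previous list file
	output = f n               # n was just incremented: /vikas/list1, /vikas/list2, ...
	}
	n { print >> output }      # once the first SERVER has been seen, append every line
' f=/vikas/list /vikas/final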

On Solaris - use 'nawk' or '/usr/xpg4/bin/awk'

nawk '/^SERVER/ { 
	close(f); f = "/vikas/list" ++c 
	} 
{ 
	print > f 
}' /vikas/final
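(One caveat: if /vikas/final has any lines before the first line starting with SERVER, they would be printed before f has a value, which most awks reject. If that can happen, seed the file name first - /vikas/list0 below is just an arbitrary catch-all name:)

nawk 'BEGIN { f = "/vikas/list0" }	# catch-all for anything before the first SERVER line
/^SERVER/ { 
	close(f); f = "/vikas/list" ++c 
	} 
{ 
	print > f 
}' /vikas/final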

Hello friend,

this command is only making two files, list1 and list2.

Thanks.

It works fine here with 3 (and more) 'SERVER' lines...
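(If it still under-produces files on your data, one thing to check is whether every SERVER line really begins with SERVER - the pattern above is anchored with '^', so an indented or mid-line SERVER will not start a new file. Dropping the anchor matches it anywhere on the line:)

nawk '/SERVER/ { 
	close(f); f = "/vikas/list" ++c 
	} 
{ 
	print > f 
}' /vikas/final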

Hi,

I tried lots of commands, including the ones above. They all run fine on Linux machines,
BUT not on the Solaris machines; I don't know the reason behind it.

Anyway, MANY MANY THANKS to all for your time and help. I found the following command to work perfectly.

/usr/xpg4/bin/awk '/SERVER/{n++}{print > f n}' f=/vikas/list /vikas/final
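A quick sanity check that the split covered everything (same file names as in the command above):

ls /vikas/list* | wc -l        # should be around 32 - one file per SERVER section
cat /vikas/list* | wc -l       # should match 'wc -l /vikas/final', i.e. around 300 lines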

Thanks again.

Can anyone please explain to me how the command is interpreted? Thanks a lot in advance.
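(For anyone landing here later, my reading of that command, line by line:)

/usr/xpg4/bin/awk '                # the POSIX awk on Solaris; unlike old /usr/bin/awk it is not
                                   # limited to a handful of open output files, so no close() is needed
	/SERVER/ { n++ }           # every line containing SERVER bumps the counter n
	{ print > f n }            # every line (this rule has no pattern) is written to the file named
	                           # f n, i.e. /vikas/list1, /vikas/list2, ...; '>' truncates a file only
	                           # the first time it is opened in the run, after that it keeps appending
' f=/vikas/list /vikas/final       # "f=/vikas/list" assigns the awk variable f before /vikas/final is read

Any lines before the first SERVER (if there are any) would simply go to /vikas/list with no number appended.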