Parse apart strings of comma-separated data with a varying number of fields

I have a situation where I am reading a text file line by line. Each line contains comma-separated fields of data, but the number of fields can vary from line to line. What I need to do is parse apart each line and write each field of data found (left to right) into a file.

Example Lines of data
---------------------
abc,123,def,456,ghi,789
123,def,456
def,456,ghi,789,jkl
abc
def,456,ghi,789

After the first read the variable $LINE would equal "abc,123,def,456,ghi,789".
Then the fields of data should be parsed and written to FILE.TXT, and should look like this:
abc
123
def
456
ghi
789

The second line is read and the variable $LINE would equal "123,def,456"; its fields of data would be parsed and written to FILE.TXT like so:
123
def
456

and so on for the remaining lines that are read into $LINE.

I have no problems with reading in the lines of data and placing each line into the $LINE variable. I need assistance with how to parse apart each line as it's read into $LINE and how to write the fields to a file one by one, from left to right. I'm pretty sure awk would be the best method, but I'm open to suggestions for more efficient methods/commands.

Can anyone help?

Something like this?

tr ',' '\n' < file > FILE.TXT

Regards

Try:

tr ',' '\n' < filename

or,

sed 's/,/\n/g' filename
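Since awk was mentioned in the question, an equivalent one-liner (a sketch; `filename` stands in for the real input file):

```shell
# -F, sets the field separator to a comma; NF is the number of
# fields on the current line, so this prints one field per line.
awk -F, '{ for (i = 1; i <= NF; i++) print $i }' filename
```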

sed -e 's/\,/^M/g' filename >newfilename

This will not work; ^M is the carriage-return character (the extra character in DOS line endings), not a newline.

Sorry, but maybe I wasn't clear enough in my original post. tr alone won't work for me because I need to do processing on each string after it's been parsed and before reading the next record. If I'm reading it right, that tr command reads input from 'file' and writes the results to FILE.TXT in one pass, which would not let me do other processing between parsing a line and reading the next one. After each read and subsequent parsing of $LINE I am going to do other processing with the parsed data before reading and parsing the next record. And so on....

Process Flow/Steps/Loop
------------------------
1) Read a record in $LINE (which I already handle via a while loop)
2) Parse $LINE and output fields to FILE.TXT (which is what I need help with)
3) Then I will do other processing using the field data written out to FILE.TXT (which I will handle)
4) Go to step 1, until end of file.

Step 2 is the only thing I need assistance with. I need to take what's in $LINE, no matter how many comma-separated elements there may be, parse them out and write them, one at a time, to FILE.TXT. Once all elements have been written out for $LINE, I will then use that data for other processing. Then I will read the next record into $LINE. And so on.
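As an aside, step 2 can also be done without a temporary file by letting the shell itself split on commas via IFS (a sketch; `datafile` is a placeholder for the real input file, and the fields are assumed not to contain glob characters like `*`, since the unquoted expansion below would expand them):

```shell
#!/bin/sh
# For each record, split $LINE on commas into positional parameters,
# then write one field per line to FILE.TXT.
while read LINE
do
    oldIFS=$IFS
    IFS=,
    set -- $LINE        # unquoted on purpose: splits $LINE on commas
    IFS=$oldIFS
    : > FILE.TXT        # truncate FILE.TXT for this record
    for field in "$@"
    do
        echo "$field" >> FILE.TXT
    done
    # step 3: other processing using FILE.TXT would go here
done < datafile
```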

Where I echo "$yf" you could do anything you need to do:

> cat infile2
abc,123,def,456,ghi,789
123,def,456
def,456,ghi,789,jkl
abc
def,456,ghi,789

> cat 2break
#!/usr/bin/bash

while read zf
do
    echo "$zf" | tr "," "\n" > infile2_t
    while read yf
    do
        echo "$yf"
    done < infile2_t
done < infile2

> 2break
abc
123
def
456
ghi
789
123
def
456
def
456
ghi
789
jkl
abc
def
456
ghi
789

I think I've got this one figured out thanks to your valuable input. Much thanks!

What I did, based on your input:

I read each record into $LINE and then do the following:

echo "$LINE" | tr ',' '\n' > FILE.TXT

Then all I do is read the FILE.TXT file for each of the individual elements/field data.
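Put together, the whole flow might look like this (a sketch; `datafile` stands in for the real input file, and the echo is just a placeholder for the real step 3 processing):

```shell
#!/bin/sh
# Read each record, split it into FILE.TXT with tr, then read
# FILE.TXT back one field at a time for further processing.
while read LINE
do
    echo "$LINE" | tr ',' '\n' > FILE.TXT
    while read FIELD
    do
        echo "field: $FIELD"    # placeholder for the real processing
    done < FILE.TXT
done < datafile
```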

Thanks again for pointing me in the right direction.