I get a file whose entire content is on a single line.
The file contains XML data with 3000 records, but all on that one line, which makes it difficult to process with standard Unix tools.
I decided to insert a newline character at every occurrence of a particular string in this file (say, replacing "<record>" with "\n<record>"), so that the file has multiple lines without altering the XML data in it.
I have tried sed, awk and perl commands, but these tools apparently cannot handle a file with such a long line.
I cannot use 'fold', as it breaks the file at a fixed width, splitting XML tags and XML data.
1) What is your file size?
2) Is your sed/awk implementation compiled for a 64-bit or 32-bit platform?
(What does the command file <yourfile> report?)
3) Which sed command did you try?
Did you try something like:
cat infile | sed 's/</#</g' | tr '#' '\n'
(The UUOC is just for test purposes, to see whether sed handles the data better as a stream than as a file.)
(Or choose a character other than the hash #; pick one that doesn't appear in your original file.)
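To keep the output valid for your case, the same sentinel trick can be narrowed so it only splits at the "<record>" tag rather than at every "<". A sketch, assuming "#" never occurs in your data (the file names infile/outfile are placeholders):

```shell
# Tiny stand-in for the real single-line file
printf '<root><record>a</record><record>b</record></root>' > infile

# Mark each "<record>" with a sentinel character, then let tr, which works
# byte by byte and has no line-length limit, turn the sentinel into a newline.
sed 's/<record>/#<record>/g' infile | tr '#' '\n' > outfile

cat outfile
```

Note that tr itself never buffers a whole line, so even if sed still fails on the unbroken input, the diagnostic above tells you the problem is in sed's line handling rather than in the pipeline.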