Pulling out fields from a file

Hi,
I have a file that contains 1400 lines similar to the one shown below:

NAME=sara, TOWN=southampton, POSTCODE=SO18777, EMAIL=sara@hotmail.com, PASSWORD=asjdflkjds etc etc (note: this is one line).

Each line has the same fields, but they appear in a different order on each line. For example, the line beneath the one shown above is:

TOWN=southampton, PASSWORD=asjdflkjds, NAME=sara, EMAIL=sara@hotmail.com, POSTCODE=SO18777

I want to pull out only the POSTCODE and NAME fields (each field up to the following comma) from this file and write them to another file.

However, I have not been able to do this with sed or awk.

Any ideas?

One method:

perl -ne 'print "$1 " if /(POSTCODE=[^,]*)/; print "$1\n" if /(NAME=[^,]*)/' filename

Don't forget Perl's motto: TIMTOWTDI (There's more than one way to do it).

Another method: three commands, a little longer, but easier to understand. Here I assume there are five fields on each line of the source file:

(awk -F "," '{print FNR, $1}' filename ; \
 awk -F "," '{print FNR, $2}' filename ; \
 awk -F "," '{print FNR, $3}' filename ; \
 awk -F "," '{print FNR, $4}' filename ; \
 awk -F "," '{print FNR, $5}' filename) | grep "NAME=" | sort > tab1

(awk -F "," '{print FNR, $1}' filename ; \
 awk -F "," '{print FNR, $2}' filename ; \
 awk -F "," '{print FNR, $3}' filename ; \
 awk -F "," '{print FNR, $4}' filename ; \
 awk -F "," '{print FNR, $5}' filename) | grep "POSTCODE=" | sort > tab2

join tab1 tab2

"awk" outputs the fields with line number (record number), then we "join" them together by line numebr. Just like we make a query / joint between 2 tables, "select tab1.NAME, tab2.POSTCODE from tab1, tab2 where tab1.linenumber=tab2.linenumber"
:wink:
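To see the join in action, here is the same three-step recipe run end to end on a two-line sample file (file names and data are made up for illustration; `join` matches on the first whitespace-separated field of each file, which here is the line number):

```shell
# Sample input: same fields, different order on each line (hypothetical data).
cat > /tmp/source.txt <<'EOF'
NAME=sara, TOWN=southampton, POSTCODE=SO18777, EMAIL=sara@hotmail.com, PASSWORD=asjdflkjds
TOWN=leeds, PASSWORD=zzz, NAME=bob, EMAIL=bob@example.com, POSTCODE=LS1234
EOF

# Emit "linenumber field" pairs for all five columns, keep only the NAME= rows.
(awk -F "," '{print FNR, $1}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $2}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $3}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $4}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $5}' /tmp/source.txt) | grep "NAME=" | sort > /tmp/tab1

# Same again, keeping only the POSTCODE= rows.
(awk -F "," '{print FNR, $1}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $2}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $3}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $4}' /tmp/source.txt ; \
 awk -F "," '{print FNR, $5}' /tmp/source.txt) | grep "POSTCODE=" | sort > /tmp/tab2

# Join the two tables on the shared line number.
join /tmp/tab1 /tmp/tab2
# → 1 NAME=sara POSTCODE=SO18777
# → 2 NAME=bob POSTCODE=LS1234
```

Note that `sort` here sorts the line numbers as text, not numerically; that is fine for `join`, which only requires both files to be sorted the same way on the join field.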