Creating a CSV file from this one-liner?

I'm trying to create a CSV file by running awk and sed on a number of XML files in a directory. I'm using the commands below:

hostname; grep "BuildDate" /dir/ABCD/configuration/*/*.xml | awk -F"/" '{ print $5 }' > /tmp/tempfile.txt; grep "BuildDate" /dir/ABCD/configuration/*/*.xml | awk -F\" '{ print $2 }' >> /tmp/tempfile.txt; sed -n 's/.*<Long>\([^<]*\)<\/Long>.*/\1/p' /dir/ABCD/configuration/*/*.xml >> /tmp/tempfile.txt

The issue I have is that I can't write and deploy a shell script onto the box I'm running the above commands on (not allowed, basically -- insane, I know), so I'm having to run the whole thing as a one-liner in the terminal when logging onto the box, separating commands with ';'. This returns the info I want in the following format:

hostname
SERVER_1
SERVER_2
SERVER_3
1.5.019.01
1.5.019.02
1.5.016.03
This is a description of server1
This is a description of server2
This is a description of server3

but I need to generate a csv file in the following format:

hostname,SERVER_1,1.5.019.01,This is a description of server1
hostname,SERVER_2,1.5.019.02,This is a description of server2
hostname,SERVER_3,1.5.016.03,This is a description of server3

Is there a way to get the above format by extending what I already have?

Well... your intermediate output is pretty typeless, so it's not hard to write an awk script (or something similar) to parse it and turn it into CSV, but you have to assume the number of SERVER_* entries is always three. My guess is that there might be some type data that could pass through... any chance we can see the actual data sources? I'm just trying to help this handle a variable number of entries instead of exactly three.
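As a sketch of that awk approach, assuming the intermediate file always holds the hostname on the first line followed by three equal-sized blocks (names, versions, descriptions) -- the sample input below is copied straight from the question, and the file paths are the question's:

```shell
# Recreate the intermediate file from the question's sample output.
cat > /tmp/tempfile.txt <<'EOF'
hostname
SERVER_1
SERVER_2
SERVER_3
1.5.019.01
1.5.019.02
1.5.016.03
This is a description of server1
This is a description of server2
This is a description of server3
EOF

# Buffer everything after the hostname line, then treat the buffer as
# three equal-sized thirds: names, versions, descriptions.
awk 'NR == 1 { host = $0; next }
     { line[NR - 1] = $0 }
     END {
         n = (NR - 1) / 3
         for (i = 1; i <= n; i++)
             print host "," line[i] "," line[n + i] "," line[2 * n + i]
     }' /tmp/tempfile.txt > /tmp/out.csv
cat /tmp/out.csv
```

This prints one CSV row per server, but it silently produces garbage if the three blocks aren't the same length -- which is exactly why seeing the real data sources would help.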

If you show us a reasonable example of your XML files, I can probably provide you a simple XSLT stylesheet that will generate a CSV file.
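Short of XSLT, the existing pipeline could also be extended with `paste`, which joins files line for line. A minimal sketch, with stand-in data files in place of the question's grep/awk/sed output streams (the filenames here are hypothetical):

```shell
# Stand-in column files; in the real pipeline each would be filled by one
# of the grep/awk/sed stages from the question.
printf '%s\n' SERVER_1 SERVER_2 SERVER_3           > /tmp/names.txt
printf '%s\n' 1.5.019.01 1.5.019.02 1.5.016.03     > /tmp/versions.txt
printf 'This is a description of server%s\n' 1 2 3 > /tmp/descriptions.txt

# Join the three columns with commas, then prefix every row with the
# hostname so each line becomes hostname,name,version,description.
paste -d, /tmp/names.txt /tmp/versions.txt /tmp/descriptions.txt |
    sed "s/^/$(hostname),/" > /tmp/servers.csv
cat /tmp/servers.csv
```

Like the awk version, this assumes the three streams are the same length and in matching order.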