sed or awk to convert text files with recurring headings to rows and columns

I have many text file reports generated by an Information Assurance tool that I need to get into .CSV or Excel tab-delimited format. I want to use sed or awk to grab all the information in the sample text file below and create the column headings: Problem ID, Risk Level, Category, Description, How To Fix, Affected Machines. Then I want to populate the cells (or fields?) with the data seen below. I think I can do this by grabbing everything between two of the heading strings, e.g. grab everything between Risk Level: and Category: and add that to the Risk Level field, then grab everything between Category: and Description: and add that to the Category column.

Problem ID: 3454
Risk Level: Low
Category: Accounts
Description: Local user account does not require a password.
How To Fix: To disable the allowance of blank password, change the following registry key: HKLM:System\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse Set value to DWORD=1
Affected Machines: Student001 student0057 student6745.

Problem ID: 433
Risk Level: Med
Category: DoS
Description: Microsoft SynAttackProtect
How To Fix: Set: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\SynAttackProtect registry value to 1 (better protection) or 2 (best protection).
Affected Machines: Student021 student0032 student67.

Problem ID: 3454
Risk Level: High
Category: Miscellaneous
Description: Apple QuickTime 7.1.6 Update - Windows
How To Fix: Update to version 7.1.6 or later of QuickTime.
Affected Machines: Student011 student00907 student45.

Here is where I need to be:

input:

name:leo
age:28

name:stt
age:24

code:

awk 'BEGIN { FS = ":" }
NF {                             # skip the blank lines between records
	if ($1 == "name")
		name[++n] = $2
	else
		age[n] = $2
}
END {
	print "NAME    AGE"
	for (i = 1; i <= n; i++)
		print name[i] "   " age[i]
}' a

output:

NAME    AGE
leo   28
stt   24

I can't use a colon as FS because the Description: and How To Fix: fields will be chock full of colons. Unless I somehow change the separators to something else first, e.g. find/replace 'name:' with 'name*' and 'age:' with 'age*', then use * as FS. This is why I wanted to populate the Category: field with all data between the Category: and Description: strings, and the same for the other fields.
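One way to sketch that idea without touching FS at all: split each line at the FIRST colon only, with index(), so colons inside the values are left alone. This is only a sketch - the record test on ^Problem ID: and the report.txt filename are assumptions based on your sample:

```shell
# Sketch: one CSV row per record; cut each "Label: value" line at the
# first colon only, so colons inside Description/How To Fix survive.
awk '
  /^Problem ID:/ { if (row != "") print row; row = "" }  # new record starts
  {
    p = index($0, ":")                  # position of the FIRST colon
    if (p) {
      v = substr($0, p + 2)             # everything after ": "
      gsub(/"/, "\"\"", v)              # escape embedded quotes for CSV
      row = (row == "" ? "" : row ",") "\"" v "\""
    }
  }
  END { if (row != "") print row }
' report.txt
```

This prints one quoted CSV row per record, in the order the labels appear; a header row could be emitted once from the text before the first colon of each line.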

Assuming all the fields appear in the SAME order for the different records/blocks...
Not sure if you wanted ONE output .csv file or ONE .csv PER input file (as you may have many) - this assumes ONE output file for ALL the input files.

nawk -f bj.awk myInputFile.txt > myReport.csv

bj.awk:

BEGIN {
  RS = ""              # paragraph mode: records are blank-line-separated blocks
  FS = "\n"            # one field per line within each record
  qq = "\""
}
NR == 1 {              # header row: the labels from the first record
  for(i=1; i<=NF; i++)
     printf("%s%s%s%s", qq, substr($i, 1, index($i, ":")-1), qq, (i==NF) ? "\n" : ",")
}
{                      # every record (including the first): the values after ": "
  for(i=1; i<=NF; i++)
     printf("%s%s%s%s", qq, substr($i, index($i, ":")+2), qq, (i==NF) ? "\n" : ",")
}
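A quick sanity check of the index() split used above - cutting at the first colon only means registry paths full of colons come through untouched (the sample string here is made up):

```shell
# Cut at the FIRST colon; later colons are part of the value.
awk 'BEGIN {
  line = "How To Fix: Set HKLM:System\\CurrentControlSet\\Control\\Lsa value to 1"
  print substr(line, index(line, ":") + 2)
}'
# prints: Set HKLM:System\CurrentControlSet\Control\Lsa value to 1
```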

Suppose I have more than 2 fields, like name, age, total, avg - how does the script change?
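The nawk answer above is label-agnostic - it prints one column per line of each block, so extra fields need no changes at all. If you'd rather extend the simpler name/age script, a sketch with those four labels (assuming the same one-label:value-per-line input file a, blank line between records):

```shell
awk 'BEGIN { FS = ":" }
NF {                               # skip blank lines between records
  if      ($1 == "name")  name[++n] = $2
  else if ($1 == "age")   age[n]    = $2
  else if ($1 == "total") total[n]  = $2
  else if ($1 == "avg")   avg[n]    = $2
}
END {
  print "NAME  AGE  TOTAL  AVG"
  for (i = 1; i <= n; i++)
    print name[i] "  " age[i] "  " total[i] "  " avg[i]
}' a
```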

With this script I got the output.

Script:
-------

awk '
/siteid/ { printf "%s ", $3 }
/spc/    { printf "%s ", $3 }
/rset/   { printf "%s\n", $3 }
' log

Output:
-------
HYD 100 del
DEL 200 hyd