Combine multiple unique lines from event log text file into one line, use PERL or AWK?

I can't decide whether I should use AWK or Perl. After poring over these forums for hours today, I decided I'd post something and see if I couldn't get some advice.

I've got a text file full of hundreds of events in this format:

Record Number : 1
Records in Seq : 1
Offset in Seq : 1
Time : DD/MM/YY 00:00:00
Vendor ID : XXXXXXX
Application ID : XXXXX
Application Version : XXXXX
API Library : XXX
API Version : XXXX
Host Name : HOST1
OS Name : UNIX
OS Revision : Ver2
Client Host :
Process ID : 00000000
Task ID : 00000000
Function Class : N/A
Action Code : CODE
Text : Any length of description could potentially go here
Initialization type : Any length of description here also
Username : USERID
Activity ID : ACTIVITYID

Each one of these events is 20 lines long, with the last line being the Activity ID. Then there is a blank line, and the start of a new event.

I want to do the following:
1) Open a text file. Remove the first 5 lines as they contain information that is not needed.
2) Combine all 20 lines of each event into one string, with a delimiter of '^' between each header. I chose '^' because it's the only special character I could find that doesn't appear anywhere in this file. Ideally it would look like this after being collapsed:

Record Number : 1 ^ Records in Seq : 1 ^Offset in Seq : 1 ^ Time : DD/MM/YY 00:00:00 ^ Vendor ID : XXXXXXX ^Application ID : XXXXX ^ Application Version : XXXXX ^ API Library : XXX ^ API Version : XXXX ^ Host Name : HOST1 ^ OS Name : UNIX etc etc

3) Once it gets to the end of one event, I want it to loop, read the next 20 lines as a new event to collapse, and proceed through the entire text file that way.
4) It would be nice to get rid of the spaces after the ':', but that's not necessary.
5) If I can get that far, I then have to set this up so I only pull the most recent events from the file (i.e. that day's records only).

I am new to both Perl and AWK, with more experience in Python. I have pored over these forums and the tutorial pages and can find many examples of how to collapse lines, but they lack detail as to what each command does. I'm solution agnostic, so either Perl or AWK is fine, as long as someone can explain all the little pieces of their code so I can LEARN it, understand it, and possibly modify it in the future! The more info the better! Thanks in advance.

awk is more than adequate for the task. You can use the ORS (output record separator) built-in to replace the usual line feed with the '^' character and string the log lines together. Another built-in, NF (number of fields), will tell you when a new section is coming. Yet another built-in, NR (record number), will help you bypass the first 5 lines.

As an example:

awk '{if(NR<=5)next; if(NF>0){ ORS="^" }else{ ORS="\n" } print $0}' your_log_file
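
If it helps to see each piece spelled out, here is the same NR/NF/ORS idea as a commented script. It is only a sketch of the approach above; the file name collapse.awk is just a name picked for this example, and your_log_file is a placeholder for your real file:

NR <= 5 { next }        # skip the first 5 lines of the file entirely
NF > 0  { ORS = "^" }   # non-blank line: end it with '^' instead of a newline
NF == 0 { ORS = "\n" }  # blank line: end the collapsed event with a real newline
        { print $0 }    # print the current line followed by whatever ORS now is

Save it as collapse.awk and run:

awk -f collapse.awk your_log_file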
Another approach is awk's paragraph mode: setting RS to the empty string makes each blank-line-separated event one record, FS="\n" turns every line of the event into a field, and OFS="^" joins them back up:

tail +6 yourfile | awk '{$1=$1;gsub(" *: *",":")}1' RS= FS="\n" OFS="^"

(depending on your OS you may want to try : tail -n +6 yourfile | awk .... )
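
Since you asked for every little piece to be explained, here is that same one-liner spread out with comments. Treat it as a sketch (yourfile is a placeholder):

tail -n +6 yourfile | awk '
{
    # RS="" (paragraph mode) makes each blank-line-separated event one record,
    # and FS="\n" makes every line of that event a separate field.
    $1 = $1               # touching a field forces awk to rebuild $0, joining the
                          # fields with OFS ("^") instead of the original newlines
    gsub(" *: *", ":")    # squeeze the spaces around every ":" (your point 4)
    print                 # print the collapsed event as one line
}' RS= FS="\n" OFS="^"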

Or do it all in one awk command, stripping the 5 unwanted lines from the front of the first record inside awk itself:

awk 'NR<2{sub(".*"$6,$6)}{$1=$1;gsub(" *: *",":")}1' RS= FS="\n" OFS="^" yourfile
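
For your point 5 (only that day's records), the same paragraph-mode trick lets you print only the events whose Time line carries today's date. This is just a sketch: it assumes the Time field really is in DD/MM/YY form and that your date command accepts the +%d/%m/%y format, so adjust both if your log differs:

tail -n +6 yourfile | awk -v today="$(date +%d/%m/%y)" '
$0 ~ ("Time : " today) {  # keep only events whose Time line shows today's date
    $1 = $1               # rebuild the record so its lines are joined with "^"
    gsub(" *: *", ":")    # optional: drop the spaces around each ":"
    print
}' RS= FS="\n" OFS="^"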

A method to filter out any number of (possibly non-consecutive) fields using awk:

Put the following in a file called myawk.awk (or whatever name you like):

BEGIN {
    FS = " : ";                     # split each line into header ($1) and value ($2)
    Skip["Record Number"] = 1;
    Skip["Records in Seq"] = 1;
    # put whatever you want filtered out using the above format
}
{
    if ($1 == "") {                 # blank line: the current event is finished
        printf("\n")
    } else if (!($1 in Skip)) {     # keep the line unless its header is listed in Skip
        printf("^%s:%s", $1, $2)    # print it as header:value with the '^' delimiter
    }
}
Then run it with:

awk -f myawk.awk your_log_file
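
If you also want to drop the 5 unwanted lines at the top of the file (point 1 of your list), you can feed the file through tail first, for example:

tail -n +6 your_log_file | awk -f myawk.awk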