Need awk script for removing duplicate records

I have a log file containing traffic lines like these:

2011-05-21 15:11:50.356599  TCP (6), length: 52) 10.10.10.1.3020 > 10.10.10.254.50404: 
2011-05-21 15:11:50.652739  TCP (6), length: 52) 10.10.10.254.50404 > 10.10.10.1.3020: 
2011-05-21 15:11:50.652558  TCP (6), length: 89) 10.10.10.1.3020 > 10.10.10.254.50404: 
2011-05-21 15:11:50.852325  TCP (6), length: 32) 10.10.10.1.3020 > 10.10.10.254.50404: 

The idea is to collapse the lines that are repeated more than once (same source and destination), write how many times each one occurred, and sum their length fields. I also want to rearrange the fields to match the following (a rough sketch of the logic follows the sample):

2011-05-21 15:11:50.356599  TCP (6)  length 141 10.10.10.1  3020  >   10.10.10.254  50404   3
2011-05-21 15:11:50.652739  TCP (6)  length  52 10.10.10.254 50404 > 10.10.10.1  3020  1
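
Something like the untested sketch below is what I have in mind. It assumes the field layout is fixed (date in $1, time in $2, protocol in $3-$4, length in $6, endpoints in $7 and $9), that the endpoints are plain IPv4 dotted quads (so the fifth dot-separated piece is the port), and that the timestamp of the first occurrence is the one to keep:

awk '{
    len = $6;  sub(/\)$/, "", len)          # "52)"  ->  "52"
    dst = $9;  sub(/:$/,  "", dst)          # strip the trailing colon
    key = $7 SUBSEP dst                     # source/destination pair is the key

    if (!(key in cnt)) {                    # first occurrence: keep stamp and proto
        stamp[key] = $1 "  " $2
        proto[key] = $3 " " $4
        sub(/,$/, "", proto[key])           # "TCP (6)," ->  "TCP (6)"
    }
    cnt[key]++
    sum[key] += len                         # sum the length field per connection
}
END {
    for (k in cnt) {
        split(k, ep, SUBSEP)
        ns = split(ep[1], s, ".")           # IPv4 assumed: a.b.c.d.port
        nd = split(ep[2], d, ".")
        printf "%s  %s  length %d %s.%s.%s.%s  %s  >  %s.%s.%s.%s  %s  %d\n",
               stamp[k], proto[k], sum[k],
               s[1], s[2], s[3], s[4], s[ns],
               d[1], d[2], d[3], d[4], d[nd],
               cnt[k]
    }
}' file.txt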

Here is my attempt so far, but it is not enough:

awk '{ k = substr($0, 28); x[k]++; y[k] = $2 } END { for (i in x) printf "%s %s %d\n", y[i], i, x[i] }' file.txt

which produces:

15:11:50.356599  TCP (6),  length: 52 10.10.10.1.3020 > 10.10.10.254.50404   3
15:11:50.652739  TCP (6),  length: 52 10.10.10.254.50404 > 10.10.10.1.3020  1
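
What is still missing from that attempt is the per-connection length summation and splitting each ip.port endpoint into separate fields. For the splitting part, assuming the addresses are always IPv4 dotted quads, the port is simply the fifth dot-separated piece:

awk 'BEGIN {
    n = split("10.10.10.1.3020", a, ".")    # IPv4 assumed: a.b.c.d.port
    printf "%s.%s.%s.%s  %s\n", a[1], a[2], a[3], a[4], a[n]
}'
# prints: 10.10.10.1  3020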

I can see the length changing (though I cannot spot the pattern), the dots becoming spaces or tabs, the record counting, and the time field being saved or overwritten, but I still do not fully understand your requirements.