Find lines in text file with certain data in first field

Hi all,

Sorry for the title; I wasn't sure how to word my question, so I'll get right to it. In my text file, I need to find all lines that have the same data in the first field, then create a new file in which the matching lines are merged into one. My original file looks something like this:

a1 test1
a1 test2
b2 test1
b2 test2
a2 test1
a2 test2
b1 test1
b1 test2
...

And I want to output a new file to look like this:

a1 test1, test2
b2 test1, test2
a2 test1, test2
b1 test1, test2
...

Please help!!

If you have Python:

#!/usr/bin/env python3
# Group each line's second field under its first field,
# then print every key with its merged values.
d = {}
with open("file") as fh:
    for line in fh:
        fields = line.split()
        d.setdefault(fields[0], []).append(fields[1])
for key, values in d.items():   # dicts preserve insertion order in Python 3.7+
    print(key, ", ".join(values))

output

# ./test.py
a1 test1, test2
b2 test1, test2
a2 test1, test2
b1 test1, test2
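
If you like, collections.defaultdict expresses the same grouping a little more concisely. This is only a sketch of the same idea; it still assumes the input file is named "file" and that each line has exactly two fields:

#!/usr/bin/env python3
from collections import defaultdict

groups = defaultdict(list)         # first field -> list of second fields
with open("file") as fh:
    for line in fh:
        key, value = line.split()  # assumes exactly two fields per line
        groups[key].append(value)

for key, values in groups.items():
    print(key, ", ".join(values))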

Given your sample, you may try something like this
(use gawk, nawk, or /usr/xpg4/bin/awk on Solaris):

awk 'END { print f, r[f] }
f && $1 != f { print f, r[f] } 
{ r[f = $1] = r[$1] ? r[$1] ", " $2 : $2 }
' infile

Or, if order doesn't matter, here's a shorter version:

awk '{r[$1]=r[$1] ", " $2} END {for (f in r) print f, substr(r[f], 3)}' infile

If order does matter, you can always pipe the result through sort.
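
The same trick applies to the Python sketch above: to get output in sorted key order rather than input order, iterate over the sorted keys (this reuses the groups dict from that sketch):

for key in sorted(groups):         # lexical order: a1, a2, b1, b2
    print(key, ", ".join(groups[key]))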

Actually, the array in my previous post is not needed:

awk 'END { print r } 
f && $1 != f { print r; r = x } 
{ r = r ? r ", " $2 : (f = $1) FS $2 }
' infile
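
In case the awk is cryptic: this array-free version is a single streaming pass that buffers one group at a time and flushes it whenever the first field changes. Here is a rough Python equivalent of that logic (a sketch only; it assumes lines with the same key are adjacent, as in your sample, and that the input file is named "infile"):

#!/usr/bin/env python3
current = None   # key of the group being collected
values = []      # second fields seen so far for that key
with open("infile") as fh:
    for line in fh:
        key, value = line.split()
        if key != current:          # first field changed: flush the group
            if current is not None:
                print(current, ", ".join(values))
            current, values = key, []
        values.append(value)
if current is not None:             # flush the final group
    print(current, ", ".join(values))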