Combine a data file with a master data file, urgent!

Hi guys, my supervisor has asked me to solve this problem within 7 days. I've spent 3 days thinking about it but couldn't come up with any ideas.
Please give me your thoughts on the following problem.

I have index.database, which contains only the index dates:
1994
1995
1996
1997
1998
1999

I have small.database.csv that contains data for some of the indexed dates but not all of them:

1995, california, A3,B6
1999, vermont, A4,B9

I want to merge small.database.csv into index.database to produce a combined.database.csv that looks like this:

1994,,,
1995, california, A3,B6
1996,,,
1997,,,
1998,,,
1999, vermont, A4,B9

Shell scripts or Perl would both be fine.

Thanks a lot.
My supervisor is after me on this one.

Try...

$ head file?
==> file1 <==
1994
1995
1996
1997
1998
1999

==> file2 <==
1995, california, A3,B6
1999, vermont, A4,B9
$ join -t , -a 1 -o 1.1,2.2,2.3,2.4 file1 file2
1994,,,
1995, california, A3,B6
1996,,,
1997,,,
1998,,,
1999, vermont, A4,B9
$

It didn't work!
The join command requires both files to be sorted on the index field.
What I have as the index field is a date:
07/08/1998
Join can't figure that out on its own; all it sees is 07.

Please help.
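
For what it's worth, join only needs both inputs sorted the same way on the key, not in calendar order. One workaround (just a sketch, assuming the key is an MM/DD/YYYY date in the first comma-separated field; the script name normalize.py is made up) is to rewrite the key as YYYY-MM-DD, so that an ordinary sort -t, -k1,1 of each file lines the keys up and the join invocation above can match them:

#!/usr/bin/python
# Sketch: rewrite an MM/DD/YYYY key as YYYY-MM-DD so that plain lexical
# sorting (what sort and join rely on) is also chronological.
import sys

for line in open(sys.argv[1]):
    fields = line.rstrip("\n").split(",")
    mm, dd, yyyy = fields[0].strip().split("/")
    fields[0] = "%s-%s-%s" % (yyyy, mm, dd)
    print(",".join(fields))

Run it over both files (e.g. ./normalize.py index.database > index.normalized), sort each result on the first field, and the join command shown earlier should then work as-is.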

If you can use Python, here's an alternative:

#!/usr/bin/python
# For every index line, scan file2 for a record whose first field matches.
flag = 0
for line in open("file1"):
    line = line.strip()
    # Note: file2 is re-opened and re-read for every line of file1.
    for line2 in open("file2"):
        if line2.split(",")[0] == line:
            print(line2.strip())
            flag = 1
    if flag:
        flag = 0
    else:
        print("%s,,," % line)

output:

# ./test.py
1994,,,
1995, california, A3,B6
1996,,,
1997,,,
1998,,,
1999, vermont, A4,B9

Thanks a lot, ghostdog74. It works!
But it's really slow for large data files.
join handles large files much faster; it's just that it couldn't be made to work in this case.
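
The slowdown is because file2 gets re-opened and re-read once for every line of file1. A single pass with a dictionary keyed on the first field avoids that (just a sketch, assuming the same file1/file2 layout as above):

#!/usr/bin/python
# Sketch: load the small file into a dict keyed on its first field,
# then stream the index file and look each key up directly.
records = {}
for line in open("file2"):
    line = line.rstrip("\n")
    records[line.split(",")[0]] = line

for line in open("file1"):
    key = line.strip()
    if key in records:
        print(records[key])
    else:
        print("%s,,," % key)

That keeps the cost per index line roughly constant, which is essentially why join handles large files so well.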

#! /opt/third-party/bin/perl

use strict;
use warnings;

# Remember every line of "small", keyed by the line itself.
my %fileHash;
my $i = 0;

open(FILE, "<", "small") || die "Unable to open file small <$!>\n";

while(<FILE>) {
  chomp;
  $fileHash{$_} = $i++;
}

close(FILE);

open(FILE, "<", "index") || die "Unable to open file index <$!>\n";

while(<FILE>) {
  chomp;
  my $set = 0;
  # Look for a stored line that starts with this index value;
  # \Q...\E quotes any regex metacharacters in it.
  foreach my $v ( sort keys %fileHash ) {
    if ( $v =~ m/^\Q$_\E/ ) {
      print $v . "\n";
      $set = 1;
      last;
    }
  }
  print "$_,,,\n" if ( $set == 0 );
}

close(FILE);

exit 0;

This should be fast!

how about this:

 awk -F "," 'NR==FNR {
               # first file (file2): save everything after the key, indexed by $1
               a = ""
               for (i = 2; i <= NF; i++) a = a "," $i
               arr[$1] = a
               next
             }
             {
               # second file (file1): print the saved record, or empty fields
               s = ($1 in arr) ? $1 arr[$1] : $1 ",,,"
               print s
             }' file2 file1

output:

./test.sh
1994,,,
1995, california, A3,B6
1996,,,
1997,,,
1998,,,
1999, vermont, A4,B9