You could also try the following...
Note that you say that your files are <tab> delimited, but all of the sample data you have shown us uses one or two <space> characters to separate fields, not a <tab> character.
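If you want to verify which separator your files actually use, one quick check is to display the control characters; for example, od -c shows a real <tab> as \t and a <space> as an ordinary blank (the sample line below is illustrative, not taken from your files):

```shell
# A literal <tab> appears as \t in od -c output; spaces appear as blanks.
printf 'Name\t9/1 9/2\n' | od -c
```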
The following will work with input files with fields separated by one or more blanks (where a blank is a <space> or a <tab>). This will not work if you have input files that really do use a <tab> as a field separator and some of your field data contains a <space>.

It will work with any number of input files and with any number of fields in a line (as long as all files have the same number of fields). The output header is taken from the 1st line of the 1st input file; the first line of every other input file is ignored. If the name on a data line is the same as the name on the header line in the 1st input file, that data will be merged into the output header line (i.e., it is assumed that the name used in the header in the 1st input file is not used as a name on any non-header line in any of the input files). Output fields will be separated by a <tab> character.

The names of the files to be processed are not built into this script; they must be supplied as command-line arguments to the following script:
#!/bin/ksh
awk '	# Use the awk utility to interpret the following script...
BEGIN {	# Set output field separator.
	OFS = "\t"
}
NR == 1 || FNR > 1 {
	# Gather data from the 1st line in the 1st file (the header is supposed
	# to be the same in all input files) and from the 2nd line on in every
	# input file...
	# If we have not seen the name found in the first field before...
	if(!($1 in name)) {
		# Add the 1st field to the list of known names, increment the
		# number of names we have seen, and note the output line number
		# where this name should appear in the output...
		name[order[++nc] = $1]
		# and initialize the data for each output field for this name
		# from the corresponding input fields on this line.
		for(i = 2; i <= NF; i++)
			d[$1, i] = $i
	} else	# And if we have seen this name before, add data to be output
		# for this name to the accumulated data we have seen before for
		# this name.
		for(i = 2; i <= NF; i++)
			d[$1, i] = d[$1, i] "/" $i
}
END {	# Now that we have hit EOF on the last input file, print the accumulated
	# output. For each name seen...
	for(i = 1; i <= nc; i++) {
		# Print the name...
		printf("%s", order[i])
		# and for the remaining fields...
		for(j = 2; j <= NF; j++)
			# print the output field separator followed by the
			# accumulated data for this name and field number.
			printf("%s%s", OFS, d[order[i], j])
		# and after the last field has been printed, add an output
		# record separator.
		print ""
	}
}' "$@"	# Terminate the awk script and use the command-line arguments as the
	# list of files to be processed.
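If your files really do use a <tab> as the field separator (with <space> characters allowed inside field data), a variant of the above could set FS explicitly in the BEGIN block instead of relying on the default splitting. A minimal sketch of the difference (the sample line is hypothetical):

```shell
# With the default FS, awk splits on runs of blanks (<space> or <tab>);
# with FS = "\t" it splits on <tab> only, so spaces stay inside fields.
printf 'first name\t42\n' | awk '{ print NF }'                       # prints 3
printf 'first name\t42\n' | awk 'BEGIN { FS = "\t" } { print NF }'   # prints 2
```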
This was written and tested using a Korn shell, but will work with any shell that uses Bourne shell syntax. If you save this script in a file named merger
and make it executable:
chmod +x merger
and execute it with the pathnames of your sample input files:
./merger a.txt b.txt c.txt
it produces the output:
Name 9/1 9/2
X 1/13/25 7/19/31
y 2/14/26 8/20/32
z 3/15/27 9/21/33
a 4/16/28 10/22/34
b 5/17/29 11/23/35
c 6/18/30 12/24/36
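For reference, input files consistent with that output would look like the following (reconstructed here from the merged values above, since the actual sample files are not shown in this answer). Recreating them and running the same awk script inline reproduces the table:

```shell
# Reconstructed sample inputs (assumed; derived from the merged output above).
cat > a.txt <<'EOF'
Name 9/1 9/2
X 1 7
y 2 8
z 3 9
a 4 10
b 5 11
c 6 12
EOF
cat > b.txt <<'EOF'
Name 9/1 9/2
X 13 19
y 14 20
z 15 21
a 16 22
b 17 23
c 18 24
EOF
cat > c.txt <<'EOF'
Name 9/1 9/2
X 25 31
y 26 32
z 27 33
a 28 34
b 29 35
c 30 36
EOF
# The same merging logic as the script above, condensed into one command.
out=$(awk 'BEGIN { OFS = "\t" }
NR == 1 || FNR > 1 {
	if(!($1 in name)) {
		name[$1]++; order[++nc] = $1
		for(i = 2; i <= NF; i++) d[$1, i] = $i
	} else
		for(i = 2; i <= NF; i++) d[$1, i] = d[$1, i] "/" $i
}
END {	for(i = 1; i <= nc; i++) {
		printf("%s", order[i])
		for(j = 2; j <= NF; j++) printf("%s%s", OFS, d[order[i], j])
		print ""
	}
}' a.txt b.txt c.txt)
printf '%s\n' "$out"
```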
Note that the last line of the output you said you wanted was:
c 6/16/30 12/24/36
which, in addition to using <space> as a field separator instead of <tab>, has 16
as the data from the 2nd column of the last line in b.txt
instead of the value 18
that was contained in that field in your sample input file.