Subtract field values

I've got a long logfile of the form

network1:123:45:6789:01:234:56
network2:12:34:556:778:900:12
network3:...

I've got a similar logfile from a week later, with different values in each of the fields, e.g.

network1:130:50:6800:10:334:66
network2:18:40:600:800:999:20
network3:...

How do I script it so that, for each network, the corresponding fields are subtracted from each other, giving the output

network1:7:5:11:9:100:10
network2:6:6:44:22:99:8
network3:...

I'm guessing I should use Perl and split on ":", but I'm not sure how I'd match the network names between the two log files, or how I'd get it to loop over all the networks listed in the logfile

Try this:

awk '{
  # read the matching line from the old log and split both records on ":"
  getline s < "file1"; split(s, a, ":")
  n = split($0, b, ":")
  out = b[1]
  for (i=2; i<=n; i++) {
    out = out ":" (b[i] - a[i])
  }
  print out
}' file2

Seems to work... it gets a bit confused when the network names are not in the same order though. Must check why that is in the logfiles. Should be possible to do a sort on the network name.
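Following up on that: sorting both logs on the network name first makes the line-by-line pairing safe. A minimal sketch using the sample data from the thread (the .sorted file names are just placeholders; the second file is deliberately shuffled to show the sort fixing it):

```shell
# Recreate the sample logs from the thread; file2 has network2 first
printf 'network1:123:45:6789:01:234:56\nnetwork2:12:34:556:778:900:12\n' > file1
printf 'network2:18:40:600:800:999:20\nnetwork1:130:50:6800:10:334:66\n' > file2

# Sort both on the network name so line N of each file is the same network
sort file1 > file1.sorted
sort file2 > file2.sorted

# Same approach as above: read the matching old line, subtract field by field
awk '{
  getline s < "file1.sorted"; split(s, a, ":")
  n = split($0, b, ":")
  out = b[1]
  for (i = 2; i <= n; i++) out = out ":" (b[i] - a[i])
  print out
}' file2.sorted
```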

Another solution, which doesn't depend on the order of the networks in the two files:

awk '
BEGIN { FS = OFS = ":" }
NR==FNR {                     # first file: store the old values per network
   for (i=2; i<=NF; i++) old[$1,i] = $i;
   next;
}
{                             # second file: subtract and print
   out = $1;
   for (i=2; i<=NF; i++) out = out OFS ($i-old[$1,i]);
   print out;
}
' net_old.dat net_new.dat
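To show the order-independence, here is a quick run on the thread's sample data with the new file deliberately shuffled (file names and contents taken from above):

```shell
# Old and new logs; note network2 comes first in the new file
printf 'network1:123:45:6789:01:234:56\nnetwork2:12:34:556:778:900:12\n' > net_old.dat
printf 'network2:18:40:600:800:999:20\nnetwork1:130:50:6800:10:334:66\n' > net_new.dat

# First pass stores the old values keyed by (network, field index);
# second pass subtracts, so line order no longer matters
awk '
BEGIN { FS = OFS = ":" }
NR==FNR { for (i=2; i<=NF; i++) old[$1,i] = $i; next }
{ out = $1; for (i=2; i<=NF; i++) out = out OFS ($i - old[$1,i]); print out }
' net_old.dat net_new.dat
# prints the diffs in the new file's order:
#   network2:6:6:44:22:99:8
#   network1:7:5:11:9:100:10
```

One side effect worth knowing: a network present only in the new file is subtracted against empty (i.e. zero) old values, so its raw new values come through unchanged.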

Jean-Pierre.

Or pair everything up with sort and paste (note: this relies on each network's old line sorting before its new one):

cat f1 f2 | sort | paste -d: - - | awk -F":" '{
  cnt = NF/2
  printf "%s", $1
  for (i=2; i<=cnt; i++) {
    printf ":%d", $(i+cnt) - $i
  }
  print ""
}'

Another one:

paste -d: <(sort file1) <(sort file2) | awk -F: -v OFS=: '{ print $1, $9-$2, $10-$3, $11-$4, $12-$5, $13-$6, $14-$7 }'
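For reference, an end-to-end run of the paste-based pairing on the sample data, forcing ':' as the paste delimiter so every field lines up under one uniform separator (the temp file names are mine):

```shell
# Sample logs from the thread
printf 'network1:123:45:6789:01:234:56\nnetwork2:12:34:556:778:900:12\n' > file1
printf 'network1:130:50:6800:10:334:66\nnetwork2:18:40:600:800:999:20\n' > file2

# Sort each log, then join matching lines with ':' so fields 2-7 hold the
# old values and fields 9-14 the new ones ($8 repeats the network name)
sort file1 > file1.s
sort file2 > file2.s
paste -d: file1.s file2.s |
awk -F: -v OFS=: '{ print $1, $9-$2, $10-$3, $11-$4, $12-$5, $13-$6, $14-$7 }'
# prints:
#   network1:7:5:11:9:100:10
#   network2:6:6:44:22:99:8
```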