Sum of a column in multiple files

I am performing the following operation on a file that looks like this:

1000 0 10 479.0 1115478.07497 0.0 0.0 0.0872665
1000 10 20 1500.0 3470012.29304 0.0 0.0 0.261799
1000 20 30 2442.0 5676346.87758 0.0 0.0 0.436332
1000 30 40 3378.0 7737905.30957 0.0 0.0 0.610865
1000 40 50 4131.0 9315890.45893 0.0 0.0 0.785398
1000 50 60 4698.0 10500297.8359 0.0 0.0 0.959931
1000 60 70 5195.0 11546480.8434 0.0 0.0 1.13446
1000 70 80 5515.0 12425333.9733 0.0 0.0 1.309
1000 80 90 5709.0 13188131.9754 0.0 0.0 1.48353
1000 90 100 5709.0 13188131.9754 0.0 0.0 1.65806

If I do

awk '{sum+=(($4/74920)/0.174533)/(sin($8))} END  {print sum}' 1000.dat

I get a single value for this file. The problem is that I would like to perform this operation on several files and collect the results as a single column, one sum per file. Does anyone have an idea how to do this for up to 200 files with awk?

If you just want a list of the sums, try:

awk 'FNR == 1 && NR > 1 {
        print sum
        sum = 0
}       
        {sum+=(($4/74920)/0.174533)/(sin($8))}
END     {print sum}' *.dat

If you want the filename printed before the sum for each file, try:

awk 'FNR == 1 && NR > 1 {
        print sum
        sum = 0
}
FNR == 1 {
        printf("%s:", FILENAME)
}
        {sum+=(($4/74920)/0.174533)/(sin($8))}
END     {print sum}' *.dat
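Another way to get filename-plus-sum output is to key the running totals by FILENAME in a single awk pass and print them all in the END block. One caveat: awk's for (f in sum) visits keys in an unspecified order, so the lines may not come out in command-line file order. A sketch, using made-up two-line sample files in the same 8-column format (the filenames a.dat and b.dat are just placeholders):

```shell
# Hypothetical sample data, same 8-column layout as the question:
printf '1000 0 10 479.0 0 0 0 0.0872665\n' > a.dat
printf '1000 0 10 1500.0 0 0 0 0.261799\n' > b.dat

# One pass over all files: accumulate per-file sums keyed by FILENAME.
# Note: "for (f in sum)" iterates in an unspecified order.
awk '{ sum[FILENAME] += (($4/74920)/0.174533)/sin($8) }
     END { for (f in sum) print f, sum[f] }' a.dat b.dat
```

With real data you would replace the explicit file list with *.dat.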

Or maybe:

$ cat temp.sh
for file in *.dat; do
  awk '{sum+=(($4/74920)/0.174533)/(sin($8))} END {print sum}' "$file"
done

If *.dat expands to more files than fit on one command line (the shell's "argument list too long" limit), there are ways to deal with that.
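For example, letting find hand the files to awk one at a time sidesteps the argument-list limit entirely, since the shell never expands *.dat itself. The cost is one awk process per file, and the output comes in find's traversal order, not sorted. (-maxdepth is a GNU/BSD find extension that keeps the search in the current directory; drop it if you want to recurse, or if your find lacks it.)

```shell
# Avoids "argument list too long": find invokes awk once per .dat file.
find . -maxdepth 1 -name '*.dat' -exec \
  awk '{sum+=(($4/74920)/0.174533)/(sin($8))} END {print sum}' {} \;
```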