If you print it to the maximum precision a double really gives you, you will see that it's not really 1500 -- it's 1499.999999999999773. The .999999999999773 is discarded because converting a floating point value to int truncates toward zero; it does not round.
Just add 0.5 to the number before converting to integer to make it round the way you expect (for positive values, that is -- negative values would need -0.5 instead).
To expand a little on what Corona68 said... Although 1.12 is a nice terminating decimal value, the corresponding binary number is a non-terminating sequence (just like 1/3 in decimal arithmetic is a non-terminating .33333...).
For IEEE Std 754 double precision floating point, you get 15-17 significant digits for individual values. As you perform multiplications and divisions, the rounding errors add up. If you add or subtract small numbers to or from large numbers, the small numbers may disappear completely.
When you assign a double precision floating point value to an integer object, the result is truncated, not rounded; but you can do the rounding on your own. To see what happened, try this:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int t = 1680;
    double adj = 1.12;
    int ires = t / adj;
    double fres = t / adj;
    int rires = t / adj + .5;

    printf("adj =\t%.20f\n", adj);
    printf("fres =\t%.20f\n", fres);
    printf("ires =\t%i\n", ires);
    printf("rires =\t%i\n", rires);
    return 0;
}
instead of the earlier results shown when using type double.
Using double made the difference between the binary floating point representation of 1.12 and the exact decimal value affect the results. With different values, both float and double could give you results just above, just below, or exactly matching the decimal evaluation (and when the float and double values are inexact, one could be above and the other below the exact value). Casting a floating point value with (int) truncates in both C and Java. In C you can round using:
i = f + .5;
In Java you can round using:
i = Math.round(f);
or
i = (int)(f + .5);
It has been a while since I've written any Java code (and I haven't tested the following), but did you try something like:
class n1
{
    public static void main(String[] args)
    {
        int i = 1680;
        double d = 1.12d;
        int ires = (int)(i / d);
        double dres = i / d;
        System.out.printf("ires = %d, d = %.20f, dres = %.20f\n", ires, d, dres);
    }
}
Note that .printf() and .format() in Java and printf() in C round values when formatting; they don't truncate the way a conversion from a floating point type to an integral type does.
I did try your suggestion to experiment with float, but my results still show ires = 1499. See below:
#include <stdio.h>

int main(int argc, char *argv[])
{
    int t = 1680;
    float adj = 1.12;
    int ires = (int)(t / adj);
    float fres = t / adj;
    int rires = t / adj + 0.5;

    printf("adj =\t%.20f\n", adj);
    printf("fres =\t%.20f\n", fres);
    printf("ires =\t%i\n", ires);
    printf("rires =\t%i\n", rires);
    return 0;
}
cc -o c c.c && ./c
adj = 1.12000000476837158000
fres = 1500.00000000000000000000
ires = 1499
rires = 1500
Tried this on three systems I mentioned above.
Just out of curiosity - why doesn't C have a decimal(n,m) fixed-point data type? I think if I had fixed point available here I wouldn't see the problem I'm having.
which seems to match what you showed us from your Java code.
It is interesting that you're getting the line:
adj = 1.12000000476837158000
while I'm getting:
adj = 1.12000000476837158203
But, as I said before, as long as you're performing floating point arithmetic (float, double, or long double), any operation where one of the operands or the result of a division does not have an exact representation may produce a result that is slightly higher or slightly lower than the exact decimal calculation would yield. Any time you convert a floating point value to an integer you need to take care to make adjustments suitable for the results you want, given the limitations of floating point arithmetic. (Note that depending on the values you're expecting and the operations that have been performed, adding .5 before converting to int might or might not be appropriate.)
I would assume that C doesn't include a decimal data type because the hardware on which C was initially developed didn't include instructions to process binary coded decimal data (and a lot of hardware today doesn't either).
Have you looked for an open-source version of the dc utility source to see how it handles arbitrary precision arithmetic?
I don't have any experience with it, but have you looked at XBCD_Math?
I wonder if the Apple compiler is optimizing basic floating point operations into some SIMD/3dnow/etc instruction instead of using the old-fashioned x86 FPU. (There's some reason to avoid the FPU, these days it frees up some registers.) These instruction sets often sacrifice accuracy for speed.