Division of int by double

A simple arithmetic example: 1680 / 1.12 = 1500

My C code result is 1499; here is the code:

 
  
#include <stdio.h>

int main(int argc, char *argv[])
{
	int	t	= 1680;
	double	adj	= 1.12;
	int	ires	= t / adj;
	double	fres	= t / adj;

	printf("int result = %i, float res=%.6f\n", ires, fres);
	return(0);
}
 

I compiled and ran this code as

 
 cc -o c c.c && ./c
 int result = 1499, float res=1500.000000
 

I tried it on old SCO 5.07, Ubuntu 8.04 (32 bit), and Ubuntu 12 (64 bit) on different hardware; the result is the same: 1499 instead of 1500.

Is there any way to produce the correct 1500 result?
Thanks in advance.

If you print it to the maximum precision a double really gives you, you will see that it's not really 1500 -- it's 1499.999999999999773. The .999999999999773 is dropped, because converting a floating point number to int truncates toward zero.

Just add 0.5 to the number before converting to integer to make it round the way you expect.

int           ires    = (t / adj) + 0.5f;
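For example, as a complete (minimal, untested on your exact systems) version of that fix:

#include <stdio.h>

int main(void)
{
	int	t	= 1680;
	double	adj	= 1.12;

	/* Adding 0.5 before the truncating conversion rounds
	   positive results to the nearest integer. */
	int	ires	= (int)(t / adj + 0.5);

	printf("int result = %i\n", ires);	/* prints 1500 */
	return(0);
}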

To expand a little on what Corona68 said... Although 1.12 is a nice terminating decimal value, the corresponding binary number is a non-terminating sequence (just like 1/3 in decimal arithmetic is a non-terminating .33333...).

For IEEE Std 754 double precision floating point, you get 15-17 significant decimal digits for individual values. As you perform multiplications and divisions, the rounding errors add up. If you add or subtract small numbers to or from large numbers, the small numbers may disappear completely.
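For instance, here is a small demonstration of both effects (a sketch, not from the original program):

#include <stdio.h>

int main(void)
{
	/* 1.12 has no exact binary representation, just as 1/3
	   has no exact decimal one; enough digits reveal it. */
	double	d	= 1.12;
	double	sum	= 0.0;
	int	i;

	printf("1.12 stored as %.20f\n", d);

	/* Rounding errors accumulate: 0.01 added 100 times is
	   not exactly 1. */
	for (i = 0; i < 100; i++)
		sum += 0.01;
	printf("0.01 added 100 times = %.20f\n", sum);
	return(0);
}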

When you assign a double precision floating point value to an integer object, the result is truncated, not rounded; but you can do the rounding on your own. To see what happened, try this:

#include <stdio.h>
int main(int argc, char *argv[])
{
	int	t	= 1680;
	double	adj	= 1.12;
	int	ires	= t / adj;
	double	fres	= t / adj;
	int	rires	= t / adj + .5;

	printf("adj =\t%.20f\n", adj);
	printf("fres =\t%.20f\n", fres);
	printf("ires =\t%i\n", ires);
	printf("rires =\t%i\n", rires);
	return(0);
}

which produces:

adj =	1.12000000000000010658
fres =	1499.99999999999977262632
ires =	1499
rires =	1500

Thank you for your in-depth explanation.

I tried the same in Java, and to my surprise it behaves differently:

class n1
{
 public static void main(String[] args)
 {
  int i = 1680;
  float d = 1.12f;
  int ires = (int)(i / d);
  float dres = i / d;
  System.out.format("ires = %d, dres = %.15f\n", ires, dres);
 }
}

I ran it like this

 
 javac n1.java
 java n1
 ires = 1500, dres = 1500.000000000000000
 

Just a note - I could not write int ires = i / d; as Java complains about possible loss of precision, so I needed an explicit cast here.

I don't know a lot about Java; I hope somebody can explain the different behavior here.

If you take the C program I gave you before and change all of the "double"s to "float"s, you'll get the output:

adj =	1.12000000476837158203
fres =	1500.00000000000000000000
ires =	1500
rires =	1500

instead of the earlier results shown when using type double.

Using double made the difference between the binary floating point representation of 1.12 and the exact decimal value affect the results. With different values, both float and double could give you results just above, just below, or an exact match to the decimal evaluation (and when the float and double values are inexact, one could be above and the other could be below the exact value). Typecasting a floating point value using (int) in both C and Java truncates. In C you can round using:
i = f + .5;
In Java you can round using:

i = Math.round(f);
      or
i = (int)(f + .5);
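If your compiler supports C99, <math.h> also provides round() and lround(), which round halfway cases away from zero and, unlike adding .5, behave sensibly for negative values. A sketch (remember -lm when linking on most Unix systems):

#include <math.h>
#include <stdio.h>

int main(void)
{
	int	t	= 1680;
	double	adj	= 1.12;

	/* lround() returns the nearest long, halfway cases
	   rounded away from zero. */
	long	ires	= lround(t / adj);

	printf("ires = %ld\n", ires);	/* prints 1500 */
	return(0);
}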

It has been a while since I've written any Java code (and I haven't tested the following), but did you try something like:

class n1
{
 public static void main(String[] args)
 {
  int i = 1680;
  double d = 1.12d;
  int ires = (int)(i / d);
  double dres = i / d;
  System.out.printf("ires = %d, d = %.20f, dres = %.20f\n", ires, d, dres);
 }
}

Note that using .printf() and using .format() in Java and using printf() in C round values; they don't truncate like a conversion from a floating point type to an integral type.
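A short (untested here) illustration of that difference:

#include <stdio.h>

int main(void)
{
	double	fres	= 1499.99999999999977;

	/* printf() rounds to the requested precision... */
	printf("printf rounds:        %.0f\n", fres);	/* 1500 */
	/* ...while conversion to an integer type truncates. */
	printf("conversion truncates: %d\n", (int)fres);	/* 1499 */
	return(0);
}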

Thank you for the explanation.

I did try your suggestion to experiment with float, but my results still show ires = 1499

See below:

#include <stdio.h>

int main(int argc, char *argv[])
{
	int	t	= 1680;
	float	adj	= 1.12;
	int	ires	= (int)(t / adj);
	float	fres	= t / adj;
	int	rires	= t / adj + 0.5;

	printf("adj =\t%.20f\n", adj);
	printf("fres =\t%.20f\n", fres);
	printf("ires =\t%i\n", ires);
	printf("rires =\t%i\n", rires);
	return(0);
}

 cc -o c c.c && ./c
adj =   1.12000000476837158000
fres =  1500.00000000000000000000
ires =  1499
rires = 1500
 

Tried this on three systems I mentioned above.

Just out of curiosity - why does C not have a decimal(n,m) fixed-point data type? I think if I had fixed point available here I wouldn't be seeing the problem I am having.

With the code:

#include <stdio.h>
int main(int argc, char *argv[])
{
	int	t	= 1680;
	float	adj	= 1.12;
	int	ires	= (int)(t / adj);
	float	fres	= t / adj;
	int	rires	= (int)(t / adj + .5);

	printf("adj =\t%.20f\n", adj);
	printf("fres =\t%.20f\n", fres);
	printf("ires =\t%i\n", ires);
	printf("rires =\t%i\n", rires);
	return(0);
}

with or without the explicit (int) casts, on Mac OS X 10.9.4, I get the output:

adj =	1.12000000476837158203
fres =	1500.00000000000000000000
ires =	1500
rires =	1500

which seems to match what you showed us from your Java code.

It is interesting that you're getting the line:

adj =   1.12000000476837158000

while I'm getting:

adj =	1.12000000476837158203

But, as I said before, as long as you're performing floating point arithmetic (float, double, or long double), any operation where one of the operands or the result does not have an exact representation may produce a result that is slightly higher or slightly lower than the exact decimal calculation would yield. Any time you convert a floating point value to an integer you need to take care to make adjustments suitable for the results you want, given the limitations of floating point arithmetic. (Note that depending on what values you're expecting and the operations that have been performed, adding .5 before converting to int might or might not be appropriate.)
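For example (a contrived value, just to show where adding .5 breaks down):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double	f	= -0.6;

	/* Truncation toward zero turns -0.1 into 0, not -1. */
	printf("(int)(f + .5) = %d\n", (int)(f + .5));	/* 0 */
	printf("round(f)      = %.0f\n", round(f));	/* -1 */
	return(0);
}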

I would assume that C doesn't include a decimal data type because the hardware on which C was initially developed didn't include instructions to process binary coded decimal data (and a lot of hardware today doesn't either).
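In the meantime, the usual workaround is to scale to integers yourself. A sketch, assuming two decimal places are enough for your data:

#include <stdio.h>

int main(void)
{
	/* Work in hundredths: 1.12 becomes the integer 112. */
	int	t	= 1680;
	int	adj	= 112;

	/* (1680 * 100) / 112 == 1500 exactly; the arithmetic is
	   exact as long as the scaled values fit in an int. */
	int	result	= t * 100 / adj;

	printf("result = %d\n", result);	/* prints 1500 */
	return(0);
}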

Have you looked at the source of an open-source version of the dc utility to see how it handles arbitrary precision arithmetic?

I don't have any experience with it, but have you looked at XBCD_Math?

The dc code is good insight, thanks.

I wonder if the Apple compiler is optimizing basic floating point operations into SIMD/3DNow!/etc. instructions instead of using the old-fashioned x86 FPU. (There's some reason to avoid the FPU these days: it frees up registers.) These instruction sets often sacrifice accuracy for speed.