Casting to int - something is strange!

Hello!

I have found something strange when trying to calculate bin limits for a histogram. To calculate the number of bins between two limits, I take the difference, divide it by the step size from my input, and convert the result to an integer, since that is what the histogram wants in its declaration. However, I don’t agree with the output I get: as a double the example below prints 19, but casting it to an integer gives 18. Could someone explain this?

#include <iostream>
#include <iomanip>

using namespace std;

void test(){

  double sMin=0.1;
  double sMax=2.0;
  double step=0.1;

  cout<<"sMax before = "<< sMax<<endl;
  cout<<"sMin before = "<< sMin<<endl;
  cout<<"step before = "<< step<<endl;

  // Casting to int truncates toward zero.
  cout<<"(int)((sMax-sMin)/step) = "<< (int) ((sMax-sMin)/step)<<endl;

  // Default ostream precision (6 significant digits).
  cout<<"((sMax-sMin)/step) = "<<((sMax-sMin)/step)<<endl;

  cout<<"((sMax-sMin)/step) (WITH PRECISION 9) = "<<setprecision(9)<<((sMax-sMin)/step)<<endl;

  cout<<"sMax after = "<< sMax<<endl;
  cout<<"sMin after = "<< sMin<<endl;
  cout<<"step after = "<< step<<endl;

}

Note that the result is the same whether you run it in CINT or compile the code.

This is an issue of expected numerical error.

For example, in your case (sMax-sMin) cannot be represented as exactly 1.9; the nearest representable double is approximately

1.8999999999999999

hence (sMax-sMin)/step is not exactly 19 but approximately

18.999999999999996
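This is not in your original snippet, but a minimal way to see the stored values for yourself is to print them with max_digits10 significant digits, which is enough for a double to round-trip exactly (assumes C++11; the function name showExactValues is only illustrative):

#include <iostream>
#include <iomanip>
#include <limits>

// Print the intermediate values with enough digits to show exactly
// what is stored in the doubles.
void showExactValues(){

  double sMin=0.1;
  double sMax=2.0;
  double step=0.1;

  std::cout << std::setprecision(std::numeric_limits<double>::max_digits10);
  std::cout << "sMax-sMin        = " << (sMax-sMin)      << std::endl; // ~1.8999999999999999
  std::cout << "(sMax-sMin)/step = " << (sMax-sMin)/step << std::endl; // ~18.999999999999996
}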

And the difference you notice is due to the different rounding rules: converting to an int keeps only the integral part (truncation), whereas printing via ostream rounds to the requested precision and so takes into consideration the fact that 1.8999999 ‘means’ 1.9.
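If what you want is the nearest integer number of bins, one common workaround (a sketch, not from the original post, assuming C++11 and that rounding to the nearest integer is acceptable for your histogram; nBins is a hypothetical helper, not ROOT API) is to round instead of truncating:

#include <cmath>

// Number of bins between sMin and sMax for a given step, rounded to the
// nearest integer so that 18.999999999999996 becomes 19 rather than 18.
int nBins(double sMin, double sMax, double step){
  return static_cast<int>(std::lround((sMax - sMin)/step));
}

With the values above, nBins(0.1, 2.0, 0.1) yields 19.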

You may want to re-read carefully your favorite book on how to deal with floating-point arithmetic.

Cheers,
Philippe.