I’m not sure if someone already reported this but after some searching I couldn’t find anything like it.
I noticed something strange with the CINT interpreter (this doesn’t happen if I compile the code).
I wanted to truncate a result of a calculation of doubles into an int. Consider the snippet as an example:
Double_t LogTanMin = -4.;
Double_t LogTanW = 0.01;
Double_t LogTan = -3.900;
Int_t x = (Int_t) ((LogTan - LogTanMin) / LogTanW);
The output of the previous code is:
which is wrong: (LogTan - LogTanMin)/LogTanW = (-3.900 - (-4.))/0.01 = 10, so x should be 10.
Moreover, if I assign the result to a Double_t first and then cast, I still get a wrong result; but if I instead store it in a Float_t and then cast that to Int_t, it comes out right.
Double_t LogTanMin = -4.;
Double_t LogTanW = 0.01;
Double_t LogTan = -3.900;
Float_t xtempf = (LogTan - LogTanMin) / LogTanW;
Int_t x = (Int_t) xtempf;
and the result comes out correct:
This only happens in the interpreter (ROOT 5.22/00), and it is probably known behaviour. I would have missed it entirely if I hadn't started getting the wrong samples from a histogram.
Thanks in advance,