While working with the TMVA::RReader object I ran into the problem that the method AddVariable only accepts Float_t variables (or Int_t), but not Double_t. This was already brought up quite a long time ago, and I see that no progress has been made since. While it is true that one can work around this by simply casting the needed variables, the cast can cause unexpected behaviour due to rounding errors, so I wonder whether it would be possible to support Double_t variables as well.
Thanks in advance,
TMVA (and therefore also the Reader and RReader) accepts Float_t as input data, given that the majority of machine learning algorithms work with single precision, and the input data contain uncertainties or noise that are in most cases larger than the single-precision floating-point error. If you have a real use case where you need double precision, I would be interested to hear about it.
Thanks for the answer, I understand your point. The case in which I encountered the problem is evaluating a TMVA classifier on data coming from a TTree that I did not create, which already has Double_t branches, so the only way to use them was casting to Float_t. Having the chance to use Double_t directly in TMVA::RReader::AddVariable would be very useful, but since casting is a valid workaround I understand if this isn't top priority.