Hi,
while working with the TMVA::RReader object I faced the problem of the method AddVariable only accepting Float_t variables (or Int_t), but not Double_t. This was already brought up quite a long time ago, and I see that no progress has been made so far. While it is true that one can work around this by simply casting the needed variables, the cast can cause unexpected behaviour due to rounding errors, so I wonder if it is possible to provide handling of Double_t variables.
Thanks in advance,
Andrea
Hi,
TMVA (and therefore also the Reader and RReader) accepts single-precision floats as input data, given that the majority of machine learning algorithms work in single precision and the input data carry uncertainties or noise that are in most cases larger than the single-precision rounding error. If you have a real use case where you need double precision, I would be interested to know about it.
Best regards
Lorenzo
Hi @moneta,
thanks for the answer, I understand your point. I ran into the problem while evaluating a TMVA classifier on data from a TTree that I did not create and that already has Double_t branches, so the only way to use them was to cast to Float_t.
Being able to pass Double_t directly to TMVA::RReader::AddVariable would be very useful, but since casting is a valid workaround I understand if this isn't top priority.
Thanks again,
Andrea