Good afternoon,
I have two questions concerning the loss functions used by the TMVA Gradient Boosted Decision Tree (BDTG) method.
- What is the default loss function used? This post here pretty clearly states it is cross-entropy. However, page 68 of the TMVA Users Guide states that the default for all TMVA implementations of GradientBoost is the binomial log-likelihood loss. Thank you in advance for the clarification!
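For what it's worth, the two sources may not actually disagree: with the label mapping y = 2t - 1, the binomial log-likelihood loss of Friedman's gradient boosting paper, L(y, F) = ln(1 + exp(-2yF)), looks identical to the usual cross-entropy with p = sigmoid(2F). A quick numeric check of my understanding (plain Python, all names mine):

```python
import math

def binomial_log_likelihood(y, F):
    """Binomial log-likelihood loss as in Friedman's gradient boosting
    paper: L(y, F) = ln(1 + exp(-2*y*F)), with labels y in {-1, +1}."""
    return math.log(1.0 + math.exp(-2.0 * y * F))

def cross_entropy(t, F):
    """Cross-entropy loss with labels t in {0, 1} and p = sigmoid(2F)."""
    p = 1.0 / (1.0 + math.exp(-2.0 * F))
    return -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))

# The two losses agree once the labels are mapped y = 2t - 1:
for F in (-1.5, -0.2, 0.0, 0.7, 2.3):
    for y in (-1, 1):
        t = (y + 1) // 2
        assert abs(binomial_log_likelihood(y, F) - cross_entropy(t, F)) < 1e-12
print("binomial log-likelihood == cross-entropy (up to label convention)")
```

So my first question may reduce to whether the documentation and that post are just using two names for the same function.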
- Is it possible to define a custom loss function for use in BDTG training? This presentation suggests on slide 20 that it is possible for regression problems. If so, pointing me towards a practical example of how to do so would be greatly appreciated. For context, I require the use of a modified cross-entropy loss function that would still be differentiable.
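To make the second question concrete: my understanding is that gradient boosting needs the derivative of the loss with respect to the current model output F in order to form the pseudo-residuals each new tree is fit to, which is why the modified loss must stay differentiable. A minimal sketch of what I would need to supply, using plain cross-entropy as a hypothetical stand-in for my modified version (Python, all names mine):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def loss(t, F):
    # Plain cross-entropy with p = sigmoid(F); a stand-in for the
    # modified loss I have in mind (hypothetical placeholder).
    p = sigmoid(F)
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

def grad(t, F):
    # Analytic derivative dL/dF = sigmoid(F) - t; gradient boosting
    # fits each new tree to the negative of this quantity.
    return sigmoid(F) - t

# Sanity-check the analytic gradient against a central finite difference:
eps = 1e-6
for t in (0, 1):
    for F in (-2.0, -0.3, 0.5, 1.8):
        fd = (loss(t, F + eps) - loss(t, F - eps)) / (2 * eps)
        assert abs(fd - grad(t, F)) < 1e-6
print("analytic gradient matches finite difference")
```

So what I am really asking is where, in the TMVA code or options, a loss/gradient pair like this could be plugged in.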
Thanks again,
Matt