Dear ROOT experts,
We have found large performance differences between ROOT 6.26 and ROOT 6.30 when running our code. We perform a fit to the Z boson resonance in which we simultaneously fit a signal and a background template using a RooWorkspace. We fit the model in three different invariant-mass ranges:
[60, 120] GeV
[80, 100] GeV
[75, 105] GeV
To do so, we adjust the fit range with the following commands in pyROOT:
# Update the variable's default range, then (re)define the named range used in the fit
self._w.var(self._fitVar).setRange(self._fitRangeMin, self._fitRangeMax)
self._w.var(self._fitVar).setRange('fitRange', self._fitRangeMin, self._fitRangeMax)

resPass = pdfPass.fitTo(self._w.data(hPassName),
                        ROOT.RooFit.Minimizer("Minuit", "minimize"),
                        ROOT.RooFit.Optimize(1),
                        ROOT.RooFit.Save(),
                        ROOT.RooFit.Range("fitRange"),
                        ROOT.RooFit.Minos(True),
                        )
where pdfPass is our Monte Carlo template convolved with a Gaussian, plus an exponential background model. We recently migrated from ROOT 6.26 to ROOT 6.30 without changing the code. The fit converges to similar results in both versions, but the time it takes to run has increased dramatically in ROOT 6.30.
These are the times that we get running the same fit in two versions of ROOT:
ROOT 6.26: 43 seconds
ROOT 6.30: More than 30 minutes
We are not running a complex model, so we expected a fast minimization. After some debugging, we found that the slowdown is caused by the use of a named Range: when we drop the ROOT.RooFit.Range("fitRange") option from the fitTo call, we recover the ROOT 6.26 performance.
We would appreciate the help of the ROOT experts on this issue.
Thank you in advance for your help,
Sergio Blanco