Changes in default Minuit2 behavior between ROOT 5.28 / 5.30

Hi, all,

I’m using the Minuit2 minimizer (ROOT::Minuit2::Minuit2Minimizer) in the statistics framework theta.

The main usage is the minimization of the negative log-likelihood function.

I observed “strange” behavior (i.e., physically nonsensical results) when using ROOT 5.30, whereas results with earlier versions (5.27/5.28) were OK. It seems to be connected to the default value of the “precision” parameter, which was 1e-6 in the older versions of ROOT and is now 1e-2. Also, from the release notes, it seems that this value is now rescaled internally by a different factor.

What value of the precision parameter should I use if I want the same behaviour as in ROOT 5.28? So far, I have not found any value that works reasonably well under as many circumstances as the value of 1e-6 did before.



As far as I remember, there were no changes in precision between 5.28 and 5.30. There were some changes in a factor applied to the tolerance, but those should be minor, plus some bug fixes. I would need to reproduce your minimization to understand the problems. It would be useful to see the verbose log output of the minimization in the two cases.
Could you please send those logs?



Dear Lorenzo,

thanks for your answer. I think I was not very specific. What seems to have changed is actually the “Tolerance” parameter, not the “Precision” parameter. To reproduce, use the following program:

#include <iostream>
#include "Minuit2/Minuit2Minimizer.h"

using namespace std;

int main(){
   ROOT::Minuit2::Minuit2Minimizer min;
   cout << min.Tolerance() << endl;
   return 0;
}
I compiled it twice: once using CMSSW_4_2_9 (which uses ROOT 5.27/06b), and once using CMSSW500 (which uses ROOT 5.30/02).

I also executed it using ROOT 5.28/00a (an installation built from the ROOT SVN on my desktop).

The ROOT 5.27 version yields 1e-6, 5.28 gives 0.001, and 5.30/02 gives me 0.01.




Yes, the default has been changed to 0.01 in 5.30, because the previous value was found to be too small for the majority of cases. The tolerance is internally rescaled by a factor of 0.002, so in reality one has a tolerance of 2E-5. This is sufficient in the majority of cases. For fitting, the scale is typically set by the parameter uncertainty (sigma, of order 1). A tolerance of order 10^-5 guarantees that the error in the location of the minimum is much smaller than the statistical error on the parameters.
However, if you feel this value is too big for theta, you can easily change the default by doing before the minimization:

ROOT::Math::MinimizerOptions::SetDefaultTolerance(newValue);

Setting newValue = 1.E-7 should be consistent with what you had in 5.28. There the default was 1.E-6, and the rescaling factor was 2.E-4 instead of 2E-3.
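As a complete sketch (the header path "Math/MinimizerOptions.h" is assumed from the standard ROOT layout), the default can be changed once, before any minimizer is constructed:

```cpp
#include <iostream>
#include "Math/MinimizerOptions.h"
#include "Minuit2/Minuit2Minimizer.h"

int main() {
    // With the 2E-3 internal rescaling in ROOT 5.30, a tolerance of
    // 1.E-7 gives an effective tolerance of 2E-10, matching the old
    // 1.E-6 default with its 2E-4 factor.
    ROOT::Math::MinimizerOptions::SetDefaultTolerance(1.E-7);

    // Minimizers created afterwards pick up the new default.
    ROOT::Minuit2::Minuit2Minimizer min;
    std::cout << min.Tolerance() << std::endl;
    return 0;
}
```

Note that this changes the global default; a tolerance set explicitly on an individual minimizer (via SetTolerance) still takes precedence for that instance.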

Are you sure this is what is causing the difference? Did you notice that your function converges to a different minimum with the larger tolerance?
I would be interested in understanding the problems you are having, to study them in more detail.