What is the initial step length of Minuit2?

Dear experts

I wonder what the initial step length of a parameter is. When I set the step of a parameter to 1 with fFit->SetVariable(0, "lambda", lambda, 1.);, I found that the first step length is not 1 but roughly 0.01. Why is that?

Hello @Crisps,

Reading a bit of the code, I would say that Minuit2 interprets the last argument as the parameter error; at least the authors refer to that argument with that word.
I also added printouts to the NumericalMinimization tutorial, and can confirm that the first gradient computation happens with 1/100 of the parameter error, but the steps afterwards seem to be the ones I asked for.

To compute the initial gradient, it is reasonable for Minuit2 not to use the full error as a step size, but a fraction of it, to “carefully” start the fit. Here is how it computes the first gradient when I ask for step sizes [0.2, 0.2]:

-1 1.2
-0.998 1.2
-1.002 1.2
-0.9998 1.2
-1.0002 1.2
-1 1.202
-1 1.198

Afterwards, when it takes the first step, it does seem to actually step by 0.2 for the second parameter:

-1.10526 1

Note that the gradient is roughly [2, 1], so the step for the first parameter is only half the step size of the second. After that, the error estimation for the parameters happens internally anyway, so the initial step sizes will be adapted to the behaviour of the function.

Is that enough to understand your case or should we call in a Minuit expert?

Thank you for your explanation. In my case, the initial value sits near a local minimum, so I need the minimizer to search more widely to avoid it. Should I enlarge the step, or is there a better way?

I also attach a plot of a scan along the parameter; you can see there is a (not so deep, but wide) local minimum near 0.

For local minima like the one you posted, I doubt that changing the step size will quickly and reliably get you to the global minimum. The gradient will be sampled in the immediate vicinity of 0, and the minimizer would then take a step of roughly the size you asked for. But after that it would quickly fall into the local minimum.

In this case, it’s probably better to sample the function, and choose good starting values. You could even use something like -6 to make it go down the steep flank on the left.

Yes, choosing a good starting value is always a solution, but in my case the locations of the local minimum and the global minimum are just random…there is no way to get even a rough initial value, so the only thing I can do is start from 0 :frowning: What should I do to avoid the local minimum in such a case?

It’s difficult …
You can let it run into the minimum, and write code that “manually” scans the parameters. If you find a lower value, you restart the minimisation from there.
