How to use multiple algorithms with Minuit2?

I am trying to move from Minuit to Minuit2, following the example code listed at https://root.cern/doc/master/NumericalMinimization_8C.html. However, it is unclear to me how the ROOT::Math::Minimizer class can be used to call different algorithms after one another. For example, with a TMinuit object, I would want to call a minimization using the Migrad algorithm before then calling another minimization using the Minos algorithm in order to (1) improve the fit and (2) estimate parameter uncertainties. With ROOT::Math::Minimizer, it looks as if you have to create the Minimizer object with a specific algorithm “pre-set,” and then you call the Minimize() function to execute that specific algorithm. How can I use this new minimizer class to call several different algorithms from the same minimizer object? Is this possible?

Thank you in advance.

@moneta will most probably give you some hints

Hi,

First of all, Minos is not a minimisation algorithm but a method to compute parameter uncertainties, and the Minimizer class has a dedicated method for it, Minimizer::GetMinosError:
https://root.cern.ch/doc/v614/classROOT_1_1Math_1_1Minimizer.html#af5f4fa9a2b7773e128ecb0c655797f79

Then, if you want to call different minimisation algorithms of Minuit2, for example Simplex and then Migrad, it is true that you would need to create a new Minimizer object.
If requested we can probably add this functionality without creating a new object. However, a major difference compared to TMinuit is that creating a new Minimizer object is a small and fast operation: allocations happen on demand, whereas TMinuit allocates all the needed memory in its constructor.

Best Regards

Lorenzo

Hi @moneta, thank you very much for the reply.

Yes, you’re right, sorry, Minos isn’t itself a minimization algorithm. What I meant is that with the old TMinuit implementation, calling Minos very often discovers a new minimum (or several new minima) while the parameter uncertainties are being calculated, and I think Minuit then has some internal logic to call Migrad again to re-minimize before returning to Minos. When using the old TMinuit, calling Minos after a Migrad call almost always results in an improvement to my fit, and this is the behaviour that I want to replicate with Minuit2.

However, I’ve investigated further and it looks like Minuit2 hits the “better” fit right away, whereas the old TMinuit would need a Migrad call followed by a Minos call to get to the same fit result. Is Minuit2 better equipped to find other local minima? Has something changed there relative to TMinuit?

As for your strategy of initializing a new Minimizer object for every algorithm you want to use, are you saying that setting up a new Minimizer with initial variable values equal to the output of the previous Minimizer object is the way to go with this class in general?

Thank you.

Hi,
I have recently implemented in Minuit2 (in the class Minuit2Minimizer) the logic for re-calling Migrad and redoing the minimisation when a new minimum is found. Unfortunately I recently discovered an issue with this (see JIRA ROOT-10854) that will be fixed in the next few days.

Yes, this will be the procedure. Note that if you are using the ROOT::Fit::Fitter class, you will be able to switch algorithms more easily: the process of re-creating the Minimizer is handled internally by the Fitter class.

Best regards

Lorenzo

Thank you for this information!