# Iterations and precision for Minuit / Migrad

Hi everyone,

I am trying to fit a TGraph with a simple cosine function. For that, I first define

```cpp
TF1 *Cosine = new TF1("Cosine", "[0]*cos([1]*x + [2])", 0., tmax);
```

and then

```cpp
Cosine->SetParameters(1., alpha, 0.);
gr->Fit("Cosine", "R");
```

The fit is very good, but I want even more precision. I have looked for a solution, but nothing seems to work. For a start, I would like to increase the number of iterations and the precision. One of the things I tried is

```cpp
gMinuit->SetMaxIterations(10000);
```

but although the code runs, it does not affect the fit at all. What can I do?

Thanks.

ROOT Version: 6.12/04

Hi,

What do you mean exactly by "more precision"?

Increasing the maximum number of iterations is only useful when you want to stop earlier, or when the default number is not sufficient for convergence. If your fit already converges using, let's say, 200 function calls, setting the maximum to 10000 will not do anything.
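If the goal is a tighter minimum rather than more iterations, one knob that does matter is the minimiser tolerance. A minimal sketch, assuming a ROOT 6 macro environment and using the `ROOT::Math::MinimizerOptions` static setters (the function and the variables `gr`, `tmax`, `alpha` mirror the original post):

```cpp
// Sketch: tighten the default minimiser settings before calling Fit().
#include "TF1.h"
#include "TGraph.h"
#include "Math/MinimizerOptions.h"

void fit_cosine(TGraph *gr, double tmax, double alpha) {
   // Raise the cap on function calls (only matters if the default is hit).
   ROOT::Math::MinimizerOptions::SetDefaultMaxFunctionCalls(10000);
   // Lower the EDM tolerance so Migrad iterates to a tighter minimum.
   ROOT::Math::MinimizerOptions::SetDefaultTolerance(1.E-6);

   TF1 *Cosine = new TF1("Cosine", "[0]*cos([1]*x + [2])", 0., tmax);
   Cosine->SetParameters(1., alpha, 0.);
   gr->Fit("Cosine", "R");
}
```

Note that a smaller tolerance changes the numerical precision of the minimum, not the statistical error that Minuit reports on the parameters.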

Lorenzo

Hi Lorenzo,

That's what I thought. I would like to increase the precision of the fitted parameters. For example, some parameter uncertainties are now of the order of 1E-10, and I would like to get an even smaller uncertainty.

Hi

Do you mean the uncertainty (the error reported by Minuit) on the parameters?
This is the statistical error on the parameters and depends on the number of events.

There is instead a numerical error on the obtained parameter values, and this relates to how Minuit works.
To minimise this error it is recommended to have the parameter errors as close to 1 as possible, so that the covariance matrix of the parameters is well conditioned. If one parameter has an error of 1E-10 while another one is close to 1, it is better to rescale the first one so that its error is also close to 1. This can be done by redefining the parameter as 1E10*par in the fitting function.
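As a hedged illustration of this rescaling trick (the formula below is hypothetical, not the poster's actual code): if `[1]` comes out of the fit with an error around 1E-10, you can fold the factor 1E-10 into the formula so the free parameter Minuit sees has an error of order 1.

```cpp
// Sketch: fit a rescaled frequency parameter so its error is O(1).
// gr, tmax and alpha are assumed to exist, as in the original post.
TF1 *Cosine = new TF1("Cosine", "[0]*cos(1.E-10*[1]*x + [2])", 0., tmax);
Cosine->SetParameters(1., alpha * 1.E10, 0.);  // seed with the rescaled value
gr->Fit("Cosine", "R");
// The physical frequency is then 1.E-10 * Cosine->GetParameter(1).
```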

Lorenzo

All the parameters have a very small error. I want this error to be even smaller.

Then you need to have more data!

Now I see you are fitting a TGraph; in that case the statistical errors of the parameters do not have much meaning, since you don't have statistical errors on your data points.
In ROOT, in this case, the least-squares function is normalised to get a rough estimate of the parameter errors.
Having data that follow your model function better (because you have more statistics) will result in smaller parameter errors.
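If you do know the point-wise uncertainties of your data, a sketch of the usual alternative is to fill a `TGraphErrors` instead of a `TGraph`, so that the errors Minuit reports are statistically meaningful rather than rescaled from the residuals (the arrays and the function name here are hypothetical):

```cpp
// Sketch: fit a graph with known y-uncertainties so the parameter
// errors reported by Minuit reflect the actual measurement errors.
#include "TF1.h"
#include "TGraphErrors.h"

void fit_with_errors(int n, double *x, double *y, double *ey, double tmax) {
   // Zero x-errors (nullptr), known y-errors ey[i].
   TGraphErrors *gre = new TGraphErrors(n, x, y, nullptr, ey);
   TF1 *Cosine = new TF1("Cosine", "[0]*cos([1]*x + [2])", 0., tmax);
   gre->Fit("Cosine", "R");
}
```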