I’m minimizing a logPureGaussian likelihood using MINUIT.
The log-likelihood of each point has the form: -(x_i - x)^2/(2 sigma^2) - log(2 pi)/2 - log(sigma).
I retrieve the covariance matrix from the MINUIT fit using "mnemat". It correctly describes the expected correlations, but a factor of 4 seems to be missing from the matrix (a factor of 2 in the variances).
What confidence interval does the covariance matrix actually correspond to? Does the likelihood definition influence it?
Thanks a lot,
Since you are dealing with a log-likelihood, you need either to multiply it by a factor of 2, or just tell Minuit to do it by calling the command "SET ERR" with the value 0.5.
Int_t ierr = 0;
Double_t arglist[1] = {0.5};
gMinuit->mnexcm("SET ERR", arglist, 1, ierr);
Thanks for your reply!
Using SET ERR=0.5 does not change my results, while using SET ERR=1 gives me a different output.
So I suppose Minuit already knew that I was using a log-likelihood and set SET ERR=0.5 by default.
Is this possible?
I’m computing how often the true parameter value falls inside the error I obtain from the covariance matrix.
It seems that the true value is covered by the uncertainty 66% of the time when I fit a distribution with <1000 points, but this drops to 25% when I have a distribution with more statistics (up to 25000 points).
Is this expected?
Shouldn’t the coverage of the covariance matrix be fixed, independent of the statistics in the distribution?
Thanks a lot,
p.s This is part of my code:
Int_t ierflg = 0;
TFitResultPtr fit_res = h_xy->Fit("Gauss2D_fit", "SL");
gMinuit->mnemat( mat.GetMatrixArray(), 3);
Yes, if you are doing a likelihood fit via TH1::Fit, the error definition is set correctly to 0.5 automatically. No need to set it.
You can also get the covariance matrix directly from the TFitResult by calling:
TMatrixDSym mat = fit_res->GetCovarianceMatrix();
Concerning the coverage: if you increase the statistics it should get better, because the covariance matrix is an estimate that is valid asymptotically, when N (the size of the data set) becomes large.
However, a value of N like 1000 should already give very reasonable coverage.
So, what you are getting is very strange.
Is your histogram perhaps filled with weighted events?
If so, I would need your running code to understand this better.
I just attached my code; it runs with the command below (I also added comments to make it readable).
My histograms are not filled with weights.
As you will notice, I get the covariance matrix using the two methods quoted below. They give the same result except in the case described in question 3).
My questions are:
- Why is the coverage 42% when I generate 10k points but 66% when I generate 1k points?
- Imagine I use a customized likelihood that IS NOT a log-likelihood, but I impose SET ERR=0.5 anyway.
Would this give me a covariance matrix smaller by a factor of 2? (This is an important question to me.)
- Why, if I (wrongly) set SET ERR to 1 for a log-likelihood fit, is only one of the two covariance matrices quoted below affected? The other seems to keep using the correct value of 0.5.
Thanks a lot,
TestCovMatr_Simple.C (6.01 KB)
mkdir -p Plots_logL_10k; root -b -q TestCovMatr_Simple.C+
"TMatrixDSym mat = fit_res->GetCovarianceMatrix()"
and "gMinuit->mnemat(mat.GetMatrixArray(), 3)" (I actually use this one later in the code)
The problem you are having is that your fit is biased. When you study the obtained fit parameters with toys, you should plot the pull distributions ( (fitted_parameter - true_parameter) / parameter_error ).
You will see that the mean of the pull is one, and therefore your coverage test will not give the right result.
The reason for the bias is that you are doing a binned fit with very large bins. If you just increase the number of bins to 100 or maybe more (or perhaps also reduce the range), you will get a much better result.
An unbinned fit will be the ideal case.
Concerning the covariance matrix (question 3): calling SET ERR after fitting will not have any effect on the matrix stored in the TFitResult.
In this case you should call, before fitting,
ROOT::Math::MinimizerOptions::SetDefaultErrorDef
This works if you set a value different from 1, which is considered the default; in that case a value of 0.5 will be used for likelihood fits.
Thanks a lot Lorenzo,
you have been very clear!
One point I want to make: if I use "SET ERR=0.5" before making the fit, the code crashes. So I placed it after, but doing this I have to remember that the two definitions of the covariance matrix may not be synchronized!
If I use "ROOT::Math::MinimizerOptions::SetDefaultErrorDef(1.+1.E-15)" before the Fit, then the two covariance matrices will always be synchronized.
A) Does SetDefaultErrorDef take only two kinds of values? Basically, 1 is the default and everything that is not 1 will give errors for a log-likelihood. There are no other options, right?
B) If I increase the histogram to be fitted to 100 bins in X and 100 in Y, and I change the range from -5 to 5, I still see low coverage and a pull distribution centered at 1.2. If I understood you correctly this should have mitigated the problem, but it seems worse. Maybe 10k events are not enough? Or did I misunderstand?
C) How could I do an unbinned fit? Is it possible to do it starting from a binned histogram? I guess not.
Should I use RooFit, or is there an easy way with this simple code I shared with you?
Maybe what is happening in my analysis, where I believe the errors are underestimated by a factor of 2, is that I'm computing errors using HESSE, while MINOS could offer a much better description when the statistics are high. Maybe looking at the likelihood from the fit could help in understanding the problem. Do you know if there is a way to return the (log-)likelihood shape? Unfortunately in my analysis it will be 8-dimensional, but maybe I could project it and plot it in 2D.
- It crashes because you cannot use gMinuit before fitting. That is the old interface, and it is recommended to use ROOT::Math::MinimizerOptions::SetDefaultErrorDef instead.
A) SetDefaultErrorDef will take any value. But normally one should use the default and not change it.
B) You are right. I modified something else in your macro: you also need to increase the binning of the TF2, because that is used by GetRandom2.
You should set at least 100 bins there (the default is 30).
In this simple example you can just generate two Gaussian numbers for x and y using gRandom->Gaus.
In the general case I would recommend using either Unuran or Foam for multi-dimensional functions.
C) For an unbinned fit you need to store all your data points first, and your function needs to be normalized if the fit is not extended. Here is an example:
github.com/root-mirror/root/blo … t.cxx#L697
But if you are fitting in 8 dimensions, I think you don't have any other choice; otherwise you would need an incredibly large number of bins.
D) MINOS helps in the case of a non-parabolic likelihood. This is not the case in the example provided (large statistics). Anyway, you can use the SCAN function of Minuit to look at the log-likelihood shape.
thanks again! I promise this will be my last reply;)
I did exactly as you said and everything worked as expected.
Except that when drawing the likelihood, one parameter shows no minimum even though the fit converges.
Both the normalization and sigma_y show a minimum as expected, but sigma_x does not. Do you know why?
-) When we plot one parameter, what happens to the others? Are they fixed at the minimum value?
-) Is there a way to show the likelihood in 2D or 3D instead of 1D? For instance, in the case I'm interested in, the correlation of two parameters.
Thanks a lot,
gMinuit->Command("SCAn 0 50 3 10");
TGraph *gr0 = (TGraph*)gMinuit->GetPlot(); gr0->SetMarkerStyle(29); gr0->SetMarkerSize(0.9); gr0->Draw("APL"); c1->SaveAs(name_gr.Data());
gMinuit->Command("SCAn 1 50 1 2");
TGraph *gr1 = (TGraph*)gMinuit->GetPlot(); gr1->SetMarkerStyle(29); gr1->SetMarkerSize(0.9); gr1->Draw("APL"); c1->SaveAs(name_gr.Data());
gMinuit->Command("SCAn 2 50 1 2");
TGraph *gr2 = (TGraph*)gMinuit->GetPlot(); gr2->SetMarkerStyle(29); gr2->SetMarkerSize(0.9); gr2->Draw("APL"); c1->SaveAs(name_gr.Data());
Like_SigmaY_0.pdf (16.5 KB)
Like_SigmaX_0.pdf (16.5 KB)
TestCovMatr_Simple.C (7.64 KB)
- When plotting one parameter with SCAN, the others are kept fixed.
- You can use the CONTOUR function to plot the 2D likelihood contours. There is no command to do it in 3D.
The problem you are having, looking at the code, is that in the old Minuit interface the parameter numbers start from 1. So your plot of sigma_x is in reality a scan of your "Norm" parameter.
Thanks a lot. Now everything is much clearer!