I recently studied the performance of the chi-square fit method and the log-likelihood fit method implemented in ROOT, and the result of the chi-square method was quite beyond my expectation.
I did a simple test to evaluate the performance of the two methods:
- Randomly generate a histogram according to a pre-defined function fcn (I tested gaus and pol2), using TH1::FillRandom("fcn", Nentries);
- Fit the histogram with the same function, using TH1::Fit("fcn") for the chi-square fit and TH1::Fit("fcn", "L") for the log-likelihood fit;
- Calculate the number of entries from the fitted parameters: fitted entries = (definite integral of fcn from xmin to xmax) / (bin width). Then compute bias = (fitted entries - generated entries) / generated entries.
- Set different random seeds and repeat the steps above, to obtain the bias distribution for each method.
- Fix the generated number of entries, decrease the bin width, and repeat all the steps above.
When the number of events per bin is sufficiently large, both methods appear to work equally well. However, as the bin width decreases, the bias of the chi-square method deviates visibly from zero, while the bias of the log-likelihood method remains sharply peaked at zero. The RMS of the chi-square bias is also noticeably larger than that of the log-likelihood bias.
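The low-count bias can be reproduced outside ROOT with a minimal toy. Below is a sketch in plain Python (not ROOT code, and a deliberate simplification: the model is a constant mu rather than gaus or pol2). For a constant model, the Neyman chi-square sum((n_i - mu)^2 / n_i) is minimized by the harmonic mean of the non-empty bin contents, while the Poisson likelihood is maximized by the arithmetic mean. Since the harmonic mean never exceeds the arithmetic mean, the chi-square estimate of the bin content (and hence of the integral) is biased low when counts are small:

```python
# Toy comparison of a Neyman chi-square fit vs. a Poisson likelihood fit
# for a constant model mu, using only the Python standard library.
import math
import random

random.seed(12345)

def poisson(mu):
    # Knuth's multiplicative algorithm; adequate for the small means used here
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

true_mu = 5.0          # low counts per bin, where the bias shows up
nbins, ntoys = 100, 200
bias_chi2, bias_ml = 0.0, 0.0
for _ in range(ntoys):
    counts = [poisson(true_mu) for _ in range(nbins)]
    # The chi-square fit must skip empty bins (1/n_i is undefined there)
    nonzero = [n for n in counts if n > 0]
    mu_chi2 = len(nonzero) / sum(1.0 / n for n in nonzero)  # harmonic mean
    mu_ml = sum(counts) / nbins                             # arithmetic mean
    bias_chi2 += (mu_chi2 - true_mu) / true_mu
    bias_ml += (mu_ml - true_mu) / true_mu

print("mean relative bias, chi-square :", bias_chi2 / ntoys)
print("mean relative bias, likelihood :", bias_ml / ntoys)
```

With true_mu = 5 the chi-square bias comes out clearly negative while the likelihood bias stays consistent with zero, qualitatively matching what I see in ROOT. As far as I understand, the full ROOT fit shows the same effect for a related reason: the default chi-square fit weights each bin by its observed error and skips empty bins, whereas the "L" option maximizes the Poisson likelihood.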
I had expected no such remarkable difference between the two methods, since both are statistically well founded. So my question is: does this really come from the mathematical difference between the two methods, from their specific implementation in ROOT, or from some other aspect that I have overlooked?
Thank you all in advance!