Dear ROOT experts, I recently used uproot to read a saved ROOT histogram and then tried to fit the data points with Python's scipy.optimize curve_fit function. The result turned out to be quite different from what I get when fitting the same histogram with the PyROOT fitting function: the final FWHM of the Python fit is 110 um, while the ROOT fit gives 120 um. By visual inspection, the Python fit does a better job than the ROOT fit; however, the chi2/ndf of the Python fit is larger than that of the ROOT fit.
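For reference, a minimal sketch of the workflow described above. The histogram here is synthetic so the snippet is self-contained; in the real case the bin contents would come from the saved histogram via uproot (the file and histogram names in the comment are made up).

```python
import numpy as np
from scipy.optimize import curve_fit

# In the real case the bin contents would come from the ROOT file, e.g.
#   counts, edges = uproot.open("file.root")["h1"].to_numpy()
# (hypothetical names). Here we fake a peak with a true FWHM of 120 um.
rng = np.random.default_rng(0)
edges = np.linspace(-300.0, 300.0, 61)                   # bin edges in um
centers = 0.5 * (edges[:-1] + edges[1:])                 # bin centers
true_sigma = 120.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
counts, _ = np.histogram(rng.normal(0.0, true_sigma, 10000), bins=edges)

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Unweighted least-squares fit of the bin contents at the bin centers.
popt, pcov = curve_fit(gauss, centers, counts, p0=[counts.max(), 0.0, 50.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
print(f"fitted FWHM = {fwhm:.1f} um")
```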

I don’t really know what the difference could be. After all, it’s a different minimiser, so it may behave differently.
If the chi2 is bad and the FWHM seems to be off, the fit function is maybe not optimal.

Hi Stephan,
Thank you for your comment. Since you mention the chi2, could you let me know how ROOT calculates it? In my case, I fitted a 1D histogram in which the variance of each bin equals the bin content. Is the chi2/ndf calculation in my Python code correct? It looks like I always get a slightly worse chi2/ndf than ROOT does.
Meanwhile, if you look at the comparison plot in cell 9, the fitted curve from the ROOT fit (black dashed line) deviates from the data points quite a lot. So it would be great if you have any ideas about how to determine which fit is better in this specific case.
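For concreteness, here is a self-contained sketch of the chi2/ndf I have in mind for a histogram whose bin variance equals its content (Poisson counts), i.e. chi2 = sum over non-empty bins of (n_i - f(x_i))^2 / n_i. This is not my actual notebook code, just the formula spelled out:

```python
import numpy as np

def chi2_ndf(counts, model_values, n_fit_params):
    """chi2 and chi2/ndf for Poisson-count bins (variance = content),
    skipping empty bins, with ndf = (non-empty bins) - (fit parameters)."""
    counts = np.asarray(counts, dtype=float)
    model_values = np.asarray(model_values, dtype=float)
    mask = counts > 0                      # empty bins are skipped
    chi2 = np.sum((counts[mask] - model_values[mask]) ** 2 / counts[mask])
    ndf = int(mask.sum()) - n_fit_params
    return chi2, chi2 / ndf
```

For example, `chi2_ndf([4, 9, 0, 16], [2, 9, 1, 12], 1)` gives chi2 = 2.0 over ndf = 2 (the empty bin is dropped).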

Hi Stephan, thanks a lot. Indeed, your documentation link is very helpful and I was able to figure out the discrepancy between the ROOT fit and the Python fit. In fact, the ROOT fit by default uses mode “I” [“Use integral of function in bin, normalized by the bin volume, instead of value at bin center”], while the Python fit corresponds to mode “W” [“Ignore the bin uncertainties when fitting using the default least square (chi2) method but skip empty bins”]. I now get the same fitting result by switching the ROOT fit option to “RQW”.
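The “W” part of the discrepancy can also be reproduced entirely on the Python side: curve_fit does a plain unweighted least-squares fit unless you pass `sigma`, whereas ROOT’s default chi2 weights each bin by its uncertainty (sqrt of the content for an unweighted histogram). A minimal sketch with synthetic data (all numbers are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic Poisson-count histogram of a Gaussian peak.
rng = np.random.default_rng(1)
edges = np.linspace(-300.0, 300.0, 61)
centers = 0.5 * (edges[:-1] + edges[1:])
counts, _ = np.histogram(rng.normal(0.0, 51.0, 5000), bins=edges)
mask = counts > 0                          # option "W" also skips empty bins

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p0 = [counts.max(), 0.0, 40.0]
# No sigma: every bin counts equally, like ROOT option "W".
popt_w, _ = curve_fit(gauss, centers[mask], counts[mask], p0=p0)
# sigma = sqrt(n_i): bins weighted by their Poisson uncertainty,
# mirroring ROOT's default chi2 for an unweighted histogram.
popt_chi2, _ = curve_fit(gauss, centers[mask], counts[mask], p0=p0,
                         sigma=np.sqrt(counts[mask]), absolute_sigma=True)
print(abs(popt_w[2]), abs(popt_chi2[2]))   # the two fitted widths differ
```

The two fitted widths come out close but not identical, which is the same kind of shift seen between the two frameworks before the options were aligned.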