I’m comparing two TH1 distributions that should be very similar (typically a data/MC plot). To check the agreement, I also plot the bin-by-bin ratio of the two distributions; in the ideal case the ratio plot would be flat at y=1. Obviously this is not the case: the agreement can differ a lot from bin to bin, so I apply certain corrections to improve it. Now I want to quantify the improvement after the corrections. I thought about tests like KS, but I was wondering if there is a better approach/function (built-in in ROOT) that gives a quantity showing how well or badly the correction worked compared to the previous ratio plot. Many thanks for your suggestions.
I think @moneta may help you.
If you want to compare data vs MC, it is better to perform a statistical test directly on the two histograms, such as a chi2 test, rather than doing this on the ratio. The ratio is good for a visual inspection, but it has the drawback that error estimation on the ratio is more complex: the resulting uncertainties deviate from normality, so one cannot really apply a chi2 test to the ratio.
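In ROOT this is what `TH1::Chi2Test` does (e.g. `h_data->Chi2Test(h_mc, "UU CHI2/NDF")`). As a minimal plain-Python sketch of the underlying idea, here is a homogeneity chi2 between two unweighted histograms given as lists of bin counts; the `data`/`mc` numbers are made up for illustration, and ROOT's actual implementation handles weights and errors more carefully:

```python
# Sketch (not ROOT itself): Pearson homogeneity chi2 between two
# histograms of raw, unweighted Poisson counts.
# In ROOT you would call h_data.Chi2Test(h_mc, "UU CHI2/NDF") instead.

def chi2_ndf(bins1, bins2):
    """Chi2/ndf between two histograms of raw (unweighted) counts."""
    assert len(bins1) == len(bins2)
    N1, N2 = sum(bins1), sum(bins2)   # total entries of each histogram
    chi2, ndf = 0.0, 0
    for n1, n2 in zip(bins1, bins2):
        if n1 + n2 == 0:              # empty bins carry no information
            continue
        chi2 += (N2 * n1 - N1 * n2) ** 2 / (N1 * N2 * (n1 + n2))
        ndf += 1
    ndf -= 1                          # one constraint from the totals
    return chi2 / ndf

# Hypothetical bin contents, just to show the call:
data = [12, 25, 40, 30, 10]
mc   = [10, 28, 38, 33, 12]
print(round(chi2_ndf(data, mc), 3))
```

Reporting chi2/ndf rather than a p-value is often more useful when the disagreement is large, because the p-value saturates at zero while chi2/ndf still shrinks visibly as the agreement improves.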
A KS test is better applied to the unbinned data directly rather than to histograms. You could also try the Anderson-Darling test, but again directly on the data vs MC histograms.
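To make the "unbinned" point concrete, here is a small stdlib-only sketch of the two-sample KS statistic computed on raw sample values rather than bin contents (in ROOT the binned version is `TH1::KolmogorovTest`, and `ROOT::Math::GoFTest` offers KS and Anderson-Darling on unbinned samples); the tiny input samples are invented for illustration:

```python
# Sketch: two-sample Kolmogorov-Smirnov statistic on unbinned samples,
# i.e. the maximum distance between the two empirical CDFs.

def ks_statistic(sample1, sample2):
    x1, x2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(x1), len(x2)
    i = j = 0
    d = 0.0
    while i < n1 and j < n2:
        if x1[i] < x2[j]:
            i += 1
        elif x2[j] < x1[i]:
            j += 1
        else:                 # tie: step past the tied value in both
            v = x1[i]
            while i < n1 and x1[i] == v:
                i += 1
            while j < n2 and x2[j] == v:
                j += 1
        d = max(d, abs(i / n1 - j / n2))
    return d

print(ks_statistic([0.1, 0.4, 0.7], [0.2, 0.5, 0.9]))
```

Binning discards the within-bin ordering that KS is sensitive to, which is why applying it to the raw values is preferable when they are available.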
Dear Lorenzo, thanks a lot for your suggestion.
When I perform the tests using the uncorrected MC, the p-values are all zero or ridiculously small, and after the correction the numbers are still very small, probably because of the quality of the agreement. But I’m not looking for perfect agreement, just a way to show that there is some improvement after the correction.
Initially I was calculating the area of the ratio plot that deviates from y=1, before and after the correction, and as an approximation it worked fine (i.e. if the area is smaller after the correction, the agreement is most likely better). However, in bins where the statistics are low, the large fluctuations spoil the method.
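One way to tame the low-statistics bins described above is to weight each bin's deviation by its statistical uncertainty instead of summing the raw area, so that a large fluctuation in a nearly empty bin contributes little. A hedged sketch, assuming both histograms contain raw Poisson counts at comparable normalization (the example numbers are invented):

```python
# Sketch: uncertainty-weighted deviation instead of raw |ratio - 1| area.
# For Poisson counts n_data, n_mc per bin, a simple per-bin pull is
#   (n_data - n_mc) / sqrt(n_data + n_mc)
# and the summary quantity is the mean squared pull (a chi2-like number).
import math

def mean_sq_pull(data_bins, mc_bins):
    pulls = []
    for nd, nm in zip(data_bins, mc_bins):
        if nd + nm == 0:              # skip empty bins
            continue
        pulls.append((nd - nm) / math.sqrt(nd + nm))
    return sum(p * p for p in pulls) / len(pulls)

# Hypothetical data vs MC before and after a correction:
before = mean_sq_pull([12, 25, 40, 30, 10], [6, 35, 30, 40, 4])
after  = mean_sq_pull([12, 25, 40, 30, 10], [10, 28, 38, 33, 12])
print(before, after)   # a smaller value after correction = improvement
```

This keeps the spirit of the area-based check while automatically deweighting bins whose ratio fluctuates only because of low statistics.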