Hello,

I recently encountered a problem: the error bars of my histogram are too large. I used the function Sumw2(kTRUE), which recomputes the bin errors as the square root of the sum of the squared weights. After this change the error bars do look smaller, but I wonder whether they are now too small or even incorrect. Given that the original errors were Poissonian, how can I make sure the current error bars are correct?

PS: I heard I can do a cross-check by getting the number of events in each bin and comparing it with the error, but can anyone be more specific about that method?

Thanks a lot!!!

Hi,

Why are you confident that the error is too large or too small?

Do you know the probability distribution of the observed bin content? What does the bin content represent for you?

If the bin content represents counts and the histogram normalisation is not fixed, the distribution is a Poisson, and the error estimated by ROOT assumes the true Poisson mean equals the observed bin count, i.e. the error is the square root of the bin content.

If you are filling the histogram with weighted counts, then an approximate procedure is used to estimate the uncertainty, and the resulting error computed by ROOT is the square root of the sum of the squares of the weights. In older ROOT versions you needed to call Sumw2(true) to get this error.

If your histogram is the result of something else (e.g. the division of two other histograms), then a correct error estimate is more complicated, and you would probably need to use a specialised method such as bootstrapping or a Monte Carlo simulation.

Lorenzo

Thanks so much! I will dive into that.

Hi,

I just rechecked the context of the histogram I am working on, and the bin content is a probability density/weighted counts. Following your instructions, I therefore revised my code to `histogrampointer->Sumw2(true)`, so based on my understanding the error bars should be computed correctly this way?

Sorry for such a late reply!!!

Hi,

Yes, it should be correct, but remember the weights give only an approximate error estimate, probably the best you can do with the information you have.

For a fully correct estimate you would need to know the real probability distribution of your observable and of the weights.

Lorenzo

Hi,

Got it, thanks a lot!
