I would like some clarification on evaluating an efficiency with TGraphAsymmErrors when the histograms contain negative weights. I have looked at old posts on the topic, but it is still not clear to me.
Please, consider the situation:
Suppose the denominator histogram has a bin with three entries of weights -1, +1, and +1, and the numerator histogram has a single entry of weight +1 in the same bin. In this case the bin error is given by Sumw2(), right? Can the BayesDivide method of TGraphAsymmErrors still be used when the weights are integers but some of them are negative? And does this method compute the error bars correctly in the presence of negative weights, or should another procedure be used instead? In other words, what is the best way to handle the uncertainties on the ratio in this case with negative weights?
I think @moneta can help you.
I would not be very confident in using the BayesDivide method for negative weights, which are a special case. I think one should verify, perhaps with MC studies, that the obtained uncertainties are correct in this case.
I would switch to the normal approximation instead, which gives, as you mention, a histogram bin error equal to the square root of the sum of the squared weights (Sumw2()).
By "switch to the normal approximation" do you mean using the Divide method in this case? If so, are the available options cl=x and b(a,b)?
And just to confirm: in the case of negative weights, is the bin error given by sqrt(sum of the weights squared)?
Thanks a lot,
Thank you for that, @couet!
The bin error of a histogram filled with positive and negative weights is computed as sqrt(sum of the weights squared).
For the efficiency, I would use TGraphAsymmErrors(h1,h2,"cl=0.68 n"), but this is also equivalent to
thank you very much for the help!
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.