Dear all,
I filled some histograms with weights, and I used Sumw2() when I created them. Now I want to get the binomial efficiency out of these histograms. But when I call TH1::Divide specifying the option "b", the errors are not calculated correctly: the binomial errors are computed from the bin contents alone, which disregards the weights. Does anybody have any suggestions?

If you fill your histograms with weights, then in order to get correct errors in the binomial case you would need to know the sum of the squared weights for the events that are in h2 but NOT in h1. This information is not available if you have only h1 and h2.
I would therefore suggest creating, at filling time, a histogram h3 containing the events that are in h2 but not in h1 (h3 = h2 - h1).
Then you can use error propagation to calculate the errors for the ratio
h1/h2 = h1/(h1+h3), using the bin errors of h1 and h3.

You will have to do the calculation yourself for each histogram bin, since TH1::Divide assumes uncorrelated errors in the division.

On second thought, you can actually get the right error on the binomial efficiency from TH1D::Divide even when the histograms h1 and h2 are filled with weights.

We need to correct the current formula in the TH1::Divide implementation, since, as you pointed out, it does not use the bin error information. We will update the CVS repository tomorrow.

The current formula for the error is (with w = b1/b2, where b1 and b2 are the bin contents):

Could you please give some references for this formula? I would like to understand more about the error calculation. I did see a link to F. James, section 8.5.2, but I failed to find the formula there. Thanks.

You have to write a weighted binomial likelihood and then apply the procedure described at the end of F. James, section 8.5.2, to compute the error. Basically you compute the errors from the second derivative of the weighted log-likelihood, and also from the second derivative of the log-likelihood built using the squared weights.
If H is the second-derivative matrix of the log-likelihood (the Hessian), and K is the second-derivative matrix of the log-likelihood built with the squared weights, the error matrix is E = H^-1 K H^-1.

If you compute this for a binomial likelihood, you should recover the formula used in TH1::Divide.