Hi all,
Yesterday I found something strange; it may be because I made a mistake in my code, but I couldn’t figure it out (at least, not until now).
I filled my histograms with Sumw2 enabled (TH1::SetDefaultSumw2(1)), then calculated the error in two ways:
1/ IntegralAndError
2/ Summing GetBinError over all bins.
However, I got different results.
Here is my code:
Int_t nbins = totalBG->GetSize();
Double_t bgerror;
Float_t total = totalBG->IntegralAndError(1, nbins - 2, bgerror, "");

Double_t bgerror1 = 0;
for (Int_t i = 1; i < nbins - 1; i++) {
  bgerror1 += histos[0]->GetBinError(i);
}
Any help would be welcome, thank you guys so much.
In the “for” loop, accumulate the squared bin errors:
bgerror1 += (histos[0]->GetBinError(i)) * (histos[0]->GetBinError(i));
Afterwards, take the square root:
bgerror1 = TMath::Sqrt(bgerror1);
See http://root.cern.ch/root/html/src/TH1.cxx.html#UdxrFB and http://root.cern.ch/root/html/src/TH1.cxx.html#xQJNQB
Thank you for your quick reply, I’ll try it.
Suppose I want to calculate something like the error on the number of signal events divided by the number of signal events, i.e. the error of the integral divided by the integral of the histogram. Am I doing it right?
I’m trying to calculate the error of the significance here.
float significance = sqrt(2 * ((Signal + Background) * log(1 + (Signal/Background)) - Signal));
float error = significance * sqrt(pow((errorSignal/Signal),2) + pow((errorBackground/Background),2));