Why, when dividing histograms with reasonable error bars, do I sometimes get massive error bars in the resulting histogram?

Here are my results. I just don't understand why, when the error bars in the original histograms are reasonable, the result gets such an unbelievably big error bar, and only in the first bin.

The content of the first bin in your second histogram (the denominator) is close to zero and its error is quite big; this then properly propagates into a big error on the quotient.
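In case it helps to see the mechanism, here is a minimal sketch (toy numbers, not your histograms; the macro and histogram names are made up). The standard propagation for a quotient r = a/b with uncorrelated errors is sigma_r = |r| * sqrt((sigma_a/a)^2 + (sigma_b/b)^2), so the relative error sigma_b/b blows up as b goes to zero:

void ratio_error_demo() {
   TH1D *num = new TH1D("num", "numerator", 2, 0., 2.);
   TH1D *den = new TH1D("den", "denominator", 2, 0., 2.);

   // bin 1: denominator content close to zero, with a sizeable error
   num->SetBinContent(1, 5.);   num->SetBinError(1, 2.2);
   den->SetBinContent(1, 0.1);  den->SetBinError(1, 0.3);

   // bin 2: both contents well away from zero
   num->SetBinContent(2, 50.);  num->SetBinError(2, 7.1);
   den->SetBinContent(2, 40.);  den->SetBinError(2, 6.3);

   TH1D *ratio = (TH1D*)num->Clone("ratio");
   ratio->Divide(den); // SetBinError already created the Sumw2 structures, so errors are propagated

   for (int i = 1; i <= ratio->GetNbinsX(); ++i)
      printf("bin %d: %g +- %g\n", i, ratio->GetBinContent(i), ratio->GetBinError(i));
}

Running it, the first bin's ratio comes out with a huge error while the second bin's stays modest, which is the same pattern you see.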

What would you do in my situation? Because, as it stands, they will never accept my results like that.

Try:

ratio_histogram->SetAxisRange(0., 20., "Y");

Thanks, but they won't care that I changed the range, because the error bars are still so big that the data point is "basically worthless". Could using a TH1F instead of a TH1D help me here, or calculating the bin contents with long doubles and then putting them into ratio_hist?

Because my only other option is to use several million more events (I used 10 million to get these) and reduce those error bars, but it will take hours…

Well, I assume your bin errors scale as sqrt(entries), so the relative errors scale as 1/sqrt(entries); if you want to reduce the errors by a factor of 3 (4, 5, …), you need 9 (16, 25, …) times more events.
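As a rough back-of-the-envelope (a sketch only, assuming the relative error per bin goes as 1/sqrt(N) and starting from the ~10 million events you quoted):

#include <cstdio>

int main() {
   const double currentEvents = 1e7; // the ~10 million events mentioned above
   const double factors[] = {2., 3., 5.};
   for (double k : factors) {
      // to shrink the error bars by a factor k you need ~k*k times the statistics
      std::printf("errors / %.0f  ->  ~%.0f million events\n",
                  k, k * k * currentEvents / 1e6);
   }
   return 0;
}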

Note that there exists another method which optionally computes binomial errors: TH1::Divide(const TH1 *h1, const TH1 *h2, Double_t c1 = 1, Double_t c2 = 1, Option_t *option = "")
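For example (a sketch only; h_pass and h_total are hypothetical names, and the binomial option is only meaningful when the numerator's entries are a subset of the denominator's, e.g. an efficiency):

TH1D *eff = (TH1D*)h_pass->Clone("eff");
eff->Divide(h_pass, h_total, 1., 1., "B"); // "B" requests binomial errors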

And maybe you should try TRatioPlot (it allows several different error treatment methods).
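Something along these lines (a sketch; h_num and h_den stand for your two histograms, and the third constructor argument selects the error treatment, e.g. the default "pois" or "divsym"):

TCanvas *c = new TCanvas("c_ratio", "ratio plot", 800, 600);
TRatioPlot *rp = new TRatioPlot(h_num, h_den, "divsym"); // symmetric errors from TH1::Divide
rp->Draw();
c->Update();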


Well, but these don't seem to be subsets of each other (binomial errors only make sense when the numerator's entries are a subset of the denominator's), so there's not much you can do. Talk to your supervisor / physics group; this might be an actual "analysis design" issue more than a ROOT issue.
