Problems with errors for residHist


I ran into a problem when trying to compute a statistical error in a residual histogram.

What I have is a data histogram that is fitted with two signal functions and a background function, from which I create a residual histogram (Minus the background) using:

RooHist *hresid = frameFitA->residHist("data", "Bkg");

I hope to extract the signal from this histogram by counting the bins, and a statistical error for this value. For this, I run the following loop:

for (Int_t k = 0; k < hresid->GetN(); k++) {
  if ((hresid->GetPointX(k) >= left_integral_limit) && (hresid->GetPointX(k) <= right_integral_limit)) {
    dhData.get(k); // Select bin so that "weight()" in the next line refers to it
    jpsibincounting += hresid->GetPointY(k); // Increment signal by the residual entries in this bin
    jpsibincounting_bkg += dhData.weight() - hresid->GetPointY(k); // Background for the signal-to-noise ratio
    jpsibincounting_error += std::pow(hresid->GetErrorY(k), 2); // Accumulate squared bin errors
  }
  // ... another if statement that mirrors the first, but for the second signal
}

This loop is supposed to run over the whole histogram and select the bins that fall inside the integration interval of the signal peak. It should then increment jpsibincounting by the residual entries in each such bin (hresid->GetPointY(k)). Up to this point, everything seems to go well, as I get results that match my expectations.

The line jpsibincounting_error += std::pow(hresid->GetErrorY(k), 2); is supposed to add the statistical errors of all residual bins in quadrature (I take the square root after the loop with jpsibincounting_error = std::pow(jpsibincounting_error, 0.5);).

The problem is that no matter which background function or signal function I use for the fit before creating the residual histogram, the statistical error I get from this method is always the same. This is highly unexpected, since the values themselves differ noticeably between the methods. To illustrate the problem, here are some values with errors for different fitting methods:

11939   ± 146.034       TemplateFit signals, Pol2 background
11979.5 ± 146.034       TemplateFit signals, Exp background
12267.7 ± 146.034       CBFit signals,       Pol2 background
12396.8 ± 146.034       CBFit signals,       Exp background

This is definitely no coincidence, as the problem reproduces across many other datasets and methods.

I would therefore like to know whether this is the right way to compute a statistical error for such a bin-counting integral over a residHist, or which method I should use to get the error from the residHist so that it varies with the signal and background functions, as the values themselves do.

I already tried avoiding reusing the same variable in jpsibincounting_error = std::pow(jpsibincounting_error, 0.5);, and I tried GetErrorXHigh, GetErrorXLow, GetEX and GetErrorY, but to no avail (at least as I implemented them…).

I would be thankful for any help.

All the best, thanks

ROOT Version: ROOT 6.24/06
Platform: Manjaro 21.3.7 Ruah
Compiler: c++ (GCC) 11.2.0


@moneta or @jonas could you please answer? Thank you.


Hi @Lucas_trever,

thanks for all the detail in the question!

There is nothing unexpected in your results. If you have data with statistical uncertainties and subtract a model PDF that carries no uncertainties of its own, the residuals have the same uncertainty as the original data, independent of the model, which is exactly what you see.

So my next question is what do you actually want to achieve mathematically? If you let me know what you want to calculate, I can certainly help you to implement this in RooFit!



Hi @jonas ,

Thanks for the help.

What you say about the same uncertainties does make sense to me. I think I was originally hoping that the fitted model itself would carry some kind of error at every point, propagated from the errors of the fit parameters (i.e. from the covariance matrix, I believe). I thought that this way the errors of the model at each point hresid->GetPointY(k) would get propagated into hresid automatically (perhaps as a quadratic sum of the fit uncertainties and the statistical uncertainties of the data) when generating RooHist *hresid = frameFitA->residHist("data", "Bkg"). So this might be what I am looking for: an error that takes into account both the uncertainty of the fitted model and the uncertainty of the data (if that makes sense mathematically…).

Sadly I am just two days away from the deadline now, so I don’t know if it is still possible for me to implement this in full. Though if you think there is a simple way ROOT allows me to do this, I would happily like to try. Alternatively, here are my thoughts if the former plan seems too time consuming:

If there is some other way to calculate an uncertainty that you think suits this problem, I would also be open to that (I am not sure whether I am overlooking an obvious solution, as I am not that firm in statistics yet). In the worst case, your explanation is perfectly sound, and if that is okay I would simply use it. My supervisor seemed to strongly dislike the fact that all my statistical errors are equal, though, and I was thinking I could maybe quote some kind of standard error instead. But following what you explained, if I got it right, my best estimate of the uncertainty is the set of errors I already have, so anything along the lines of e.g. "error = sigma/sqrt(N)" would not make sense here, or would give me a worse estimate than what I already have?

I hope I did not project too much of my thought process into this. Long story short: maybe we can try to take the uncertainty of the fitted curve into account, and if that is not sensible, I would take the equal errors as my final result?

All the best, thanks again

Okay I see!

In that case, residHist is not flexible enough. It is meant for making the standard plots; if you want to do error propagation, you have to go a bit deeper into RooFit. There is the nice RooAbsReal::getPropagatedError() method, so if you can write down your background yield in a given bin as a RooFit object, you are halfway there.

I have created this example of how I would hack this together if I had a tight deadline approaching:

using namespace RooFit;

// Declare variables x,mean,sigma with associated name, title, initial value and allowed range
RooRealVar x("x", "x", -10, 10);
RooRealVar mean("mean", "mean of gaussian", 1, -10, 10);
RooRealVar sigma("sigma", "width of gaussian", 3, 0.1, 10);

// Build gaussian pdf in terms of x,mean and sigma
RooGaussian gauss("gauss", "gaussian PDF", x, mean, sigma);

// Generate 10000 events in x from gauss, then create a binned clone
std::unique_ptr<RooDataSet> data{gauss.generate(x, 10000)};
std::unique_ptr<RooDataHist> dataHist{data->binnedClone()};

// Fit pdf to data
std::unique_ptr<RooFitResult> fitResult{gauss.fitTo(*dataHist, PrintLevel(-1), Save())};

auto const& binning = x.getBinning(); // get the binning for the x variable

// In an extended fit, the nBkg would be part of the model. For this
// example, I just pretend it's 50 % of the events, so the uncertainty comes
// only from the shape of the pdf.
RooRealVar nBkg{"nBkg", "nBkg", 5000, 0, 10000};

// Print bin contents in data and uncertainty on "background" pdf.
// You can take these values and do your math with it.
for (Int_t iBin = 0; iBin < dataHist->numEntries(); ++iBin) {

   dataHist->get(iBin); // load this bin so weight() and weightError() below refer to it

   // set a temporary custom range corresponding to the processed bin
   x.setRange("range_for_bin", binning.binLow(iBin), binning.binHigh(iBin));

   // integral of the background pdf in the bin
   std::unique_ptr<RooAbsReal> bkgPdfIntegral{gauss.createIntegral(x, NormSet(x), Range("range_for_bin"))};
   // the pdf has to be multiplied with the total background to get the background yield in the bin.
   RooProduct bkgYield{"bkgYield", "bkgYield", {*bkgPdfIntegral, nBkg}};

   std::cout << iBin << ":   " << dataHist->weight() << "   " << dataHist->weightError()
             << "   " << bkgYield.getPropagatedError(*fitResult) << std::endl;
}

// Check out the fit results
fitResult->Print();

This is for a simple Gaussian model; in particular, if you have an extended fit you need to change this a bit and make sure the nBkg parameter is the same object as in the model, so that the uncertainties are propagated correctly.

Maybe not the most efficient code because it creates a new Integral object for each bin, but it could do the job :slight_smile:



Thank you very much!

So I tried to follow along and modified my bin-counting loop. It seems to work well for two methods; I am going to verify by modifying each method and running through the whole workflow. Just to make sure, here is the new for loop:

auto const& binning = x.getBinning(); // For temporary error integration range

for (Int_t k = 0; k < hresid->GetN(); k++) {
  if ((hresid->GetPointX(k) >= left_integral_limit) && (hresid->GetPointX(k) <= right_integral_limit)) {
    dhData.get(k); // Select bin so that "weight()" in the next line refers to it
    jpsibincounting_bkg += dhData.weight() - hresid->GetPointY(k); // Background for the signal-to-noise ratio

    // Calculate the error based on the ROOT Forum advice.
    // Set a temporary range for this bin:
    x.setRange("range_for_bin", binning.binLow(k), binning.binHigh(k));
    // Integral of the background pdf in the current bin:
    std::unique_ptr<RooAbsReal> bkgPdfIntegral{pdfA.createIntegral(x, NormSet(x), Range("range_for_bin"))};
    // Scale by the background scaling parameter ("MCBkg"):
    RooProduct bkgYield{"bkgYield", "bkgYield", {*bkgPdfIntegral, MCBkg}};
    // Print out the results:
    std::cout << "Bin in intrange: " << k << "; Error in dhData: " << dhData.weightError()
              << "; Error in hresid: " << hresid->GetErrorY(k)
              << "; Error of bkg fit: " << bkgYield.getPropagatedError(*resultA) << std::endl;
  }
}

But there are still two things that confuse me… In the std::cout line, I print the error values from hresid (hresid->GetErrorY(k)) and from the original data (dhData.weightError()), along with the propagated integral error (bkgYield.getPropagatedError(*resultA)). This is one of the results I get (minus some text output from getPropagatedError()):

Bin in intrange: 46; Error in dhData: 5.80708; Error in hresid: 5.83282; Error of bkg fit: 0.299818
Bin in intrange: 47; Error in dhData: 5.90048; Error in hresid: 5.92573; Error of bkg fit: 0.297746
Bin in intrange: 48; Error in dhData: 6.51377; Error in hresid: 6.53626; Error of bkg fit: 0.295767
Bin in intrange: 49; Error in dhData: 7.65303; Error in hresid: 7.67171; Error of bkg fit: 0.293343
Bin in intrange: 50; Error in dhData: 7.72259; Error in hresid: 7.74108; Error of bkg fit: 0.291591
Bin in intrange: 51; Error in dhData: 8.75627; Error in hresid: 8.77232; Error of bkg fit: 0.290016
Bin in intrange: 52; Error in dhData: 9.11197; Error in hresid: 9.12732; Error of bkg fit: 0.287802
Bin in intrange: 53; Error in dhData: 11.1915; Error in hresid: 11.2027; Error of bkg fit: 0.2861
Bin in intrange: 54; Error in dhData: 10.7355; Error in hresid: 10.7471; Error of bkg fit: 0.286293
Bin in intrange: 55; Error in dhData: 8.19194; Error in hresid: 8.20924; Error of bkg fit: 0.284005
Bin in intrange: 56; Error in dhData: 5.61517; Error in hresid: 5.64196; Error of bkg fit: 0.282252
Bin in intrange: 57; Error in dhData: 5.41577; Error in hresid: 5.44375; Error of bkg fit: 0.28162

On the one hand, the integral errors seem very small and differ only slightly from one method to another. But I guess that still makes sense, since the fits are all pretty similar in the signal region.

But what mainly concerns me is that the errors from dhData and hresid are not exactly equal… From what we said earlier, we would expect them to be identical, wouldn’t we? The differences are slight, but still…

But besides that, everything seems to work out: I am going to implement this everywhere, and then use the explanation from earlier for why the errors are still so similar. At least they are not identical anymore :slight_smile:

Many thanks,

Hi, looks good!

Yes, it is expected that the uncertainty of the background PDF is lower than your statistical uncertainty when your model is good, at least for a single background shape! But since you are trying multiple background shapes, you should also consider taking the difference between them as an additional uncertainty component (this is done in some analyses, where an alternative background shape is used to derive a systematic uncertainty on the background estimation).

Anyway, back to the technicality of the different errors in the RooDataHist and the RooHist. The uncertainties are taken from the 68 % Poisson confidence intervals, which are generally asymmetric. If you ask a RooDataHist for weightError(), it simply averages the upper and lower uncertainties. The RooHist inherits from TGraphAsymmErrors, which averages them in quadrature. Hence the different results.

I would go with whatever the RooDataHist does, because that is the standard way to get numeric results in RooFit. The RooHist is more of a helper class for plotting.


Ok, thanks!

Alright, that does sound very sensible. What I am doing at the moment is comparing the results of all the methods at the very end and interpreting their mean deviation from my reference method as my systematic error; I was hoping this would be sufficient?

Ok, that makes sense. I will be using the dhData.weightError() then :slight_smile:

Otherwise, I just can’t thank you enough, this was great help :smiley:

What I am doing at the moment is comparing the results of all the methods at the very end, and then interpreting their mean deviation from my reference method as my systematic error, I was hoping this would be sufficient?

Yes, that sounds good.

You’re welcome!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.