AddOverallSys() usage and meaning

Hi,

We are trying a simple example of adding a 10% systematic on the background. We have a BDT score distribution for signal and one for background, and we want to calculate the limit on the branching ratio (BR). Without the systematic, everything is OK. But when we do:

backg.AddOverallSys("bkg_unc", 0.9, 1.1);

it does not seem to affect the result: the fit of alpha is centered at zero, but the width is also very small. We have increased the systematic to crazy values and still get the same behavior. Strangely, when I played with very small values of the uncertainty (0.01%), I got an alpha of zero and a width of one. This led me to question how we are generating our "data": maybe it follows the background too well, so that adding a systematic does not make sense? Should I treat this differently?
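For context, the model is built with the HistFactory C++ API. Here is a minimal one-bin sketch of this kind of setup; the histogram contents, the NormFactor range and all names are illustrative placeholders, not the values from the attached script:

#include "TH1D.h"
#include "RooStats/HistFactory/Measurement.h"
#include "RooStats/HistFactory/Channel.h"
#include "RooStats/HistFactory/Sample.h"
#include "RooStats/HistFactory/MakeModelAndMeasurementsFast.h"

using namespace RooStats::HistFactory;

void minimalModel() {
   // Toy one-bin templates standing in for the BDT-score histograms.
   TH1D *hSig = new TH1D("hSig", "signal", 1, 0., 1.);
   hSig->SetBinContent(1, 10.);
   TH1D *hBkg = new TH1D("hBkg", "background", 1, 0., 1.);
   hBkg->SetBinContent(1, 100.);
   TH1D *hData = new TH1D("hData", "data", 1, 0., 1.);
   hData->SetBinContent(1, 100.); // background-only pseudo-data

   Measurement meas("meas", "meas");
   meas.SetPOI("BR");            // parameter of interest
   meas.SetLumi(1.0);
   meas.AddConstantParam("Lumi");

   Channel chan("channel1");
   chan.SetData(hData);

   Sample signal("signal");
   signal.SetHisto(hSig);
   signal.AddNormFactor("BR", 1., 0., 100.); // free normalization scaling the signal
   chan.AddSample(signal);

   Sample backg("background");
   backg.SetHisto(hBkg);
   backg.AddOverallSys("bkg_unc", 0.9, 1.1); // the 10% systematic in question
   chan.AddSample(backg);

   meas.AddChannel(chan);
   auto ws = MakeModelAndMeasurementFast(meas); // builds the RooWorkspace
}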

I have read all the entries on this subject, but I haven't been able to understand this (hopefully it is something silly).

I have attached the output of our workspace, my script, and a plot of the signal, background and data that our model is based on.

I would greatly appreciate some guidance.

Best regards,
Francisca.



callforLimits.cpp (12.6 KB)
BDT_response.pdf (17.4 KB)

Hi Francisca,

Thanks for the post and welcome to the ROOT Community!

I am adding our expert, @jonas, to the loop. However, in order to speed up feedback, I would suggest trying to reduce the example to something even smaller if possible, ideally a codelet of a bunch of lines.

Cheers,
Danilo

Dear Danilo,

Many thanks. Here is the script, made as simple as possible. I have also uploaded its output. Hope this helps!

Many thanks in advance.

Best regards,
Francisca.

callforLimits_basic.cpp (6.1 KB)
output.txt (23.8 KB)

Dear @Jonas,

An update on this: we have not been able to find the problem (from the script side, at least), so maybe the issue is more fundamental.
If I scale the signal to be 1/4 of the background, the nuisance parameter behaves the same way (value centered at zero, but with a width very close to zero, not one); the same happens if I scale the background to be 1/4 of the signal.
I have also tried normalizing the signal, background and simulated (observed) data, and in that case the nuisance parameter came out with the expected values: centered at zero with a width close to 1. (Of course, this is not useful for calculating limits.)
I have run out of ideas, so I would very much appreciate your input.

Best regards,
Francisca.

treeLFV_sgn_ntuple.root (1.2 MB)
callforLimits_basic.cpp (6.8 KB)

Dear all,

Our objective is to see how the 95% CL upper limit on the branching ratio of a process is affected by, for example, a 20% uncertainty on the background yield.
We are adding this via:
backg.AddOverallSys("bkg_unc", 1.0 - error_bkg, 1.0 + error_bkg);

where error_bkg = 0.2.
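Schematically, the limit itself comes from an asymptotic CLs scan over the POI. Here is a minimal sketch of one standard RooStats recipe (not verbatim from our script; "ModelConfig" and "obsData" are the default names HistFactory writes into the workspace, and the scan range is a placeholder):

#include <iostream>
#include "RooAbsData.h"
#include "RooArgSet.h"
#include "RooRealVar.h"
#include "RooWorkspace.h"
#include "RooStats/ModelConfig.h"
#include "RooStats/AsymptoticCalculator.h"
#include "RooStats/HypoTestInverter.h"
#include "RooStats/HypoTestInverterResult.h"

using namespace RooStats;

double computeUpperLimit(RooWorkspace *ws) {
   ModelConfig *sbModel = (ModelConfig *) ws->obj("ModelConfig");
   RooAbsData *data = ws->data("obsData");
   RooRealVar *poi = (RooRealVar *) sbModel->GetParametersOfInterest()->first();

   // Snapshot the S+B model at a non-zero POI value.
   poi->setVal(1.);
   sbModel->SetSnapshot(RooArgSet(*poi));

   // Background-only model: same pdf, with the POI fixed to zero in the snapshot.
   ModelConfig *bModel = (ModelConfig *) sbModel->Clone("BOnlyModel");
   poi->setVal(0.);
   bModel->SetSnapshot(RooArgSet(*poi));

   AsymptoticCalculator calc(*data, *bModel, *sbModel); // alt = B-only, null = S+B
   calc.SetOneSided(true); // one-sided test statistic for upper limits

   HypoTestInverter inverter(calc);
   inverter.SetConfidenceLevel(0.95);
   inverter.UseCLs(true);
   inverter.SetFixedScan(20, 0., 10.); // placeholder scan range for the POI
   HypoTestInverterResult *result = inverter.GetInterval();
   std::cout << "95% CL upper limit: " << result->UpperLimit() << std::endl;
   return result->UpperLimit();
}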
In order to calculate the limits, there is a first fit in which the POI (in our case the branching ratio) and the nuisance parameters (in our case alpha_bkg_unc) are floated. This fit gives the results:

BR = 8.75405e-13 +/- 3.87968e-06 (limited)
alpha_bkg_unc = 0.0184738 +/- 0.0126915 (limited)

So the BR is behaving correctly, since the generated dataset assumes background only. The nuisance parameter is strange because its error is not close to one.
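For reference, the numbers above come from an unconditional fit of the full model to the data. Inspecting the nuisance parameter directly looks roughly like this (a sketch, again assuming the default HistFactory object names):

#include <iostream>
#include "RooAbsData.h"
#include "RooAbsPdf.h"
#include "RooFitResult.h"
#include "RooGlobalFunc.h"
#include "RooRealVar.h"
#include "RooWorkspace.h"
#include "RooStats/ModelConfig.h"

void checkConstraint(RooWorkspace *ws) {
   RooStats::ModelConfig *mc = (RooStats::ModelConfig *) ws->obj("ModelConfig");
   RooAbsData *data = ws->data("obsData");

   // Unconditional fit: both the POI and alpha_bkg_unc float.
   auto res = mc->GetPdf()->fitTo(*data, RooFit::Save());
   res->Print();

   RooRealVar *alpha = ws->var("alpha_bkg_unc");
   std::cout << "alpha_bkg_unc = " << alpha->getVal()
             << " +/- " << alpha->getError() << std::endl;
}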

Asking for help again: I think, from all of the above, the fundamental question is:
Is it OK if the fitted nuisance parameter does not reproduce the Gaussian constraint centered at 0 with a width of 1?
The only way this works in my setup is if I set the uncertainty to a very low value of 0.01% (on the background, for example).
Could it be that assuming a 10% or 20% uncertainty on the background is simply overestimated given the simulated data, and that this is just telling us that the background model matches the simulated data very well?
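To make the question concrete, here is my one-bin understanding of the constraint, taking a linear interpolation as an approximation to what HistFactory actually does for an OverallSys. With expected background yield b, observed count n, and relative uncertainty \sigma (0.2 here), the log-likelihood is

\ln L(\alpha) = n \ln[ b (1 + \sigma \alpha) ] - b (1 + \sigma \alpha) - \alpha^2 / 2 + \text{const.}

For n \approx b, the curvature at the minimum gives a post-fit width of

\Delta\alpha \approx 1 / \sqrt{1 + b \sigma^2},

so a large background yield drives the width far below 1 (e.g. b = 10^4 and \sigma = 0.2 give \Delta\alpha \approx 0.05), while \sigma \to 0 gives a width of 1, which matches what I observe. If this is right, the narrow post-fit width simply means the data themselves constrain the background normalization more tightly than the 20% prior does.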

Thanks in advance.
Best regards,
Francisca.
