Upper limit calculation with uncertainty using Asymptotic Calculator (with CLs)

Dear experts,

I am attempting to use RooStats to set an expected limit on a cross section, based on Monte Carlo studies.
I encounter problems when I try to include the systematic uncertainty on the luminosity.
The model consists of a 2nd-order polynomial for the background and a Crystal Ball with fixed parameters for the signal.
I use the factory syntax, in a Python environment.

If I run the code without including the systematics, I get a result that makes sense (that is to say, I am not 100% sure it's the correct result, but it is at least entirely reasonable).
When I incorporate the machinery for the systematics, I obtain exactly the same result, and I don't know whether I'm doing something wrong or whether that's the expected behaviour.

The essential parts of the code are the following.
At the end of the post, I'll paste the final part of the script, devoted to the actual limit extraction, in case it is useful for debugging.

ALP_InvM_sq = w.factory('ALP_InvM_sq['+str(myrange[0])+','+str(myrange[1])+']')

w.factory('lumi_nom[511.0]')
w.factory('alpha_lumi[1., 0.0, 2]')
w.factory('prod::lumi(lumi_nom,alpha_lumi)')
w.factory('Gaussian::constr_lumi(alpha_lumi, glob_lumi[1., 0.0, 2], 0.1)') 

w.factory('xsec[0, 0, 10]') #parameter of interest
w.factory('sigeff[0.258]')
w.factory('prod:sig_yield(lumi, xsec, sigeff)')

w.factory('CBShape:sig(ALP_InvM_sq, m0['+str(d["mean"])+'], sigma['+str(d["sigma"])+'], alpha['+str(d["alpha"])+'], n['+str(d["n"])+'])') # d is a dictionary
w.factory('Chebychev:bkg(ALP_InvM_sq, {a0[0, -5, 5], a1[0, -5, 5]})')

# w.factory('bkg_yield_nom[6000, 0, 50000]') # does it make a difference?
w.factory('bkg_yield_nom[6000]')
w.factory('prod::bkg_yield(bkg_yield_nom,alpha_lumi)')
w.factory('SUM::factorymodel_core(sig_yield*sig, bkg_yield*bkg)')
w.factory('PROD::factorymodel(factorymodel_core,constr_lumi)')
pdf = w.pdf('factorymodel')

# ----------------------------------------------------------------------------------------------------------------

# create set of observables (will need it for datasets and ModelConfig later)
pObs = ROOT.RooRealVar(w.var("ALP_InvM_sq"))
obs = ROOT.RooArgSet("observables")
obs.add(pObs)

# create set of global observables (need to be defined as constants?)
w.var("alpha_lumi").setConstant(True)
# w.var("bkg_yield_nom").setConstant(True)
w.var("lumi_nom").setConstant(True)
w.var("sigeff").setConstant(True)
globalObs = ROOT.RooArgSet("global_obs")
globalObs.add( w.var("alpha_lumi") )
globalObs.add( w.var("lumi_nom") )
globalObs.add( w.var("sigeff") )
# globalObs.add( w.var("bkg_yield_nom") )

# create set of nuisance parameters
nuis = ROOT.RooArgSet("nuis")
nuis.add( w.var("glob_lumi") )
nuis.add( w.var("a0") )
nuis.add( w.var("a1") )
nuis.add( w.var("bkg_yield_nom") )

# create set of parameters of interest (POI)
poi = ROOT.RooArgSet("poi")
poi.add( w.var("xsec") )

# ----------------------------------------------------------------------------------------------------------------

# bkg-only model
b_modelNM = ROOT.RooStats.ModelConfig("B_modelNM", w)
b_modelNM.SetPdf(w.pdf("factorymodel"))
b_modelNM.SetObservables( obs )
b_modelNM.SetParametersOfInterest( poi )
w.var("xsec").setVal(0.0)
b_modelNM.SetGlobalObservables( globalObs )
b_modelNM.SetNuisanceParameters( nuis )
b_modelNM.SetSnapshot(ROOT.RooArgSet(w.var("xsec"))) 

# bkg+signal model
sb_modelNM = ROOT.RooStats.ModelConfig("S+B_modelNM", w)
sb_modelNM.SetPdf(w.pdf("factorymodel"))
sb_modelNM.SetObservables( obs )
sb_modelNM.SetParametersOfInterest( poi )
w.var("xsec").setVal(1) # is any number ok?
sb_modelNM.SetGlobalObservables( globalObs )
sb_modelNM.SetNuisanceParameters( nuis )
sb_modelNM.SetSnapshot(ROOT.RooArgSet(w.var("xsec")))

My questions:

  1. Is there any obvious error?
  2. Are the nuisance parameters and the global observables properly set? In particular alpha_lumi and glob_lumi.
  3. Is it normal that I extract the same limit (in particular the median limit; see the very last block of code at the end of the post) as without the nuisance parameters?
  4. I performed some tests, and I saw that if I set the starting value of alpha_lumi to any other value (0.5, 1.5…), then both it and glob_lumi end up with that same value at the end of the fit. Is this correct? Is glob_lumi supposed to converge to the starting value of alpha_lumi and not be modified?

Thanks for any help! :slight_smile:

The rest of the code is here:

# ----------------------------------------------------------------------------------------------------------------

profll = ROOT.RooStats.ProfileLikelihoodTestStat(sb_modelNM.GetPdf())

ac = ROOT.RooStats.AsymptoticCalculator(dh_bg, b_modelNM, sb_modelNM)
ac.SetOneSided(True) # KLUDGE -- should want one sided (true) for limit
ROOT.RooStats.AsymptoticCalculator.SetPrintLevel(-1)

asympResult = ac.GetHypoTest()
asympResult.SetPValueIsRightTail(False) # appears to do nothing?!
asymp_pb = 1. - asympResult.AlternatePValue() # KLUDGE!!! Needs 1 -   
asympResult.SetPValueIsRightTail(True)
asymp_psb = asympResult.NullPValue()

print( "Results based on asymptotic formulae:" )
print( "psb = ", asymp_psb )
print( "pb  = ", asymp_pb )

asymp_clb = 1. - asymp_pb
asymp_clsb = asymp_psb
asymp_cls = 9999.

if asymp_clb > 0: 
    asymp_cls = asymp_clsb/asymp_clb
    print( "cls = ", asymp_cls )

# create hypotest inverter passing the desired calculator (hc or ac)
calc = ROOT.RooStats.HypoTestInverter(ac)
calc.SetVerbose(False)
calc.SetConfidenceLevel(0.95)

useCLs = True
calc.UseCLs(useCLs)
if useCLs:
    profll.SetOneSided(True)

npoints = 200  # number of points to scan

# min and max for scan (better to choose smaller intervals)
poimin = w.var("xsec").getMin()
poimax = w.var("xsec").getMax()
poimin = -10
poimax = 10
print( "Doing a fixed scan  in interval : ",poimin," , ",poimax)
calc.SetFixedScan(npoints, poimin, poimax)
r = calc.GetInterval()

upperLimit = r.UpperLimit()
ulError = r.UpperLimitEstimatedError()
print( "The computed upper limit is: ", upperLimit," +/- ",ulError)

# compute expected limit
print("Expected upper limits using b-only model : ")
print("median limit = " ,      r.GetExpectedUpperLimit(0) )
print("med. limit (-1 sig) " , r.GetExpectedUpperLimit(-1))
print("med. limit (+1 sig) " , r.GetExpectedUpperLimit(1))
print("med. limit (-2 sig) " , r.GetExpectedUpperLimit(-2))
print("med. limit (+2 sig) " , r.GetExpectedUpperLimit(2))

I think @moneta can help you with this RooStats question.

Hi @Konwel,

  • Personally, I would write the constraint as Gauss(glob | alpha, sigma), but that shouldn’t be a problem since the Gaussian is symmetric in x and mean. It’s probably just a convention.
  • Next, you don't need to limit the range of the global observable. It can perfectly well run from -inf to inf, as any "x", i.e. the observable of a Gaussian, might. I still don't think that this is a problem, because when the Gaussian gets integrated over the limited range, you simply get a probability that's less than 1, but RooFit will divide by that integral to normalise the PDF properly.
  • The part where you set alpha_lumi constant is probably wrong, unless you want to do a fit without systematics. alpha needs to move in order for the systematic uncertainties to impact the fit.
  • The set of global observables should only contain glob_lumi, unless I missed one. In general, global observables are the observables of the constraint terms. So, for two systematics, the likelihood would look like this:
LH_con = LH(x, y, ... | alpha_1, alpha_2, ..., <unconstrained parameters>)
* Gauss(glob_1 | alpha_1, sigma_1) * Gauss(glob_2 | alpha_2, sigma_2)

Here, x, y, ... are the observables; the alphas are parameters (and they are shared between the likelihood and the constraint terms!); the glob_i are the global observables (appearing only in the constraints); and the sigma_i are the "widths" of the constraints (also only in the constraint terms). A runnable factory sketch of this structure follows after the list.

  • lumi_nom, by the way, is just a constant. Set it to const, and don't put it in any set.
  • The nuisance parameters are parameters of the likelihood, so all the alpha_i that are being constrained, and also all other parameters without constraints, should be added to this set. Parameters are the things that should be optimised by the fitter. You definitely should not add glob_lumi to this set!
  • Most of the time, you will get a different limit. That's because the higher the uncertainties of your model, the weaker the limit. So, if you switch on a systematic, the limit will get worse. When the model is set up as I outlined above, switching on a systematic just means setting the corresponding parameter to "not const". That's nice, by the way, for studies of how much certain parameters impact the uncertainties of your model.
  • No, this is an artifact of the wrong setup. Follow the instructions above to correctly disentangle parameters, observables, and constants.
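
To make this concrete, here is a minimal, runnable factory sketch of that structure (the names x, nsig_nom, tau, alpha_i, glob_i are illustrative placeholders, not taken from your model):

import ROOT

w = ROOT.RooWorkspace("w")
w.factory("x[0, 10]")                                            # observable
w.factory("prod::nsig(nsig_nom[100], alpha_1[1, 0, 2], alpha_2[1, 0, 2])")
w.factory("Exponential::shape(x, tau[-0.5, -5, 0])")
w.factory("SUM::core(nsig*shape)")                               # LH(x | alpha_1, alpha_2, tau)
w.factory("Gaussian::constr_1(glob_1[1, 0, 2], alpha_1, 0.1)")   # Gauss(glob_1 | alpha_1, sigma_1)
w.factory("Gaussian::constr_2(glob_2[1, 0, 2], alpha_2, 0.2)")   # Gauss(glob_2 | alpha_2, sigma_2)
w.factory("PROD::model(core, constr_1, constr_2)")               # full constrained likelihood
# glob_1 and glob_2 are the global observables; alpha_1, alpha_2 (and tau)
# are the nuisance parameters; nsig_nom stays constant.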

I hope that I didn’t miss anything. Let me know if it works.

Hi @StephanH,

thanks for the quick answer!

  • I set alpha_lumi as global observable and glob_lumi as nuisance, and set the Gaussian as
    Gaussian::constr_lumi(alpha_lumi, glob_lumi, 0.1)
    trying to mimic what you recommended here (but the naming scheme there was highly non-trivial):
    Upper Limit calculation - #6 by StephanH
    and what is in this example:
    ROOT: ROOT Reference Documentation
    but I may have got things mixed up. In the end it may be just a naming scheme; I'll make it more conventional for the sake of readability.
  • I saw other examples where it is limited, but I see your point. In my case, I wanted to set at least a lower limit, because negative values are unphysical.

  • If alpha_lumi acts, in my script, as a global observable, shouldn't it be set as a constant? I think I saw that in the two examples I linked above, in particular:

    gEff is a global observable that should be constant, and it has the value of the Monte Carlo efficiency.

    What am I missing?

  • Is sigeff just a constant, too, like lumi_nom?

  • Thanks for the likelihood example, it is very useful!

  • Just to be rock sure: when you say should be added to this set, you are referring to the nuisance parameter set, right? And the same when you say the alphas are parameters: by parameters here we mean nuisance parameters, correct?
    (But with my reversed naming scheme, glob_lumi (the mean of the Gaussian) is a nuisance parameter, isn't it?)
    Along the same lines, just for confirmation: bkg_yield_nom, a0, and a1 all belong to the unconstrained parameters, i.e. they are to be put in the nuisance parameter set, right?
  • In practical terms, does this mean that I can turn off a systematic simply by setting its corresponding "alpha_i" to constant, while keeping it amongst the nuisance parameters?

Thanks a ton for the thorough answer! :slight_smile: I'll adjust the naming scheme for clarity, apply your suggestions, and let you know if it works.
(It may be that in the end this particular systematic has very little or no effect, because it affects signal and background in the same way, so it's basically an overall scale, to which the fit is not very sensitive.)

Hi @StephanH,

I applied the modifications you suggested. First, here’s the updated code:

ALP_InvM_sq = w.factory('ALP_InvM_sq['+str(myrange[0])+','+str(myrange[1])+']')

w.factory('lumi_nom[511.]')
w.factory('alpha_lumi[1., 0., 2.]')
w.factory('prod::lumi(lumi_nom, alpha_lumi)')
w.factory('Gaussian::constr_lumi(glob_lumi[1., 0., 2.], alpha_lumi, 0.1)') # smearing

w.factory('xsec[0, '+str(xsec_min)+', '+str(xsec_max)+']') #POI
w.factory('sigeff[0.258]') 
w.factory('prod:sig_yield(lumi, xsec, sigeff)')

w.factory('CBShape:sig(ALP_InvM_sq, m0['+str(d["mean"])+'], sigma['+str(d["sigma"])+'], alpha['+str(d["alpha"])+'], n['+str(d["n"])+'])')
w.factory('Chebychev:bkg(ALP_InvM_sq, {a0[0, -5, 5], a1[0, -5, 5]})')

w.factory('bkg_yield_nom['+str(int(nbg))+', 0, 50000]')
w.factory('prod::bkg_yield(bkg_yield_nom,alpha_lumi)') # tentative try for applying lumi smearing to bkg
w.factory('SUM::factorymodel_core(sig_yield*sig, bkg_yield*bkg)') # core model
w.factory('PROD::factorymodel(factorymodel_core,constr_lumi)') 
pdf = w.pdf('factorymodel')

# ---------------------------------------------------

# create set of global observables (need to be defined as constants!)
globalObs = ROOT.RooArgSet("global_obs")
globalObs.add( w.var("glob_lumi") )
w.var("glob_lumi").setConstant(True)

# create set of nuisance parameters
nuis = ROOT.RooArgSet("nuis")
nuis.add( w.var("bkg_yield_nom") )
nuis.add( w.var("alpha_lumi") )
nuis.add( w.var("a0") )
nuis.add( w.var("a1") )

# create set of parameters of interest (POI)
poi = ROOT.RooArgSet("poi")
poi.add( w.var("xsec") )

# fix all other variables in model: everything but observables (glob and non), POI, and nuisance parameters must be constant
w.var("lumi_nom").setConstant(True)
w.var("sigeff").setConstant(True)
#     w.var("alpha_lumi").setConstant(True) # Uncomment this if you want to NOT apply the systematics!!!

# ---------------------------------------------------

# bkg-only model
b_modelNM = ROOT.RooStats.ModelConfig("B_modelNM", w)
b_modelNM.SetPdf(w.pdf("factorymodel"))
b_modelNM.SetObservables( obs )
b_modelNM.SetParametersOfInterest( poi )
w.var("xsec").setVal(0.0)
b_modelNM.SetGlobalObservables( globalObs )
b_modelNM.SetNuisanceParameters( nuis )
b_modelNM.SetSnapshot(ROOT.RooArgSet(w.var("xsec"))) 

# bkg+signal model
sb_modelNM = ROOT.RooStats.ModelConfig("S+B_modelNM", w)
sb_modelNM.SetPdf(w.pdf("factorymodel"))
sb_modelNM.SetObservables( obs )
sb_modelNM.SetParametersOfInterest( poi )
w.var("xsec").setVal(starting_value_SB)
sb_modelNM.SetGlobalObservables( globalObs )
sb_modelNM.SetNuisanceParameters( nuis )
sb_modelNM.SetSnapshot(ROOT.RooArgSet(w.var("xsec")))

Now I do see that, when including the systematic on the luminosity (i.e. not setting alpha_lumi constant), my UL gets higher (namely, for what it's worth, it goes from 0.44 to 0.45, which sounds like a reasonable increase).
On the other hand, I see something that looks fishy when I print out the variables at the end of the fit procedure (whose code is at the end of the first post):

lumi_nom = 511.0
alpha_lumi = 0.11933745646544613
glob_lumi  = 1.0
xsec = 1.0
sigeff = 0.258
m0 = 24.694677688467173
sigma = 0.5825297148602651
alpha = 0.4859911910785825
n = 29.99920894441112
a0 = -0.6524426154404042
a1 = 0.13089330709529534
bkg_yield_nom = 49999.9999835807
  • bkg_yield_nom goes to its upper limit, which sounds like something is not working properly in the fit.
  • alpha_lumi goes down from 1.0 to 0.1, i.e. it looks like the fit is willing to pay the price of 9 Gaussian sigmas to pull it down, and if I raise the upper limit of bkg_yield_nom to 500'000, I get
    bkg_yield_nom = 129283.87829187782
    alpha_lumi = 0.046845849033066655
  • The bkg yield and the luminosity are extremely correlated (they are not 100% correlated only because of the presence of the signal); is this why the fit is so "degenerate"? I am surprised that the fit is willing to lower alpha_lumi so much; 9 sigmas looks like a lot. Am I missing something?
  • Slightly unrelated: xsec is still at its initial value, the one set while configuring the S+B model. Is this normal/ok?

Thanks again!

Hi,

some more answers to your second-to-last post:
  1. You can do that, but it's against the conventions. People will assume that alpha is a parameter. Obviously, you can name things as you like (I just read again that you want to change the names).
     But just to be sure: a parameter affects your main likelihood, a global observable does not. So the thing that's called alpha should change something like the global normalisation of the MC, etc.

  2. The global observable just needs to have enough range that, when throwing toy MC, the Gaussian can take any reasonable value. Let's say ±4 sigma from the centre should be ok.

  3. I don't remember exactly whether global observables have to be constant. I think they don't have to, because RooStats sets them constant. Anyway, if you see a global observable change its value after the fit, something is wrong in the model.

  4. sigeff is probably a constant. Any number that you input from an external source enters as a constant, and then you can add a parameter to shift it a bit. Let's say you want the signal efficiency to move a bit, so you use (1 + alpha_sigeff * sigma_sigeff(const)) * sigeff(const) as the scale factor, and a constraint term that keeps alpha close to 0 ± 1 (see the sketch after this list).
     You can obviously also centre the Gaussian on sigeff's value directly, with an absolute sigma, but then you cannot directly read off the pulls in sigmas from the alpha parameter. That's up to you.

  5. "Should be added to this set" refers to the nuisance parameters, indeed. Anything that changes the value of the likelihood is a parameter. There is the parameter of interest, the one you measure, and all others are nuisance parameters.
     Whether glob_lumi is a parameter you have to figure out yourself. Just check whether it's in the likelihood before you add the constraint terms.
     Also for the other parameters you asked about: if they change something in the likelihood, and you don't want them to be constant but want the fitter to wiggle them to find the optimal value, they are nuisance parameters.

  6. Yes, constant parameters are not fitted even if they are in the set of nuisance parameters.
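
For the scaling in item 4, a minimal sketch in factory syntax (the names alpha_sigeff, sigma_sigeff, glob_sigeff, sigeff_scaled and the 5% width are illustrative assumptions, not values from this analysis):

w.factory("sigeff_nom[0.258]")        # external input, stays constant
w.factory("sigma_sigeff[0.05]")       # assumed 5% relative uncertainty, constant
w.factory("alpha_sigeff[0, -5, 5]")   # nuisance parameter, pulled in units of sigma
w.factory("expr::sigeff_scaled('(1 + @0*@1)*@2', alpha_sigeff, sigma_sigeff, sigeff_nom)")
w.factory("Gaussian::constr_sigeff(glob_sigeff[0, -5, 5], alpha_sigeff, 1)")  # keeps alpha close to 0 +/- 1
# sigeff_scaled would then replace sigeff in prod::sig_yield, and constr_sigeff
# would be multiplied into the full model with PROD, like constr_lumi.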

Replies to your second post:

  1. Is it correct that you tried to implement:
     Total = alpha_lumi * N_bkg + lumi * alpha_lumi * sigeff * xsec_sig

  2. For the sets, just use ModelConfig.Print() after setup to check whether all parameters are where they should be (see the short snippet after this list).

  3. What looks strange to me is the background yield. It’s right at the limit, which is usually a sign that you got some numbers wrong. How many events do you have in the data?

  4. It’s not surprising that alpha_lumi and bkg_yield are anti-correlated. When you increase one, the other can decrease, and you get the same number of events. That’s exactly what you want for systematic uncertainties. If you cannot measure a value exactly because of this correlation, the uncertainty of your fit increases. That’s a systematic uncertainty being propagated into your fit result.
    The only thing that keeps the fit model stable in that situation is the constraint term for alpha. If that’s reasonably small, you can live with the correlation.

  5. 9 sigma for alpha looks wrong. Check if you really implemented the formula from 1.

  6. If you did a bkg-only fit, xsec should have been constant; it won't change in that case. If you do the signal fit, it should change, I guess. Use <name of arg set>.Print("V") to print all the contents of the set with errors. If they have errors, they usually came from a fit.
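
For items 2 and 6, using the names from the code earlier in the thread, the checks could look like this:

# Inspect the ModelConfigs and the parameter sets after setup.
b_modelNM.Print()
sb_modelNM.Print()       # PDF, observables, POI, nuisance parameters, global observables
nuis.Print("V")          # verbose: values, ranges, and (after a fit) errors
poi.Print("V")
globalObs.Print("V")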

Hi @StephanH,

thanks a lot for your replies! :slight_smile:

Yes, correct. More precisely, I tried to implement this pdf:
MODEL_PDF = alpha_lumi * N_bkg * BKG_PDF + lumi * alpha_lumi * sigeff * xsec_sig * SIGNAL_PDF
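
For reference, this is how that formula maps onto the factory calls above (a restatement of the existing code; note that lumi in the formula corresponds to lumi_nom here, since the workspace's lumi already includes alpha_lumi):

w.factory('prod::lumi(lumi_nom, alpha_lumi)')              # lumi = lumi_nom * alpha_lumi
w.factory('prod:sig_yield(lumi, xsec, sigeff)')            # lumi * xsec * sigeff -> signal yield
w.factory('prod::bkg_yield(bkg_yield_nom,alpha_lumi)')     # alpha_lumi * N_bkg   -> background yield
w.factory('SUM::factorymodel_core(sig_yield*sig, bkg_yield*bkg)')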

I agree that the background yield looks suspicious.
I prepared a minimal working example that runs quickly (execution time of a few seconds).
It would be very much appreciated if you could take a look at it, because I can't spot any error/bug.
Five “print checkpoints” are included, printing the information on the models and arg sets as you suggested: after setting up the models, after running the AsymptoticCalculator, and after running the HypoTestInverter.

  1. Does the value of starting_value_SB (line 21) matter?
    From my tests, I saw that it has little or no influence on the CLs median upper limit (which is approx 0.44).
    This number enters the signal+bkg model definition (I took this from an example), when I set its value with w.var("xsec").setVal(starting_value_SB) (line 118).
    Is this wrong? If yes, how do I configure the signal+bkg model properly?
  2. As mentioned above, I print the parameters at multiple points, before and after the fit.
    I see that xsec is never fitted; it always remains at the above starting_value_SB value.
    I understand this is a feature of the way I configure the S+B model, but I don't know if it is the proper way.
    Is this correct for extracting the proper CLs upper limit?
    Is this correct for getting a proper fit result? If not, do I care about the fit results here (as the fits are performed internally by the AsymptoticCalculator)?
  3. Does it make sense to have a negative starting point for the scan of the cross section (line 13), given that it is defined as positive (lines 16 & 49)?

README.txt (185 Bytes)
DataForMinExample.root (71.4 KB)
MichaelsOutputOfMinWorkExample.txt (32.4 KB)
MinWorkingExample_Limits_Systematics_Lumi.py (8.3 KB)

Best regards, and thanks a lot in advance,
Michael

The model seems to work.

I just tested whether your systematic works by running with different constraints:
I ran Gaussian::constr_lumi(glob_lumi[1., 0., 2.], alpha_lumi, 0.5), which is a killer uncertainty, and get

('The computed upper limit is: ', 8.936571750112153, ' +/- ', 0.0)
Expected upper limits using b-only model : 
('median limit = ', 10.0)
('med. limit (-1 sig) ', 0.34233254960014653)
('med. limit (+1 sig) ', 10.0)
('med. limit (-2 sig) ', 0.29569517510941945)
('med. limit (+2 sig) ', 10.0)

in comparison to a normal Gaussian::constr_lumi(glob_lumi[1., 0., 2.], alpha_lumi, 0.1)

('The computed upper limit is: ', 0.28212684189094356, ' +/- ', 0.0)
Expected upper limits using b-only model : 
('median limit = ', 0.4534518384917042)
('med. limit (-1 sig) ', 0.3238696973723586)
('med. limit (+1 sig) ', 0.6500783650653807)
('med. limit (-2 sig) ', 0.2395790986805837)
('med. limit (+2 sig) ', 0.9108596230631435)

If you constrain to 0.01, the median expected limit goes down to 0.445, so it gets a bit better, but at that point the systematic is effectively switched off.
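
Such a study of the constraint width could be automated along these lines (build_model_and_inverter is a hypothetical helper that would wrap the workspace, ModelConfig, and HypoTestInverter setup from the first post, taking the constraint string as input):

# Scan the width of the luminosity constraint and record the median expected limit.
for sigma_lumi in (0.01, 0.1, 0.5):
    constr = "Gaussian::constr_lumi(glob_lumi[1., 0., 2.], alpha_lumi, %g)" % sigma_lumi
    calc = build_model_and_inverter(constr)  # hypothetical helper, see the setup above
    r = calc.GetInterval()
    print(sigma_lumi, "-> median expected limit:", r.GetExpectedUpperLimit(0))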

Replies to your questions:

  1. Yes and no. The starting value is the starting value for the global fit, which is used as a reference for the likelihood-ratio test. If this fit is unstable and depends on the starting value, it makes a difference. If it's rock stable, it doesn't matter what you set it to.
    NB: It's always good to kick the POI away a bit, to test that your fit model can actually measure xsec reliably.
    NB 2: You wondered about it being constant. Obviously, RooStats has to set it floating for the signal+background fit, and to set it constant for the background-only model. So whether you set it const or not doesn't matter, because it has to be reset anyway.
  2. xsec is fitted, but the calculators restore the value after the fit. See this part here:
   1  a0          -6.51486e-01   1.92884e-02   4.40640e-04   2.06338e-01
   2  a1           8.77595e-02   1.94155e-02   4.39814e-04  -6.37183e-01
   3  alpha_lumi   9.99723e-01   1.02076e-01   1.45717e-03  -7.98701e-02
   4  bkg_yield_nom   6.11564e+03   6.30595e+02   1.63518e-04  -4.68837e-01
   5  xsec         6.48529e-09   7.37292e-02   1.97459e-02** at limit **

That's the fit that probably depends a bit on the starting value.
  3. Yes and no again. If you think that the cross section can reasonably be negative (e.g. an interference term that is destructive), you might define it as negative. In all other cases, I would start at 0. It seems, though, that for the signal fit the following limits were applied:

 PARAMETER DEFINITIONS:
    NO.   NAME         VALUE      STEP SIZE      LIMITS
     1 a0           0.00000e+00  1.00000e+00   -5.00000e+00  5.00000e+00
     2 a1           0.00000e+00  1.00000e+00   -5.00000e+00  5.00000e+00
     3 alpha_lumi   1.00000e+00  2.00000e-01    0.00000e+00  2.00000e+00
     4 bkg_yield_nom   6.11400e+03  3.05700e+03    0.00000e+00  5.00000e+05
     5 xsec         1.00000e-01  5.00000e-02    0.00000e+00  1.00000e+01

I don’t know who set them, but that’s what was in use.

One more question:
Would you mind if I converted this example to a tutorial at some point in the future? As you experienced yourself, we need an example that shows how to add a constrained systematic uncertainty.

Hi @StephanH,
(delayed) many thanks for the thorough answers :slightly_smiling_face:
I'm totally fine with this example being turned into a tutorial! It would actually be very helpful. So, very gladly.