# Manipulate the likelihood

Hi all,

to determine a significance, I compare the NLL difference between my nominal model and a null-hypothesis model (i.e. the nominal model with N_sig forced to zero). Unfortunately, this only includes the uncertainties from the fit itself (statistical uncertainties). I now want to determine a significance that includes both the statistical and the systematic uncertainty. What I read is that this could be accomplished by convolving the likelihood with a Gaussian function whose width is the relative systematic uncertainty.

Here is a toy example of my mass model:

```python
import ROOT

val_mean = 5366
val_sigma = 20
val_lambda = -0.001

val_nsig = 30
val_nbkg = 40

obs_mass = ROOT.RooRealVar('obs_mass', 'obs_mass', 5000, 5600)

par_mean = ROOT.RooRealVar('par_mean', 'par_mean', val_mean, 5300, 5400)
par_sigma = ROOT.RooRealVar('par_sigma', 'par_sigma', val_sigma, 10, 30)
par_lambda = ROOT.RooRealVar('par_lambda', 'par_lambda', val_lambda, -0.001, 0)

par_nsig = ROOT.RooRealVar('par_nsig', 'par_nsig', val_nsig, 0, 100)
par_nbkg = ROOT.RooRealVar('par_nbkg', 'par_nbkg', val_nbkg, 0, 1000)

# Signal model
pdf_gaus = ROOT.RooGaussian(
    'pdf_gaus', 'pdf_gaus', obs_mass, par_mean, par_sigma
)
# Background model
pdf_exp = ROOT.RooExponential(
    'pdf_exp', 'pdf_exp', obs_mass, par_lambda
)

# Combined extended model
pdf_combined = ROOT.RooAddPdf(
    'pdf_combined', 'pdf_combined',
    ROOT.RooArgList(pdf_gaus, pdf_exp), ROOT.RooArgList(par_nsig, par_nbkg)
)

data = pdf_combined.generate(ROOT.RooArgSet(obs_mass), val_nsig + val_nbkg)
```

I know that I am able to obtain the NLL via `pdf_combined.createNLL(data)`:

```python
nll = pdf_combined.createNLL(data)
minuit = ROOT.RooMinuit(nll)
minuit.migrad()
fitres_minuit = minuit.save()
```

So is there a convenient way to manipulate the likelihood (or NLL)?

Timon

P.S.: Basically the same question was asked about 10 years ago (Likelihood distribution). But like shencp, I do not really understand the answer Wouter provided. Is multiplying a Gaussian to the PDF the same as convolving the likelihood with a Gaussian?

Hi @Kecksdose,

Wouter gave the correct answer, and he also gave you code showing how to do it.
You indeed do not convolve the likelihood. What you do is to “profile” it: you compare the likelihood from a global fit with all parameters floating to a likelihood where the parameter of interest is set to user-defined values. It is therefore not optimised, and the likelihood is lower than (or equal to) the global-fit likelihood. This likelihood difference is what you use to determine uncertainties.

Even if you don’t constrain any parameter, the floating parameters of the likelihood already propagate a systematic uncertainty into your result. You can, however, incorporate extra knowledge into the likelihood model, e.g. because you know from external measurements that a parameter should lie in a certain confidence interval. In this case, you can constrain the nuisance parameters to that interval by multiplying the likelihood with a Gaussian distribution.

To get an idea how to evaluate a profile, see
https://root.cern/doc/master/rf605__profilell_8C.html

For a profile-likelihood calculation, you can use the ProfileLikelihoodCalculator. See
https://root.cern/doc/master/rs102__hypotestwithshapes_8C.html
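Conceptually, profiling can also be sketched without RooFit. Below is a pure-Python toy (all names and numbers are invented for illustration, this is not the model from this thread): one measurement `x ~ Gauss(mu + delta, sigma)` with a nuisance parameter `delta` constrained by `Gauss(delta | 0, 1)`. For each value of `mu`, the profiled NLL minimises over `delta`, which widens the parabola compared to fixing `delta = 0`:

```python
# Toy illustration of profiling (pure Python, not RooFit; numbers made up).
# Model: one measurement x ~ Gauss(mu + delta, sigma), with nuisance
# parameter delta constrained by Gauss(delta | 0, 1).
x, sigma = 10.0, 2.0

def nll(mu, delta):
    # -log L up to constants: data term + constraint term
    return (x - mu - delta)**2 / (2 * sigma**2) + delta**2 / 2

def profiled_nll(mu):
    # minimise over the nuisance parameter on a grid for each mu
    return min(nll(mu, i / 1000.0) for i in range(-5000, 5001))

print(nll(7.0, 0.0))      # conditional NLL at delta = 0: 1.125
print(profiled_nll(7.0))  # profiled: ~0.9, i.e. a wider parabola in mu
```

For this quadratic toy the profiled NLL equals `(x - mu)^2 / (2 * (sigma^2 + 1))`, i.e. the statistical and constraint widths add in quadrature.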

Hi @StephanH,

thanks for your response and the additional information. I also had a deeper look at the profile-likelihood tutorial you provided. Unfortunately, the results I receive are somewhat counter-intuitive. When adding external uncertainties as constraints, the likelihood becomes tighter. As the significance is connected to the likelihood ratio between the free fit and a fit where one parameter is set to zero, this results in a higher significance when including another uncertainty. From what I learned from Wouter’s answer, the following should work:

```python
# par_nsig is the parameter I want to constrain
# [...]

# First, calculate likelihood from complete model and minimise:
nll = pdf_combined.createNLL(data)

# Add constraint and get likelihood
syst_uncert = 0.2  # 20%
pdf_gaus_const = ROOT.RooGaussian(
    'pdf_gaus_const', 'pdf_gaus_const', par_nsig,
    ROOT.RooFit.RooConst(par_nsig.getVal()),
    ROOT.RooFit.RooConst(par_nsig.getVal() * syst_uncert)
)
pdf_with_const = ROOT.RooProdPdf(
    'pdf_with_const', 'pdf_with_const', pdf_combined, pdf_gaus_const
)
nll_with_const = pdf_with_const.createNLL(data)

# Plot
nll_frame = par_nsig.frame()
nll.plotOn(
    nll_frame, ROOT.RooFit.ShiftToZero(),
    ROOT.RooFit.Name('stat'), ROOT.RooFit.LineColor(ROOT.kRed)
)
nll_with_const.plotOn(
    nll_frame, ROOT.RooFit.ShiftToZero(),
    ROOT.RooFit.Name('stat+syst'), ROOT.RooFit.LineColor(ROOT.kBlue)
)

legend = ROOT.TLegend(0.3, 0.6, 0.7, 0.8)
```

This results in the likelihoods shown below. My expectation would be that the blue curve should lie below the red one.

nll_nominal.pdf (14.7 KB)

This is exactly what you want!!

1. Let’s say that you have a parameter that you don’t know anything about. You let it float completely freely (no constraints), and it will have maximal impact on your errors (it will make them bigger).
This is a fit with unconstrained (super pessimistic) systematic uncertainties.

2. Now, you realise that you actually know something about the parameter, namely that it should be centred around zero with a sigma of 0.5. This knowledge could e.g. come from a paper where somebody measured this parameter.
Now that you know this, you can add this knowledge to your likelihood model by multiplying with `Gauss(parameter | 0, 0.5)`. Now, your model becomes more powerful, because it can use the extra knowledge that you just provided to reduce the uncertainties that are caused by this parameter. The likelihood parabola gets narrower, and your uncertainties shrink.
Multiplying with a Gaussian like this is equivalent to saying: “I know that this parameter must be at 0 +/- 0.5 with 68% confidence.”
You do a fit with a constrained systematic uncertainty.

3. What’s missing is the data-only uncertainty. For this, fit the parameters, then set them all constant, and plot the NLL.
Now, the only source of uncertainty in your model is due to data statistics because all the systematics-related parameters have been “disabled”.
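Point 2 can be checked numerically. Here is a minimal pure-Python sketch (toy numbers, not the model from this thread): adding the Gaussian constraint’s `-log` term to the NLL narrows the 1-sigma interval, read off where the NLL rises by 0.5 above its minimum:

```python
# Toy (made-up numbers): multiplying the likelihood by a Gaussian
# constraint on a parameter narrows its NLL parabola.
def nll_data(p):
    # pretend the data alone measures p = 0 +- 1
    return p**2 / 2.0

def nll_constrained(p, sigma_c=0.5):
    # extra knowledge p = 0 +- 0.5 adds the constraint's -log term
    return nll_data(p) + p**2 / (2 * sigma_c**2)

def width(nll):
    # half-width of the interval where the NLL rises by 0.5 (1 sigma)
    p = 0.0
    while nll(p) < 0.5:
        p += 0.001
    return p

print(round(width(nll_data), 2))         # ~1.0 without the constraint
print(round(width(nll_constrained), 2))  # narrower with it: ~0.45
```

The constrained width equals `1 / sqrt(1/1^2 + 1/0.5^2)`, i.e. the curvatures of the data term and the constraint term add.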

Your plot looks good, but you should label the red curve “Without external constraint”. So red would be the maximal systematic uncertainty, blue would be a constrained systematic uncertainty, and the data-only uncertainty would be something like

```python
par1.setConstant()
par2.setConstant()
...
...
nll.plotOn(
    nll_frame, ROOT.RooFit.ShiftToZero(),
    ROOT.RooFit.Name('stat only'), ROOT.RooFit.LineColor(ROOT.kGreen)
)
```

Hi @StephanH,

thanks for the additional clarification, I see that an additional constraint improves my fit.

Maybe I expressed myself a little incomprehensibly, which I’m sorry for. The real problem I (and also @shencp) face is a different one. Let’s say I want to determine a signal yield `nsig`, which is a parameter in my fit model. From the nominal (unconstrained) fit, I get the result `nsig = 10 +- 2`. To determine the significance of this value, I can compare the likelihood of the nominal fit to a fit where `nsig` is forced to be zero (aka “null hypothesis”). What I found during my analysis is that there is an additional 10% uncertainty on `nsig`. This does not mean that `nsig` should be constrained in the fit between `9` and `11`, but that the real uncertainty of `nsig` is greater than 2, like `nsig = 10 +- 2 (stat) +- 1 (syst)`. To determine a significance for the combined uncertainties, I want to fold the profile likelihood with a Gaussian whose width is the systematic uncertainty, and then determine the significance in the same way as before.

A naive way to determine a significance of an observation would be to divide the value by its uncertainty, i.e. 10/2 = 5. Including the systematic uncertainty, one could calculate something like `10/(sqrt(2^2 + 1^2)) = 4.47`, so the significance becomes smaller. This is also what I would expect when performing a likelihood ratio with the folded Gaussian distribution.
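The quadrature arithmetic above, as a quick check (numbers from the post):

```python
import math

nsig, stat, syst = 10.0, 2.0, 1.0

z_stat = nsig / stat                           # statistical only: 5.0
z_total = nsig / math.sqrt(stat**2 + syst**2)  # stat and syst in quadrature
print(z_stat, round(z_total, 2))               # 5.0 4.47
```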

Ok, I guess this is what could work:

• You define an additional nuisance parameter, the signal uncertainty.
• When constructing the model, you don’t do

```python
nsig * signalPDF
```

but you do

```python
nsig * uncertaintyScaler * signalPDF
```

This can be done with a RooProduct, and should look something like (untested, forgive my typos):

```python
sigScaler = ROOT.RooRealVar('sigScaler', 'sigScaler', 1, 0, 2)
nSig_scaled = ROOT.RooProduct(
    'nSig_scaled', 'nSig_scaled', ROOT.RooArgList(par_nsig, sigScaler)
)
model = ROOT.RooAddPdf(
    'model', 'model',
    ROOT.RooArgList(pdf_gaus, pdf_exp), ROOT.RooArgList(nSig_scaled, par_nbkg)
)

pdf_gaus_const = ROOT.RooGaussian(
    'pdf_gaus_const', 'pdf_gaus_const', sigScaler,
    ROOT.RooFit.RooConst(1), ROOT.RooFit.RooConst(0.1)
)
pdf_with_const = ROOT.RooProdPdf(
    'pdf_with_const', 'pdf_with_const', model, pdf_gaus_const
)
```

If you now try to measure nsig, you will see that its errors blow up because the sigScaler is anti-correlated to nsig, because the fit essentially solves the equation:

```
nData = nsig * sigScaler * nSig_predicted + nbkg * nBkg_predicted
```

If you don’t constrain the sigScaler, the fit cannot measure either of the two, which is the case of a maximally destructive systematic uncertainty. But if you do constrain the sigScaler, you can measure nsig with an uncertainty, and this uncertainty will be larger than the statistical one alone.
You can play with the strength of the constraint to make sure that your systematic uncertainty indeed does what you want; you should also be able to set the sigScaler constant and equal to one, which should reproduce exactly the statistical uncertainty.
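To see the mechanism without running the full RooFit model, here is a pure-Python toy with made-up numbers (a Gaussian approximation of the yield measurement, not the actual fit): a yield measured as 10 +- 2 (stat) is multiplied by a scale factor `f` constrained to 1 +- 0.1. Profiling over the constrained scaler inflates the 1-sigma interval; fixing `f = 1` recovers the statistical uncertainty:

```python
# Pure-Python toy with made-up numbers, mimicking the sigScaler idea:
# measured yield y = 10 +- 2 (stat), scale factor f constrained to 1 +- 0.1.
y, sigma = 10.0, 2.0
f0, sigma_f = 1.0, 0.1

def nll(nsig, f):
    # data term (Gaussian approximation) + constraint term on the scaler
    return (y - nsig * f)**2 / (2 * sigma**2) + (f - f0)**2 / (2 * sigma_f**2)

def profiled_nll(nsig):
    # minimise over the scaler on a grid, f in [0.5, 1.5]
    return min(nll(nsig, 0.5 + i / 2000.0) for i in range(2001))

def uncertainty(profile):
    # scan upwards from the best-fit value until the NLL rises by 0.5
    n, base = 10.0, profile(10.0)
    while profile(n) - base < 0.5:
        n += 0.01
    return n - 10.0

u_stat = uncertainty(lambda n: nll(n, 1.0))  # scaler fixed -> stat only
u_total = uncertainty(profiled_nll)          # scaler constrained, profiled
print(u_stat, u_total)  # the profiled uncertainty is larger
```

Note that the exact profiled interval comes out slightly larger than the naive quadrature estimate `sqrt(2^2 + 1^2) ~ 2.24`, because the scaler enters multiplicatively rather than additively.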
