Follow up on previous "Manipulate the likelihood" post

Dear Experts,

I’m following up on the answer given by @StephanH in the post Manipulate the likelihood.

To include a systematic constraint on the signal yield, nsig, the proposed solution was:

sigScaler = ROOT.RooRealVar("sigScaler", "sigScaler", 1, 0, 2)

nSig_scaled = ROOT.RooProduct("nSig_scaled", "nSig_scaled", ROOT.RooArgList(nsig, sigScaler))

model = ROOT.RooAddPdf("model", "model", ROOT.RooArgList(sig, bkg), ROOT.RooArgList(nSig_scaled, nBkg))

pdf_gaus_const = ROOT.RooGaussian("pdf_gaus_const", "pdf_gaus_const", sigScaler, ROOT.RooFit.RooConst(1), ROOT.RooFit.RooConst(0.1))

pdf_with_const = ROOT.RooProdPdf("pdf_with_const", "pdf_with_const", model, pdf_gaus_const)

In the context of the original problem, the signal yield nsig was 10 ± 2 (stat.) events, with an additional systematic uncertainty of 1 event (10%) that had to be included; the above was the proposed solution. It’s not clear whether @Kecksdose accepted the answer, as there was no follow-up. My question is about the parameterization of the constraint Gaussian. Specifically:

  1. For sigScaler = ROOT.RooRealVar("sigScaler", "sigScaler", 1, 0, 2), is the initial value given by the additional 1-event systematic, and are the min and max values given by the original statistical error? If yes, how is sigScaler parameterized when using asymmetric errors? If not, what is the general method of parameterizing this variable?

  2. For pdf_gaus_const = ROOT.RooGaussian("pdf_gaus_const", "pdf_gaus_const", sigScaler, ROOT.RooFit.RooConst(1), ROOT.RooFit.RooConst(0.1)), is the constant mean the given systematic error, and the constant width the error as a fraction of nsig? If so, is this a good general prescription for including constraints on nsig in this way?

Thanks.

Hi @Lepton86,

The sigScaler scales the number of signal events (it’s multiplicative). With (..., 1, 0, 2), the scale factor is initialised to 1 and allowed to vary within [0, 2]. The total signal yield is then
nSig_scaled = sigScaler * nsig
Initialised to 1, this is simply the nominal model with no change applied. If the fitter pulls sigScaler above or below 1, the systematic uncertainty starts to affect the fit.
Here, the min and max values are “hard limits”, i.e. the variable is simply not allowed to leave [0, 2] (in particular, it cannot go negative). These limits are not constraints.
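
To see the multiplicative behaviour and the hard limits concretely, here is a minimal PyROOT sketch (nsig = 10 is taken from the original problem; its range [0, 100] is just an illustrative choice):

import ROOT

nsig = ROOT.RooRealVar("nsig", "nsig", 10, 0, 100)
sigScaler = ROOT.RooRealVar("sigScaler", "sigScaler", 1, 0, 2)
nSig_scaled = ROOT.RooProduct("nSig_scaled", "nSig_scaled", ROOT.RooArgList(nsig, sigScaler))

print(nSig_scaled.getVal())  # 10.0: scaler at its nominal value of 1
sigScaler.setVal(1.1)        # a +10% shift of the signal yield
print(nSig_scaled.getVal())  # 11.0
sigScaler.setVal(-0.5)       # outside [0, 2]: RooFit warns and clips to the hard limit 0
print(nSig_scaled.getVal())  # 0.0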

Indeed, the “constraint PDF” is the piece that implements the systematic uncertainty. The constant mean of 1 is not the error; the sigma of 0.1 is. If you have a likelihood like this:

L_constrained(x | parameters) = L(x | parameters) * Gauss(param_i | 1, 0.1)

this means:
Constrain the parameter param_i (which is in the set of parameters of the main likelihood, here sigScaler) to be close to 1 with a 1-sigma uncertainty of 0.1. You are putting extra information into the fit, namely that sigScaler should be 1 ± 0.1 at 68% confidence level.
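
Putting the pieces together, here is a minimal sketch of the constrained fit, following the pattern of the RooFit constraints tutorial (rf604). The observable x and the Gaussian-signal / exponential-background shapes below are illustrative assumptions, not from the original thread:

import ROOT

# Observable and toy component shapes (illustrative only)
x = ROOT.RooRealVar("x", "x", 0, 10)
mean = ROOT.RooRealVar("mean", "mean", 5, 0, 10)
width = ROOT.RooRealVar("width", "width", 0.5, 0.1, 2)
sig = ROOT.RooGaussian("sig", "sig", x, mean, width)
tau = ROOT.RooRealVar("tau", "tau", -0.3, -2.0, 0.0)
bkg = ROOT.RooExponential("bkg", "bkg", x, tau)

# Yields, scaler, and constrained model, as in the proposed solution
nsig = ROOT.RooRealVar("nsig", "nsig", 10, 0, 100)
nBkg = ROOT.RooRealVar("nBkg", "nBkg", 100, 0, 1000)
sigScaler = ROOT.RooRealVar("sigScaler", "sigScaler", 1, 0, 2)
nSig_scaled = ROOT.RooProduct("nSig_scaled", "nSig_scaled", ROOT.RooArgList(nsig, sigScaler))
model = ROOT.RooAddPdf("model", "model", ROOT.RooArgList(sig, bkg), ROOT.RooArgList(nSig_scaled, nBkg))
pdf_gaus_const = ROOT.RooGaussian("pdf_gaus_const", "pdf_gaus_const", sigScaler, ROOT.RooFit.RooConst(1), ROOT.RooFit.RooConst(0.1))
pdf_with_const = ROOT.RooProdPdf("pdf_with_const", "pdf_with_const", model, pdf_gaus_const)

# Telling fitTo which parameters are constrained makes the Gaussian term
# enter the likelihood as Gauss(sigScaler | 1, 0.1)
data = model.generate(ROOT.RooArgSet(x))
pdf_with_const.fitTo(data, ROOT.RooFit.Constrain(ROOT.RooArgSet(sigScaler)))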
If you need asymmetric errors, you have to replace the Gaussian PDF in the constraint term with an asymmetric one. A Poisson constraint for “counting parameters” or a log-normal constraint for normalisations come to mind.
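
For the log-normal case, a sketch (an assumption about how one might write it, not from the original answer): RooLognormal takes a median and a multiplicative width k, so k = 1.1 corresponds to roughly a 10% uncertainty that is asymmetric additively and keeps sigScaler positive, which also makes the hard lower limit of 0 harmless.

# Log-normal constraint with median 1 and multiplicative width k = 1.1 (~10%),
# used in place of pdf_gaus_const when building the RooProdPdf above
pdf_ln_const = ROOT.RooLognormal("pdf_ln_const", "pdf_ln_const", sigScaler, ROOT.RooFit.RooConst(1), ROOT.RooFit.RooConst(1.1))
pdf_with_const = ROOT.RooProdPdf("pdf_with_const", "pdf_with_const", model, pdf_ln_const)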

Hi @StephanH,

Thanks for the quick reply. It makes sense and clarifies everything.
