This is exactly what you want!!
-
Let’s say that you have a parameter that you don’t know anything about. You let it float completely freely (no constraints), and it will have maximal impact on your errors (it will make them bigger).
This is a fit with unconstrained (super pessimistic) systematic uncertainties. -
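To see why a completely free parameter blows up the errors, here is a deliberately extreme pure-Python toy (all names and numbers are made up, this is not your RooFit model): the data constrain only the sum `mu + theta`, so profiling over a freely floating `theta` leaves `mu` completely undetermined.

```python
# Extreme toy for the free-floating case (hypothetical, not RooFit):
# one measurement x = mu + theta with resolution sigma_stat. If the
# nuisance parameter theta floats completely freely, any shift in mu
# can be absorbed by theta, so the profiled NLL is flat in mu and the
# error on mu is maximal.
sigma_stat = 1.0
x_obs = 0.0

def nll(mu, theta):
    return 0.5 * ((x_obs - mu - theta) / sigma_stat) ** 2

def profiled_nll(mu):
    # the best theta for a given mu is x_obs - mu, which zeroes the residual
    return nll(mu, x_obs - mu)

# flat profile: the data alone put no constraint on mu at all
print([profiled_nll(mu) for mu in (-2.0, 0.0, 2.0)])
```

In a real fit the data usually constrain the nuisance parameter at least partially, so the errors are large rather than infinite, but the mechanism is the same.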
Now, you realise that you actually know something about the parameter, namely that it should be centred around zero with a sigma of 0.5. This knowledge could e.g. come from a paper where somebody measured this parameter.
Now that you know this, you can add this knowledge to your likelihood model by multiplying with `Gauss(parameter | 0, 0.5)`. Your model becomes more powerful, because it can use the extra knowledge you just provided to reduce the uncertainties caused by this parameter. The likelihood parabola gets narrower, and your uncertainties fall.
Multiplying with a Gaussian like this is equivalent to saying: “I know that this parameter must be at 0 +/- 0.5 with 68% confidence.”
You do a fit with a constrained systematic uncertainty. -
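As a pure-Python sanity check of why the parabola narrows (a hypothetical toy, not your RooFit model): multiplying the likelihood by `Gauss(theta | 0, 0.5)` is the same as adding a quadratic penalty `0.5 * (theta / 0.5)**2` to the NLL, and profiling over `theta` then gives a parabola of finite width.

```python
import math

# Toy (hypothetical, not RooFit): one measurement x = mu + theta with
# statistical resolution sigma_stat, plus the external knowledge
# Gauss(theta | 0, sigma_sys) multiplied into the likelihood.
sigma_stat = 1.0
sigma_sys = 0.5
x_obs = 0.0

def nll(mu, theta):
    data_term = 0.5 * ((x_obs - mu - theta) / sigma_stat) ** 2
    constraint = 0.5 * (theta / sigma_sys) ** 2  # -log Gauss(theta | 0, 0.5)
    return data_term + constraint

def profiled_nll(mu):
    # analytic minimum over theta for this quadratic model
    theta_hat = (x_obs - mu) * sigma_sys**2 / (sigma_stat**2 + sigma_sys**2)
    return nll(mu, theta_hat)

# the profiled parabola now has a finite width: the constrained uncertainty,
# read off where Delta NLL = 0.5
sigma_mu = math.sqrt(sigma_stat**2 + sigma_sys**2)
assert abs(profiled_nll(sigma_mu) - 0.5) < 1e-9
print(f"constrained uncertainty on mu: {sigma_mu:.3f}")
```

Compare with the fully free case: there the profiled NLL was flat (unbounded error on `mu`); with the constraint it is a finite parabola.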
What’s missing is the data-only (statistical) uncertainty. For this, run the fit, then set all systematics-related parameters to constant, and plot the NLL again.
Now, the only source of uncertainty in your model is due to data statistics because all the systematics-related parameters have been “disabled”.
Your plot looks good, but you should label the first curve “Without external constraint”. So red would be the maximal systematic uncertainty, blue the constrained systematic uncertainty, and the data-only uncertainty would be obtained with something like
# freeze all systematics-related parameters at their best-fit values
par1.setConstant()
par2.setConstant()
...
...
# plot the remaining stat-only NLL curve, shifted so its minimum is at zero
nll.plotOn(nll_frame, ROOT.RooFit.ShiftToZero(), ROOT.RooFit.Name('stat only'), ROOT.RooFit.LineColor(ROOT.kGreen))
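As a back-of-the-envelope check of what `setConstant()` does to the curve (a hypothetical pure-Python toy, not your RooFit model): once the nuisance parameter is frozen at its best-fit value, only the statistical term is left in the NLL, and the parabola width is the stat-only uncertainty.

```python
# Toy for the stat-only case (hypothetical, not RooFit): with the nuisance
# parameter frozen at its best-fit value (the setConstant() analogue),
# the NLL reduces to the statistical term alone.
sigma_stat = 1.0
x_obs = 0.0
theta_fixed = 0.0  # best-fit value of the nuisance parameter in this toy

def nll_stat_only(mu):
    return 0.5 * ((x_obs - mu - theta_fixed) / sigma_stat) ** 2

# Delta NLL = 0.5 is reached at mu = sigma_stat: the stat-only uncertainty,
# which is narrower than either of the systematics curves
assert abs(nll_stat_only(sigma_stat) - 0.5) < 1e-9
print(f"stat-only uncertainty on mu: {sigma_stat}")
```

This is the narrowest of the three parabolas, which is exactly the ordering you should see for the red, blue, and green curves.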