Hi @Lepton86,
The sigScaler scales the number of signal events (it's multiplicative). When you use (... 1, 0, 2), the scale factor is initialised to 1 and allowed to vary within [0, 2]. The total signal then is
nSig_scaled = sigScaler * nsig
When it stays at 1, that is simply the nominal model, i.e. nothing changes. If the fitter decides that sigScaler has to move to a value higher or lower than 1, the systematic uncertainty starts to have an effect on the fit.
Here, the min and max values are “hard limits”, i.e. the variable is simply not allowed to leave that range (in particular, it can never go negative). These are not constraints.
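To make this concrete, here is a minimal RooFit sketch of such a setup in C++ (the Gaussian-plus-exponential toy model and the names sigPdf, bkgPdf, nsig, nbkg are purely illustrative, not taken from your code):

```cpp
#include "RooRealVar.h"
#include "RooGaussian.h"
#include "RooExponential.h"
#include "RooFormulaVar.h"
#include "RooAddPdf.h"
#include "RooArgList.h"

// Observable and a toy signal + background model (purely illustrative)
RooRealVar x("x", "x", 0., 10.);
RooRealVar mean("mean", "signal mean", 5.);
RooRealVar width("width", "signal width", 0.5);
RooGaussian sigPdf("sigPdf", "signal", x, mean, width);
RooRealVar slope("slope", "background slope", -0.3);
RooExponential bkgPdf("bkgPdf", "background", x, slope);

// The scale factor: initialised to 1, hard limits [0, 2] -- the "(... 1, 0, 2)" above
RooRealVar sigScaler("sigScaler", "signal scale factor", 1., 0., 2.);
RooRealVar nsig("nsig", "nominal signal yield", 1000.);
RooRealVar nbkg("nbkg", "background yield", 5000., 0., 20000.);

// Effective signal yield: nSig_scaled = sigScaler * nsig
RooFormulaVar nSigScaled("nSigScaled", "@0 * @1", RooArgList(sigScaler, nsig));

// Extended model in which the scaled yield multiplies the signal PDF
RooAddPdf model("model", "sig+bkg", RooArgList(sigPdf, bkgPdf),
                RooArgList(nSigScaled, nbkg));
```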
Indeed, the “constraint PDF” is the one that implements the “systematic uncertainty”. The constant mean of 1 is not the error; the sigma of 0.1 is. If you have a likelihood like this:
L_constraint(x | parameters) = L(x | parameters) * Gauss(param_i | 1, 0.1)
this means: constrain the parameter param_i (which is in the set of parameters of the main likelihood) to be close to 1 with a 1-sigma uncertainty of 0.1. In other words, you are putting extra information into the fit, namely that sigScaler should be 1 +/- 0.1 at 68% confidence level.
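In code, the usual pattern is to multiply the constraint PDF into the model and tell the fit about it explicitly. A minimal sketch, continuing the names from the block above (data stands for whatever dataset you are actually fitting):

```cpp
#include "RooGaussian.h"
#include "RooProdPdf.h"
#include "RooArgList.h"
#include "RooArgSet.h"
#include "RooGlobalFunc.h"   // RooFit::RooConst, RooFit::Constrain

// Gaussian constraint term: sigScaler should be 1 +/- 0.1
RooGaussian constraint("constraint", "constraint on sigScaler",
                       sigScaler, RooFit::RooConst(1.0), RooFit::RooConst(0.1));

// Multiply the constraint into the model so it enters the likelihood
RooProdPdf constrainedModel("constrainedModel", "model * constraint",
                            RooArgList(model, constraint));

// Tell the fit which parameters carry constraint terms
constrainedModel.fitTo(data, RooFit::Constrain(RooArgSet(sigScaler)));
```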
If you need asymmetric errors, you have to replace the Gaussian PDF in the constraint term with an asymmetric one. A Poisson constraint for “counting parameters” or a log-normal constraint for normalisations come to mind.
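For the log-normal case, RooFit provides a RooLognormal PDF that can take the place of the Gaussian in the constraint term. A rough sketch (RooLognormal uses a shape parameter k = exp(sigma), so exp(0.1) corresponds to roughly a 10% relative uncertainty on the normalisation):

```cpp
#include <cmath>
#include "RooLognormal.h"

// Log-normal constraint on the normalisation: median 1, shape k = exp(0.1)
RooLognormal lnConstraint("lnConstraint", "log-normal constraint on sigScaler",
                          sigScaler, RooFit::RooConst(1.0),
                          RooFit::RooConst(std::exp(0.1)));
```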