Hi Experts,
There are two approaches, model1.C and model2.C, to parameterize a dataset:
- model1: with 3 parameters: signal1 counts, signal2 counts, and bkg counts
- model2: with 2 parameters: a fraction fB = sig1/(sig1+sig2) and a fraction fSig = total_sig/(total_sig+bkg) (see the sketch below)
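To make the comparison concrete, here is a minimal sketch of the two parameterizations, assuming a RooFit template fit with two signal templates and one background template (the pdf and variable names are placeholders, not the actual content of model1.C / model2.C):

```cpp
#include "RooRealVar.h"
#include "RooAbsPdf.h"
#include "RooAddPdf.h"
#include "RooArgList.h"

void sketchModels(RooAbsPdf& pdfSig1, RooAbsPdf& pdfSig2, RooAbsPdf& pdfBkg)
{
   // model1: three extended yields (counts); giving RooAddPdf as many
   // coefficients as pdfs makes it an extended sum.
   RooRealVar nSig1("nSig1", "signal1 counts", 1000., 0., 1.e6);
   RooRealVar nSig2("nSig2", "signal2 counts", 1000., 0., 1.e6);
   RooRealVar nBkg ("nBkg",  "background counts", 1000., 0., 1.e6);
   RooAddPdf model1("model1", "extended sum of templates",
                    RooArgList(pdfSig1, pdfSig2, pdfBkg),
                    RooArgList(nSig1, nSig2, nBkg));

   // model2: two fractions only, no overall normalization.
   RooRealVar fB  ("fB",   "sig1/(sig1+sig2)",          0.5, 0., 1.);
   RooRealVar fSig("fSig", "total_sig/(total_sig+bkg)", 0.5, 0., 1.);
   RooAddPdf sigMix("sigMix", "fB*sig1 + (1-fB)*sig2",     pdfSig1, pdfSig2, fB);
   RooAddPdf model2("model2", "fSig*sigMix + (1-fSig)*bkg", sigMix,  pdfBkg,  fSig);
}
```

In this sketch model1 is fitted as an extended likelihood (it also determines the total number of events), while model2 only fits the shape fractions.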
I prefer to use model1, because from its fitted yields I can also derive the model2 results, i.e. the fractions (see the second sketch below).
And both results should be similar (or maybe I am wrong about that).
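For completeness, this is roughly how I derive the fractions and their uncertainties from the model1 yields (again just a sketch with illustrative names; `fitResult` would come from `fitTo(..., RooFit::Save())`):

```cpp
#include <iostream>
#include "RooRealVar.h"
#include "RooFormulaVar.h"
#include "RooArgList.h"
#include "RooFitResult.h"

void fractionsFromYields(RooRealVar& nSig1, RooRealVar& nSig2, RooRealVar& nBkg,
                         const RooFitResult& fitResult)
{
   // Derived fractions, expressed in terms of the fitted yields.
   RooFormulaVar fB  ("fB",   "@0/(@0+@1)",         RooArgList(nSig1, nSig2));
   RooFormulaVar fSig("fSig", "(@0+@1)/(@0+@1+@2)", RooArgList(nSig1, nSig2, nBkg));

   // getPropagatedError() propagates the yield errors (including their
   // correlations from the fit covariance matrix) to the derived fractions.
   std::cout << "fB   = " << fB.getVal()
             << " +/- " << fB.getPropagatedError(fitResult) << "\n";
   std::cout << "fSig = " << fSig.getVal()
             << " +/- " << fSig.getPropagatedError(fitResult) << "\n";
}
```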
But when I run both of them on different datasets:
For dataset #1 → templates.root
- model1 gives a 3x smaller statistical error on fB than model2, and the mean fB from the two models also differs. In addition, the errors on fSig are different…
For dataset #2 → templatesDummy.root
- results from both models are similar.
I don’t understand this behavior.
Could someone please look into this?
You can find the dummy data and the macros (ready to run) on CERNBox: Link
Thank you very much!