I’m trying to run StandardHypoTestInvDemo.C on my own input, but it gets stuck at some point and I can’t figure out why. The MWE is in this file: mwe.tgz (113.0 KB). One needs to download it:
wget https://root-forum.cern.ch/uploads/short-url/efiQbTzrwGhkf3NDEQYjuNpD6j3.tgz -O mwe.tgz
untar it:
tar -xzvf mwe.tgz
and run:
root -l -b -q 'StandardHypoTestInvDemo.C(1)'
In ROOT 6.14/04 it gets stuck right after the line
INFO:InputArguments -- Using a ToyMCSampler. Now configuring for Null.
In ROOT 6.20/06 it does run to completion, but the results are so bad I can’t make heads or tails of them.
Also, I have a different input file which, when fed to StandardHypoTestInvDemo.C, produces pretty sane results even in 6.14/04. I can’t understand why one input works but the other does not. To test this second file (the one yielding good results), one just comments out line #275 of the macro,
const char * infile = "input/Output_combined_ABCD_model_1.root", // gets stuck
and uncomments the next one, #276:
const char * infile = "input/Output_combined_ABCD_model_2.root", // progresses nicely
Any help with this will be appreciated!
I will try to take a look, but the MWE is not really “minimal”: it’s 1100+ LOC. Could you maybe trim it down to 100 lines or so?
Also, unfortunately it’s hard for me to judge what qualifies as good results and what qualifies as bad results.
I think @moneta is the original author of
StandardHypoTestInvDemo, he might have an idea.
thanks! Meanwhile I’ve figured out that if I set one input parameter of the model to zero and change another to a ridiculous value, the job runs and produces sane results. It’s not clear to me why these should be zeroed/changed (it’s only a technical workaround; from the physics point of view it makes no sense to touch them), or how to set them to the values I actually want while still being able to run the job. It’s most likely a hist2workspace problem, actually, but its log is extremely long and I have not found the issue in it yet.
StandardHypoTestInvDemo.C inside the MWE tarball is pretty standard; I’m afraid I can’t trim it down any more.
What do you mean by this exactly?
It is possible that setting some parameters constant can help, or starting from different input values.
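For what it’s worth, a minimal sketch of what “setting a parameter constant, or starting from a different input value” can look like on a HistFactory workspace. It must be run inside a ROOT session, and the workspace name ("combined") and the nuisance-parameter name ("alpha_syst1") are assumptions, not taken from the MWE:

```cpp
// Sketch only: requires a ROOT/RooFit environment; run as a ROOT macro.
// "combined" and "alpha_syst1" are hypothetical placeholder names.
TFile *f = TFile::Open("input/Output_combined_ABCD_model_1.root");
RooWorkspace *w = (RooWorkspace *) f->Get("combined");

RooRealVar *np = w->var("alpha_syst1"); // hypothetical nuisance parameter
np->setVal(0.0);                        // try a different starting value...
np->setConstant(true);                  // ...or freeze the parameter entirely
```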
I mean, at first my job just got stuck at some point, as described above. Then I realized that if I zero out two of the input parameters (out of a couple of dozen) of my model (yes, these two were already constants), the job runs but the results are insane: the observed limit on the signal strength is ~0.2, while it should not go below 2.7. Finally, after some trial and error, I figured out that if I inflate a third parameter by a crazy 1300%, almost all limits get greater than 3. I was wondering what could cause this effect. What should I look for in the log of the
hist2workspace job (which produces the
Output_combined_ABCD_model.root file)? Or maybe what should I look for in
Output_combined_ABCD_model.root itself to figure out what is wrong?
– I believe this is what I did when I zeroed/changed some values, but I’m very reluctant to keep doing so: all these values come from a real analysis, and it does not make much sense to me to discard what we got in the analysis and replace it with whatever happens to make
hist2workspace or
StandardHypoTestInvDemo.C (or both) happy.
It looks to me like there are some issues in the data fitting of your model. I see errors from the luminosity constraint, which often evaluates to zero. This is strange. Can you try adding bounds on the
Lumi parameter, something like 10 times its error?
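A hedged sketch of what that bound could look like, again to be run inside a ROOT session; the workspace name "combined" follows the HistFactory default, and the width value is a made-up placeholder you would replace with the actual lumi uncertainty of your model:

```cpp
// Sketch only: requires a ROOT/RooFit environment; run as a ROOT macro.
TFile *f = TFile::Open("input/Output_combined_ABCD_model_1.root");
RooWorkspace *w = (RooWorkspace *) f->Get("combined");

RooRealVar *lumi  = w->var("Lumi");
double      sigma = 0.05;            // hypothetical: your model's lumi uncertainty
double      nom   = lumi->getVal();
lumi->setRange(nom - 10 * sigma, nom + 10 * sigma); // bound at +/- 10 sigma
```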
I don’t get good results for either of the models in the tar file, and the reason is that the data profile-likelihood values are much larger than what you get from the toys.
Using the asymptotic calculator shows the effect well: there you can compute a limit, but you see that both CL(s+b) and CL(b) are very small for every mu value. This shows that your data agree with neither the S+B model nor the B-only model for any mu value. Something in the model is therefore not correct.
thanks for looking into this. I tried inflating the lumi uncertainty 10-fold, but the result is pretty much the same. I’ll try to figure out what’s wrong with my model then.
this is understood now. Indeed, I made several stupid mistakes when building the model. Now that I have fixed everything, the jobs run fine.