If I created an NLL object with pdf.createNLL() and used the NumCPU() option, is there a way to see this information after the fact?
For example, I want to load a saved NLL from a ROOT file, and I want to know whether I specified 1 core or 2. I know this information exists because it’s built into the NLL, but I can’t find a way to access it. I tried using “nCPU”, but that didn’t work.
My second question: if there is a way to see it, is there also a way to change it?
No, there is no way to see this information after the fact.
By the way, it’s not a good idea to save NLLs to a ROOT file. While it was technically possible (more by accident because nothing prevented it), it was never the intended workflow and it’s quite unreliable. What one should do to save models in RooFit is to import both the pdf and the dataset in a RooWorkspace, and then in your analysis script you create the NLL on the fly using RooAbsPdf::createNLL(). Is there a reason why this standard workflow is not possible for you? Starting with ROOT 6.28, we explicitly disallow the writing of the RooNLLVar to ROOT files.
As you might have guessed by now, there is also no way to change the NumCPU() option after the fact. But this should not be a problem, since you have to re-create the NLL on the fly anyway because saving it to a ROOT file is not supported.
Let me know if you have any followup questions or comments!
The workflow you describe is not impossible for me, and it’s what I was doing until recently. I basically broke it up into one part where I specify a saved model and a dataset and save an NLL, and then, when I’m ready, I open the NLL and fit it. I use PyROOT, and the memory behavior is often kind of a mystery, so doing it this way helped ensure I wasn’t carrying around a bunch of superfluous stuff.
If it’s that strongly advised against, I can switch back. Can I ask why it’s unreliable?
Ah, I see where you’re coming from then!
What people usually do to make sure they are not carrying around a bunch of superfluous stuff is to import the model (RooAbsPdf) and the datasets (RooAbsData) into a RooWorkspace, save the workspace to a file, and read it back in a fresh script. Is that also okay for you?
I can’t tell you exactly why saving the RooNLLVar is unreliable; there are probably many reasons why it doesn’t work. If you save it and read it back, you don’t even get the same fit result, as I documented in this PR description with a reproducer code. But we decided that it would be a waste of time to fix an accidentally supported workflow that was never planned in RooFit. Another reason is that each RooNLLVar owns a private clone of the dataset, so if we encouraged users to save RooNLLVars to workspaces, the memory footprint of the files could become unreasonably large.
I was saving the NLL, the pdf, and the data used to generate the pdf all in a workspace saved to a file, and unloading them accordingly. It’s very easy to switch to saving just the data and the pdf, and generating the NLL at the time I would normally load it in. If the data is very large, I can delete it after creating the NLL.
Thanks for the help!
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.