Re-scaling signal normalization in a RooWorkspace

I need to re-scale the signal histograms (nominal and systematic variations) in a RooWorkspace in order to change their normalization.

I saved the histograms and HistFactory Measurement object in a file, so I tried re-scaling the histograms in the file, then rerunning CollectHistograms and converting the Measurement to a new workspace. When I produce limits from this workspace, I get the same mu limits as before re-scaling, so this method doesn’t seem to work.

How did you try to rescale the histograms?

I grab each one, call h->Scale(...), write the file, then grab the Measurement, run CollectHistograms, and make a new workspace.

That’s interesting. It sounds like there is some automatic scaling going on. Is there some option in the workflow you use that normalises everything?

I would need an example with one or two dummy histograms to see what’s going on.

I have SetNormalizeByTheory(false), which I believe disables normalizing everything.
I’ve shared the workspace and measurement file with you on CERNBox here: (id:257529).

Thanks for the files. What code are you running to create and read the workspaces?

They’re created and read with some analysis-specific code built on HistFactory. I think the workspace should be usable with any code that works with RooWorkspaces though.

The code is here:
resolved-limits makes the workspace, run-limits is used to set limits.


Ok, two things:

  1. The code is not accessible, but that’s probably not necessary, because
  2. I wanted to ask for the code that creates the workspace and not for the one that creates the histograms. More like an example of what you do with the file. Please understand that I would have to write everything from scratch for every user with a problem if users didn’t include examples that we can run.

Would you also have an example of a file with the scaled histograms?

I just had an idea. In case you rescale the histograms in memory without actually writing a new file:
CollectHistograms reads the histograms directly from the file. It doesn’t matter what you do to the histograms in memory.
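A minimal illustration of that pitfall, using plain Python pickling as a stand-in for a ROOT file (the file name and bin values here are made up): mutating the in-memory copy leaves the on-disk histogram untouched until it is explicitly written back, which is the analogue of writing the scaled TH1 before running CollectHistograms.

```python
import os
import pickle
import tempfile

# A stand-in "histogram" stored on disk.
path = os.path.join(tempfile.mkdtemp(), "hist.pkl")
with open(path, "wb") as f:
    pickle.dump([1.0, 2.0, 3.0], f)

# Read it back and scale only the in-memory copy.
with open(path, "rb") as f:
    hist = pickle.load(f)
hist = [100.0 * b for b in hist]

# The file is unchanged: anything that re-reads it sees the old values.
with open(path, "rb") as f:
    print(pickle.load(f))  # still [1.0, 2.0, 3.0]

# Only an explicit re-write persists the scaling.
with open(path, "wb") as f:
    pickle.dump(hist, f)
```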

The scaling code is below. I do write the file before running CollectHistograms. I even close and reopen the file, in case the Measurement object had already loaded the histograms. The meas_scalar_300.root file in that directory does contain the scaled scalar histograms.

#!/usr/bin/env python
from sys import argv
from rootpy.io import root_open, file
from rootpy.stats.histfactory import make_workspace
from rich import print

with root_open(argv[1], "update") as f:
    for year_dir in f:
        if not isinstance(year_dir, file.Directory):
            continue
        for d in year_dir:
            # only touch the signal ('scal...') directories
            if not d.GetName().startswith('scal'):
                continue
            for hist in d:
                hist.Scale(1.0)
with root_open(argv[1]) as f:
    print('\n[blue]Making Workspace[/]')
    ws = make_workspace(f['Measurement'], silence=True)
    ws.writeToFile(argv[1].replace('meas', 'wkspace'))

Edited to undo a change I made while testing.

Well, is that the actual code you ran?
hist.Scale(1.0) doesn’t scale.

Ah, I see the edit… :slight_smile:

I don’t know what rootpy does, but have you verified that the histograms come out scaled? Maybe they need to be explicitly written?

Looks like that isn’t the problem (it also looks like the file I shared had the unscaled workspace). I re-ran the script, verified that the histograms are scaled (peak at around 20 in the new file vs 0.06 in the old), but the mu limit is still roughly the same as with the unscaled workspace.

Yes, indeed. I ran some tests with scaled histograms, and they get retrieved as you would expect.

Is it roughly the same or exactly the same?
Does it change if you scale more?

I’m doing a coarse log scan to get quick results. To within that resolution, the result is the same.
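For orientation, here is a back-of-the-envelope expectation (a pure counting-experiment picture, not the full shape fit; the numbers are illustrative): the data constrain the product of mu and the signal normalization, so scaling the signal histograms by a factor k should shrink the upper limit on mu by roughly 1/k, far more than a coarse log scan could miss.

```python
# Naive counting-experiment estimate: the fit constrains mu * k * s_nominal,
# so the mu limit alone should scale like 1/k when the signal histograms
# are scaled by k. Helper name and inputs are illustrative only.
def expected_mu_limit(old_limit, k):
    return old_limit / k

# e.g. a peak moving from ~0.06 to ~20 corresponds to k of a few hundred
k = 20.0 / 0.06
print(expected_mu_limit(1.0, k))  # roughly 0.003
```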

Ok, problem found:

It’s super nasty, but a fix is almost ready.

It’s fixed. You can test starting from tomorrow with one of the nightlies:

Thanks. That should be available in LCG dev3, right (I believe dev3 is built from ROOT nightlies)?

Yes, it should be, but the nightlies need to complete without errors, and get installed into cvmfs.
