Hi,
I am doing an extended likelihood fit of a PDF to a 1D histogram (coding in Python). The main parts of my code are:
import math
import ROOT as r

d0Sigma = r.RooRealVar('d0Sigma', 'Candidate minimum d0/sigma', -20, 20)
# Get histogram
h = file.Get('Data')
dataHist = r.RooDataHist('data', '', r.RooArgList(d0Sigma), h)
# Define PDF
finalShape = r.RooAddPdf('finalShape', …)
# Restrict range of fit
d0Sigma.setRange('fitRange', -20, 10)
# Do fit
fr = finalShape.fitTo(dataHist, r.RooFit.Extended(), r.RooFit.Range('fitRange'), r.RooFit.Save())
# Integrate PDF
argset = r.RooArgSet(d0Sigma)
integral = finalShape.createIntegral(argset, r.RooFit.NormSet(argset), r.RooFit.Range('fitRange'))
# Get uncertainty on PDF integral
integralError = integral.getPropagatedError(fr)
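For reference, if I drop the Range option the same integral comes back as 1, which confirms the normalisation:

# Cross-check: the integral over the full normalisation range should be exactly 1
fullNorm = finalShape.createIntegral(argset, r.RooFit.NormSet(argset))
print(fullNorm.getVal())  # expect 1.0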
The integral is evaluated with the PDF normalised to unity in the range (-20,20). However, I actually want it normalised to the number of entries in the histogram. Is there any pretty way to do this? The only solution I can find is to add the following code:
# Get number of entries in histogram
numData = dataHist.sum(False)
# Scale integral by this number
scaledIntegral = numData * integral.getVal()
# To get the uncertainty on the scaled integral, add in quadrature the Poisson
# uncertainty on numData and the uncertainty on the unscaled integral.
numDataError = math.sqrt(numData)
scaledIntegralError = math.sqrt((numData * integralError)**2 + (integral.getVal() * numDataError)**2)
Which is rather ugly.
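One alternative I have been wondering about (I am not sure it is the recommended approach) is to let RooFit do the propagation itself, by multiplying the fractional integral with the sum of the fitted yields, which for an extended fit should come out close to the number of entries anyway. A minimal sketch, where nSig and nBkg are placeholders for whatever yield parameters the RooAddPdf actually uses:

# Total fitted yield; its fit error already includes the Poisson fluctuation
nTotal = r.RooAddition('nTotal', 'total fitted yield', r.RooArgList(nSig, nBkg))
# Fitted yield in the fit range = total yield times PDF fraction in 'fitRange'
yieldInRange = r.RooProduct('yieldInRange', '', r.RooArgList(nTotal, integral))
print(yieldInRange.getVal(), yieldInRange.getPropagatedError(fr))

Since nSig and nBkg are fit parameters, getPropagatedError(fr) should then also pick up their correlations with the shape parameters. Would that be equivalent to (or better than) the hand-written propagation above?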
Thanks,
Ian Tomalin