# Floating parameter for the scale of x

ROOT-6.30/04

Hi,
I have a PDF model that differs from the data in the units of x by an unknown scale factor. Is there any way to define a variable for the scaling of x and fit the PDF to the data to determine its value?

The code below demonstrates my data and my PDF. My goal is to smear the simulation first, then stretch it by a scaling factor (`scaled_x`). I need to determine this scaling factor and the smearing parameters by fitting the adjusted simulation to the data. My question is: can I evaluate the smeared histogram PDF (`smeard_hist`) at (`scaled_x`) instead of the original (`x`) in the fit? If yes, how do I implement that?

```python
# Model
import ROOT

x = ROOT.RooRealVar('x', 'x', 0, 0, 500)

mg = ROOT.RooRealVar('mg', 'mg', 0)
sig = ROOT.RooRealVar('sg', 'sg', 10, 5, 15)
gauss = ROOT.RooGaussian('gauss', 'gauss', x, mg, sig)

datahist0 = ROOT.RooDataHist('datahist', 'datahist', ROOT.RooArgList(x), _hists[0])
datapdf0 = ROOT.RooHistPdf('datapdf', 'datapdf', ROOT.RooArgList(x), datahist0)  # unique name, not 'datahist' again
smeard_hist = ROOT.RooFFTConvPdf('test', 'test', x, datapdf0, gauss)

# Data
scale = ROOT.RooRealVar('scale', 'scale', 1.2, 0.5, 1.5)
scaled_x = ROOT.RooProduct('scaled_x', 'scaled_x', scale, x)
data = smeard_hist.generate({scaled_x}, 5000)  # this is what I would like to do
```

Thanks.

Dear Ata,

I don’t fully understand what you are trying to achieve, so bear with me. Are you perhaps after an extended likelihood fit?

Best,
D

Hi,
Yes, I’d like to do a likelihood fit. I asked a similar question in this post, and @moneta kindly provided some hints, but I’d appreciate more detail.

I have 1-dimensional data points in a range of (a-b)[a.u.], where [a.u.] indicates arbitrary units. I also have the expected distribution of the data points from simulations in units of energy, (0-1000)[eV]. What I do know is that if I smear the simulation with a Gaussian and transform the units via `f([a.u.]) = constant * [eV]`, a section of the simulation PDF should fit the data.

My question is about implementing the unit transformation as a variable in the likelihood. Being able to stretch the simulation pdf from `[eV]` to `scale * [eV]`, where `scale` is a parameter of the likelihood, would solve my issue. However, I’m not sure if that operation is possible with the available tools. Sorry if the previous explanation was misleading.
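The stretch itself is just a change of variables: if `p(E)` is the simulated pdf, the stretched pdf in data units is `p_s(x) = p(x/scale) / scale`, where the `1/scale` Jacobian keeps it normalized. A quick numpy check with a toy template (all numbers illustrative):

```python
import numpy as np

# Toy normalized template pdf p(E) on an energy grid.
E = np.linspace(0.0, 1000.0, 2001)
dE = E[1] - E[0]
p = np.exp(-0.5 * ((E - 400.0) / 50.0) ** 2)
p /= p.sum() * dE  # normalize to unit integral

def stretched_pdf(x, scale):
    """Template evaluated in stretched units: p_s(x) = p(x/scale) / scale."""
    return np.interp(x / scale, E, p, left=0.0, right=0.0) / scale

# The 1/scale Jacobian keeps the stretched pdf normalized,
# and the peak moves from 400 to 400 * scale.
x = np.linspace(0.0, 1500.0, 3001)
dx = x[1] - x[0]
norm = stretched_pdf(x, 1.2).sum() * dx
```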

This small prototype code in Python gives the `_cdf` with the mentioned variables needed for a binned likelihood. It worked fine in tests with iminuit. I was wondering if there is a way to reimplement something similar in RooFit with the available tools. I have already started working on custom C++ code for this.

```python
# Base class `distribution` is defined elsewhere in my code.
import numpy as np
from scipy import ndimage

class feff_distribution(distribution):
    def __init__(self, eV_limits, au_limits,
                 path='../FEFF/core_hole.npz',  # or '../FEFF/no_core_hole.npz'
                 **kwargs):
        self.limits = au_limits
        self.edges = np.linspace(*au_limits, 4000, dtype='float64')
        self.binsize = self.edges[1] - self.edges[0]

        # Load the simulated spectrum: energy axis and cross section.
        data = np.load(path)
        self.eV_centers, self.eV_y = data['energy'], data['x_section']
        self.eV_binsize = self.eV_centers[1] - self.eV_centers[0]

    def smear_spectrum(self, eV_spectrum, eV_binsize, eV_resolution):
        # Gaussian smearing; the sigma is given in units of bins.
        return ndimage.gaussian_filter(eV_spectrum, eV_resolution / eV_binsize)

    def inverse_calibration(self, energy, stretch):
        # Transform energies [eV] into the (arbitrary) data units.
        return energy * stretch

    def _cdf(self, x, params):
        eV_resolution, stretch, n_event = params
        eV_smeard_spectrum = self.smear_spectrum(self.eV_y, self.eV_binsize,
                                                 eV_resolution)

        of_centers = self.inverse_calibration(self.eV_centers, stretch)
        uncalibrated_spectrum = np.interp(self.edges, of_centers,
                                          eV_smeard_spectrum)

        cdf = np.cumsum(uncalibrated_spectrum)
        normalized_cdf = cdf * n_event / cdf[-1]
        return np.interp(x, self.edges, normalized_cdf)
```
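For context, the binned likelihood such a `_cdf` feeds is straightforward: expected counts per bin are differences of the count-normalized CDF at the bin edges, combined into a Poisson negative log-likelihood. A minimal numpy sketch, with a toy uniform CDF standing in for the class above:

```python
import numpy as np

def binned_nll(counts, edges, cdf, params):
    """Poisson negative log-likelihood for binned data.

    Expected counts per bin are differences of a count-normalized CDF
    (as returned by a `_cdf`-style method) at the bin edges.
    """
    expected = np.diff(cdf(edges, params))
    expected = np.clip(expected, 1e-12, None)  # guard against log(0)
    # The constant log(n!) term is dropped; it does not depend on params.
    return np.sum(expected - counts * np.log(expected))

# Toy stand-in for _cdf: a uniform distribution on [0, 10] scaled to n_event.
def toy_cdf(x, params):
    (n_event,) = params
    return n_event * np.clip(x / 10.0, 0.0, 1.0)

edges = np.linspace(0.0, 10.0, 11)
counts = np.full(10, 50.0)   # 50 observed events in each of 10 bins
nll = binned_nll(counts, edges, toy_cdf, (500.0,))
```

This is the quantity one would hand to iminuit (or reimplement via RooFit's binned likelihood machinery).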

Hi,