Floating parameter for the scale of x

Hi,
Yes, I’d like to do a likelihood fit. I asked a similar question in this post, and @moneta kindly provided some hints, but I’d appreciate some further guidance.

I have one-dimensional data points in a range (a-b) [a.u.], where [a.u.] denotes arbitrary units. I also have the expected distribution of the data points from simulations, in units of energy (0-1000) [eV]. What I do know is that if I smear the simulation with a Gaussian and transform the units with a linear map f([a.u.]) = constant · [eV], a section of the simulation PDF should fit the data.

My question is about implementing the unit transformation as a variable in the likelihood. Being able to stretch the simulation pdf from [eV] to scale · [eV], where scale is a parameter of the likelihood, would solve my issue. However, I’m not sure whether that operation is possible with the available tools. Sorry if the previous explanation was misleading.
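For reference, the stretch is just a change of variables: if p_E(E) is the simulated pdf in energy units and x = scale · E, then

p_x(x) = (1/scale) · p_E(x / scale),

and the 1/scale Jacobian is taken care of automatically if the template is renormalized numerically after interpolation, which is what the prototype below does.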

This small prototype code in Python implements the _cdf with the mentioned variables, as needed for a binned likelihood. It worked fine in tests with iminuit. I was wondering whether there is a way to implement something similar in RooFit with the available tools; I have already started working on custom C++ code for this.

import numpy as np
from scipy import ndimage


class feff_distribution(distribution):  # `distribution` is the prototype's base class, not shown
    def __init__(self,
                 eV_limits,
                 au_limits,
                 path='../FEFF/core_hole.npz',  # or ../FEFF/no_core_hole.npz
                 **kwargs):
        # Fixed evaluation grid on the data axis, in arbitrary units.
        self.limits = au_limits
        self.edges = np.linspace(*au_limits, 4000, dtype='float64')
        self.binsize = self.edges[1] - self.edges[0]

        # Simulated cross section, binned in eV.
        self.eV_centers, self.eV_y = self.read_cross_section(eV_limits, path)
        self.eV_binsize = self.eV_centers[1] - self.eV_centers[0]

    def read_cross_section(self, eV_limits, file_path):
        data = np.load(file_path)
        x, y = data['energy'], data['x_section']
        energy_mask = (eV_limits[0] <= x) & (x <= eV_limits[1])
        x = np.array(x, dtype='double')[energy_mask]
        y = np.array(y, dtype='double')[energy_mask]
        return x, y

    def smear_spectrum(self, eV_spectrum, eV_binsize, eV_resolution):
        # Gaussian smearing; the resolution is given in eV, so convert it to bins.
        smeared_spectrum = ndimage.gaussian_filter(eV_spectrum, eV_resolution / eV_binsize)
        return smeared_spectrum

    def inverse_calibration(self, energy, stretch):
        # Linear map from the energy axis [eV] to the (uncalibrated) data axis [a.u.].
        return energy * stretch

    def _cdf(self, x, params):
        eV_resolution, stretch, n_event = params
        # 1) smear the simulated spectrum in eV
        eV_smeared_spectrum = self.smear_spectrum(self.eV_y, self.eV_binsize, eV_resolution)
        # 2) stretch the energy axis into data units and resample on the fixed grid
        of_centers = self.inverse_calibration(self.eV_centers, stretch)
        uncalibrated_spectrum = np.interp(self.edges, of_centers, eV_smeared_spectrum)
        # 3) cumulative sum, normalized to the expected number of events
        cdf = np.cumsum(uncalibrated_spectrum)
        normalized_cdf = cdf * n_event / cdf[-1]
        return np.interp(x, self.edges, normalized_cdf)
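In RooFit, one possible route (a sketch under assumptions, not a drop-in solution) is to bin the simulated spectrum into a RooDataHist and evaluate the resulting RooHistPdf at a function of the observable: the RooHistPdf constructor that takes separate "pdf observables" and "histogram observables" can be given a RooFormulaVar, so the mapping E = x/scale with a floating scale lives inside the pdf, and the Gaussian smearing can be done with a RooFFTConvPdf. All names, ranges, bin counts, and the placeholder spectrum below are assumptions for illustration:

import numpy as np
import ROOT

# Observable on the data axis (a-b) in arbitrary units, and the floating scale.
x_au = ROOT.RooRealVar("x_au", "x [a.u.]", 0.0, 10.0)          # placeholder range
scale = ROOT.RooRealVar("scale", "a.u. per eV", 0.01, 0.001, 0.1)

# E = x_au / scale maps a point on the data axis back to energy in eV.
energy = ROOT.RooFormulaVar("energy", "x_au / scale",
                            ROOT.RooArgList(x_au, scale))

# Simulated cross section, binned in eV, wrapped in a RooDataHist.
E_hist = ROOT.RooRealVar("E_hist", "E [eV]", 0.0, 1000.0)
th1 = ROOT.TH1D("xsec", "simulated cross section", 1000, 0.0, 1000.0)
eV_y = np.random.rand(1000)  # placeholder for the FEFF spectrum
for i, y in enumerate(eV_y):
    th1.SetBinContent(i + 1, y)
dhist = ROOT.RooDataHist("dhist", "xsec", ROOT.RooArgList(E_hist), th1)

# Histogram pdf evaluated at the *transformed* observable: the first list
# holds the function of (x_au, scale), the second the variable the
# histogram was binned in.
hist_pdf = ROOT.RooHistPdf("hist_pdf", "stretched template",
                           ROOT.RooArgList(energy), ROOT.RooArgList(E_hist),
                           dhist, 2)  # 2nd-order interpolation

# Gaussian smearing via FFT convolution in the data units. Note: the
# prototype smears in eV before stretching; convolving in [a.u.] is
# equivalent only up to sigma_au = scale * sigma_eV.
mean = ROOT.RooRealVar("mean", "mean", 0.0)
sigma = ROOT.RooRealVar("sigma", "resolution [a.u.]", 0.05, 0.001, 1.0)
gauss = ROOT.RooGaussian("gauss", "gauss", x_au, mean, sigma)
x_au.setBins(10000, "cache")  # fine sampling grid for the FFT
model = ROOT.RooFFTConvPdf("model", "template (x) gauss", x_au, hist_pdf, gauss)

# Binned likelihood fit, with `data` a RooDataHist of the measured points:
# model.fitTo(data, ROOT.RooFit.Save())

The n_event normalization from the prototype would come from wrapping model in a RooExtendPdf and doing an extended fit, but I may well be missing a simpler built-in way to do the stretch.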