What is the best way to fit an unstable function?

By unstable, I mean a function whose evaluation accuracy changes from point to point because of adaptive numerical integration.

I am trying to fit a momentum spectrum with a function called the Tsallis blast-wave. Evaluating the Tsallis blast-wave involves a triple integral. Below is a rough sketch of how I implemented it:


#include "TF1.h"
#include "TF3.h"
#include "TH1.h"

class integrant {
public:
    double operator()(double *x, double *p) {
        // Tsallis involves triple integration
        // therefore, all 3 independent variables here are dummy
        double dummyVariable1 = x[0];
        double dummyVariable2 = x[1];
        double dummyVariable3 = x[2]; 
        double pT = p[0]; // momentum is the first parameter
 
        // perform some calculation according to the Tsallis formula for the integrand
        double result = 0.0; // placeholder for the actual Tsallis integrand value
        return result;
    };
};

class TsallisFunctor {
private:
    integrant mIntegrant;   // declared before mFunc, so it exists when mFunc is constructed
    TF3 mFunc;
public:
    TsallisFunctor() : mFunc("integrant", mIntegrant, /* ranges of the 3 integration variables, number of parameters */) {}
    double operator()(double *x, double *p) {
        double pT = x[0]; // x-value is momentum
        mFunc.SetParameter(0, pT);
        // set parameters for integrant
 
        // note: mFunc is a TF3. The integral is a triple integral
        return mFunc.Integral(/*bounds of the triple integral*/);
    };
};

int main() {
   // fetch histograms and create TF1 for Tsallis
   TsallisFunctor functor;
   TF1 tsallisTF1("tsallisTF1", functor, /* pT range and number of parameters */);
   hist->Fit(&tsallisTF1);

   // do what you need to with fit result
}

The problem I consistently run into is that the fit converges to a valid minimum (at least the chi2/DOF values look reasonable), but the error matrix is almost never positive definite. As a result, the fit status is reported as “failed” or “Not Pos Def,” and the uncertainties it produces do not make sense.
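
To be concrete about what I am seeing, the checks behind that statement look roughly like this (assuming the fit is run with the "S" option so that Fit returns a TFitResultPtr; needs TFitResult.h and &lt;iostream&gt;):

TFitResultPtr r = hist->Fit(&tsallisTF1, "S");
std::cout << "minimizer status : " << r->Status()          << "\n"   // 0 means the minimization succeeded
          << "cov. matrix      : " << r->CovMatrixStatus() << "\n"   // 3 = accurate, 2 = forced positive definite
          << "chi2 / ndf       : " << r->Chi2() << " / " << r->Ndf() << std::endl;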

I believe this happens because of how mFunc.Integral() is evaluated inside TsallisFunctor. Since TF3::Integral uses adaptive numerical integration, the accuracy of the function value changes slightly from one call to the next. When the minimizer estimates uncertainties by computing the Hessian near the minimum, this evaluation-to-evaluation noise is enough to keep the error matrix from being positive definite.
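
For reference, the call inside TsallisFunctor is essentially the line below; as far as I can tell from the TF3 documentation, the trailing argument of Integral() is the relative tolerance of the adaptive algorithm, which I have so far left at its default (the bounds ax..bz stand in for my actual integration limits):

        // the adaptive triple integral; the last argument is (as far as I
        // understand) the relative tolerance of the adaptive integration, so
        // the result is only reproducible to roughly that level between calls
        return mFunc.Integral(ax, bx, ay, by, az, bz /*, epsrel */);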

To test this idea, I replaced mFunc.Integral() with my own implementation of a triple nested Gauss–Legendre quadrature with fixed intervals. When I do this, the minimizer not only converges but also reports a “CONVERGED” status, and the uncertainties appear reasonable. The drawback, however, is that achieving accuracy comparable to TF3::Integral requires at least four times as much CPU time. Since the version with TF3::Integral already takes about three hours to fit a single spectrum, my Gauss–Legendre implementation becomes prohibitively slow.
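
To make clear what I mean by fixed intervals, here is a stripped-down sketch of a fixed-node triple Gauss–Legendre evaluation (not my actual code; the node count, the integration bounds, and the integrand body are placeholders, and I use ROOT's TF1::CalcGaussLegendreSamplingPoints helper for the nodes and weights):

#include <vector>
#include "TF1.h"

class GLTsallisFunctor {
public:
    GLTsallisFunctor(int n, double ax, double bx, double ay, double by, double az, double bz)
        : fN(n), fAx(ax), fBx(bx), fAy(ay), fBy(by), fAz(az), fBz(bz), fX(n), fW(n)
    {
        // Gauss-Legendre nodes and weights on [-1, 1], computed once and reused
        TF1::CalcGaussLegendreSamplingPoints(fN, fX.data(), fW.data());
    }

    double operator()(double *x, double *p) {
        double pT  = x[0];
        double sum = 0.0;
        for (int i = 0; i < fN; ++i) {
            double u = 0.5 * ((fBx - fAx) * fX[i] + fBx + fAx);   // map node to [fAx, fBx]
            for (int j = 0; j < fN; ++j) {
                double v = 0.5 * ((fBy - fAy) * fX[j] + fBy + fAy);
                for (int k = 0; k < fN; ++k) {
                    double w = 0.5 * ((fBz - fAz) * fX[k] + fBz + fAz);
                    sum += fW[i] * fW[j] * fW[k] * integrand(u, v, w, pT, p);
                }
            }
        }
        // Jacobian of the three linear changes of variable
        return sum * 0.125 * (fBx - fAx) * (fBy - fAy) * (fBz - fAz);
    }

private:
    double integrand(double u, double v, double w, double pT, double *p) const {
        return 0.0; // placeholder: the Tsallis blast-wave integrand goes here
    }

    int fN;
    double fAx, fBx, fAy, fBy, fAz, fBz;
    std::vector<double> fX, fW;
};

Because the nodes and weights are computed once in the constructor, every call evaluates the integral on exactly the same grid, which is presumably why the Hessian comes out clean with this version.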

What I would like to know is whether there is any way to make the uncertainty estimation work reliably with my original implementation using TF3::Integral. If that is not possible, is there a way to perform the minimization with TF3::Integral to find the best-fit parameters, and then switch to my Gauss–Legendre implementation solely for estimating the uncertainties?
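
Concretely, the two-stage scheme I have in mind would look something like this (sketch only; hist, nPar, xmin, and xmax are placeholders, TsallisFunctor is the TF3::Integral version above, and GLTsallisFunctor is the fixed-node Gauss–Legendre version):

TsallisFunctor   fastFunctor;
GLTsallisFunctor slowFunctor(/* number of nodes and integration bounds */);

TF1 fastF1("fastF1", fastFunctor, xmin, xmax, nPar);
TF1 slowF1("slowF1", slowFunctor, xmin, xmax, nPar);

// Stage 1: the cheap, adaptive-integration version finds the minimum
TFitResultPtr r = hist->Fit(&fastF1, "S");

// Stage 2: seed the expensive, fixed-node version at that minimum and refit;
// starting at the minimum it should need only a few iterations, and the
// error estimation ("E" = Minos) then runs on the numerically stable function
for (int i = 0; i < nPar; ++i)
    slowF1.SetParameter(i, r->Parameter(i));
hist->Fit(&slowF1, "S E");

Is something along these lines a sensible way to do it, or is there a cleaner mechanism for recomputing only the error matrix?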