multiplications in the exponents work better than ratios (kind of expected)
the initial idea of making the ratios between the various “\tau” explicit, i.e. always writing “\tau \cdot F” to express one in terms of another, does not seem so decisive (I cannot really draw conclusions here, there were too many different trials, but in general the correlations seem to increase).
I can force a parameter to be positive (and this sometimes helps) by writing it as \exp{P}, or force it to be larger than 1 by writing it as 1+\exp{P}. I am using this especially in the denominator, where the two sigmoid functions can easily exchange roles.
My questions are the following:
Clearly I have to apply at least the strategy in the first bullet point to get a high success rate. How do I recover the errors of the original parametrization with TMinuit2? E.g. suppose I have found the best \alpha with its errors and I know that \tau = 1/\alpha: can I ask TH1::Fit() with TMinuit2 not to change any parameter value but just to calculate the “Hesse” errors at the initial values that I have set?
Is the third bullet point a good idea? I know that Minuit can also bound parameters, but with the exponential I can impose a one-sided limit only. Is this discouraged, and if so, for which reason?
Thanks a lot in advance,
Matteo
P.S. I used this category because it seems more related to fitting, even though I am not using RooFit for this but just the bare TH1::Fit(). Please correct me if this is wrong.
Reparametrizing is certainly good because it can increase the stability, but if you are interested in the errors it can become complicated. What you suggest will not work all the time: when computing the errors you need to compute the second derivatives and then invert the matrix, and this can introduce a larger numerical error. You can compute just the Hessian at the initial values, but you need to use the lower-level Fitter interface instead of TH1::Fit.
If using Minuit2 you can also use one-sided limits; a square-root transformation is used internally for them. Again, this is possible using the Fitter interface.
In general, to improve stability it is better to redefine the fit parameters so that their values and errors are of order 1; then the covariance matrix is well conditioned and Minuit's numerical computation of the derivatives is more robust. If you can simply rescale the parameters, this is in general better than applying non-linear transformations.
Thanks a lot.
Is CalculateHessErrors() the method that only calculates the errors, without taking further minimization steps?
So the steps would be:
Make the fit with the more robust parametrization
Arrange a new fit with the Fitter interface and the other parametrization, but do not call Fit()
Set the parameters according to the results of the previous fit, e.g. using the relation \tau = 1/\alpha . I do not need to fix these parameters nor to set initial errors.
Call CalculateHessErrors() and get the results via the method Result()
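For reference, the steps above might look roughly like the following sketch. This is untested and assumes ROOT; `hist`, `fTau` (a TF1 in the \tau parametrization) and `alphaBest` are placeholders, and the exact call sequence may differ between ROOT versions:

```cpp
// Sketch only: Hesse errors at given parameter values via the Fitter interface.
#include <iostream>
#include "TH1.h"
#include "TF1.h"
#include "Fit/BinData.h"
#include "Fit/Chi2FCN.h"
#include "Fit/Fitter.h"
#include "HFitInterface.h"
#include "Math/WrappedMultiTF1.h"

void hesseOnly(TH1 &hist, TF1 &fTau, double alphaBest) {
    // Bin contents and errors of the histogram being fitted.
    ROOT::Fit::BinData data;
    ROOT::Fit::FillData(data, &hist);

    // Wrap the model in the tau parametrization (1 = dimension of x).
    ROOT::Math::WrappedMultiTF1 wf(fTau, 1);

    ROOT::Fit::Fitter fitter;
    fitter.Config().SetMinimizer("Minuit2");

    // Parameter values taken from the previous fit, e.g. tau = 1/alpha.
    double par[1] = {1.0 / alphaBest};

    // Build the chi2 objective, but do NOT call Fit(): only Hesse is run.
    ROOT::Fit::Chi2Function chi2(data, wf);
    fitter.SetFCN(chi2, par, data.Size(), true);  // true: chi2-type FCN
    fitter.CalculateHessErrors();                 // errors at the given point
    fitter.Result().Print(std::cout);
}
```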
BTW, the Hessian should be nothing other than (or maybe its inverse, I am always confused) the matrix of errors that I would get with the error propagation from the statistics lectures, shouldn't it?
E.g. if \tau = 1/\alpha then \text{Var}[\tau] = \text{Var}[\alpha]\cdot\left(\frac{\partial \tau}{\partial \alpha}\right)^2
and for a simple 1-to-1 relation between the parameters of the two parametrizations maybe the error calculation is as simple as this… or am I missing something?