I perform a simple fit (using a function of the form a_0 + a_1/x^2 + a_2/x^4) that mysteriously fails. To investigate, at each iteration I cout both the variables passed to the function and the result of the fitting function. So I get something like this:
variables: good, logical numbers
result : good, logical numbers
(next iteration)
variables: good, logical numbers
result : good, logical numbers
Then sometimes this strange thing occurs:
variables: good, logical numbers
result : good, logical numbers
(next iteration)
variables: nan
result : nan
and the fit totally fails.
This is unexpected to me, since the result in the previous iteration was valid. To put it another way, I would have understood an output like this:
variables: good, logical numbers
result : nan
(next iteration)
variables: nan
result : nan
That would mean the result overflowed (or underflowed) at some point, so the input parameters for the next iteration of the fit became nan. In that case I would try to re-parametrize my function to avoid the effect.
But now I do not know how to handle this “backward” effect.
So:
a. Does anyone know why this happens? Has anyone seen something like this before?
b. Can you guide me on how to trace the problematic point in my code?
I think it should(?) be a re-parametrization problem, but I do not know how to spot it.
I have cut my code down as much as possible; everything is attached to this message.
you will find:
a. the make file
b. the macro.cc that does the fit
c. the fitting function lies inside func.cc (it’s huge, ugly but the correct one)
d. I have compiled it. myrun executes the macro
It seems that somehow the fit is very sensitive to the initial parameters. Very sensitive:
for example, if I change par[5] from 0.16 to 0.2 the fit is OK.
As you can see from my cout's, once the parameters (all together) become nan, the fit crashes.
I could of course fine-tune the initial values of the parameters, but since I perform a large number of fits this is not feasible in all cases.
Hi,
what you are saying is not correct. In the output from your example, I start getting a nan from these values of the parameters:
FCN=1505.59 FROM MIGRAD STATUS=INITIATE 24 CALLS 25 TOTAL
EDM= unknown STRATEGY= 1 NO ERROR MATRIX
EXT PARAMETER CURRENT GUESS STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 p0 2.12200e+04 6.36600e+03 6.36600e+03 1.40894e-01
2 p1 3.07399e-02 9.22197e-03 9.22197e-03 -5.29178e+05
3 p2 9.86262e-03 2.95879e-03 2.95879e-03 -8.22054e+04
4 p3 3.40000e+01 1.02000e+01 1.02000e+01 5.26208e+01
5 p4 1.30000e+00 6.50000e-01 1.99571e-01 -3.16838e+02
6 p5 1.60000e-01 4.80000e-02 4.80000e-02 -1.50965e+04
NO ERROR MATRIX
Norm = 19822.8, Q = 0.0322298, s = 0.00999198, lambda = -38.6216, mu = 0.309496
theta = 2.40904, x = -0.0135855, omega = -2.31198, omega2 = 5.34527
* i = 1, xi = 1.70452, U( 1.70452, 0.5, 5.34527 ) = 0.0343378
* i = 2, xi = 3.40904, U( 3.40904, 0.5, 5.34527 ) = 0.000678873
* i = 3, xi = 5.11356, U( 5.11356, 0.5, 5.34527 ) = 9.08182e-06
* i = 4, xi = 6.81808, U( 6.81808, 0.5, 5.34527 ) = 8.95971e-08
* i = 5, xi = 8.52259, U( 8.52259, 0.5, 5.34527 ) = 6.88007e-10
Result = nan
This explains why the parameters later also get nan values. You should check your function's calculation and make it numerically more robust. For example, instead of multiplying terms, sum their logarithms and take the exponential only at the end. You can also use the log-gamma function (ROOT::Math::lgamma) instead of the gamma function.
I revised my fitting function along the lines you suggested.
Indeed, this numerical approach is so much better, I could hardly believe it(!)
Nevertheless, my original problem remains. It arises when a sudden flip in parameter space occurs and lambda goes from 34 (the initial value) to -38.6216 (the value in the output you showed me):
Norm = 21220
Q = 0.0307399, s = 0.00986262
lambda = 34, theta = 1.3
mu = 0.159909
x = 0.235166, omega = 14.111062, omega2 = 199.122
Result = 0.0081914
.
Norm = 19822.8
Q = 0.0322298, s = 0.00999198
lambda = -38.6216, theta = 2.40904
mu = 0.309496
x = -0.0135855, omega = -2.311984, omega2 = 5.34527
Result = nan
Notice that the jump is not continuous, as one would expect, but abrupt.
My function is NOT meant to accommodate negative lambda, so it is NORMAL that it returns nan.
I could of course do a SetParLimits and restrict lambda to the positive range, but I do not like introducing extra non-linearities into my fit.
So, my question:
from your experience, what do you suspect is the cause of this sudden jump?
Needless to say, thank you very much for your advice.
Best regards
You can get large jumps in the parameters between iterations when you are far away from the minimum. This is normal. If you don't want to put a limit on the lambda parameter to keep it from going negative, you can instead modify your function to return an artificially large value whenever lambda is negative. Basically, you add a penalty term that penalizes negative lambda with respect to positive lambda.
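Something like this sketch (model() is just a placeholder for your actual function, and the 1e6 scale of the penalty is an arbitrary choice you would tune):

```cpp
#include <cmath>

// Placeholder for the real model; any finite, well-defined function works.
double model(double x, double lambda) {
    return std::exp(-lambda * x);
}

// Fit function with a penalty: for negative lambda, evaluate the model at
// the boundary (lambda = 0) and add a large term that grows quadratically
// the further lambda goes negative, pushing the minimizer back into the
// allowed region without a hard SetParLimits.
double fit_func(double x, const double* par) {
    double lambda = par[0];
    if (lambda < 0.0) {
        return model(x, 0.0) + 1e6 * lambda * lambda;
    }
    return model(x, lambda);
}
```

Because the penalty is smooth and increasing in |lambda|, the minimizer still gets a usable gradient (unlike returning nan), and for lambda >= 0 the function is completely unchanged.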