I’ve recently been observing a fit property that leads to extreme behavior.

I am fitting a distribution with a Chebyshev polynomial + BW model. Sometimes, when I extend the fit region into an empty region (with 0 entries), where the fit curve tends to go negative, the following happens (see the left tail of the fit curve):

I’ve been reading for some time about this effect and why it happens (EvalErrorWall), but I am still not sure whether it is normal behavior and whether there is a way to cure it.

From my experience playing with this particular fit, I did manage to avoid it by narrowing the fit interval so as to stay out of the region where the fit curve goes below 0. But this solution is inappropriate for me, because at the next step of this study I repeat this fit as part of a RooMCStudy, where a single fit region is used for all samples. That region often turns out to be too wide for some of the generated samples, and it is obviously not possible to define a fit region that is appropriate for all 10k toyMC samples. Is there any way to ask a fit not to go crazy when it becomes negative? Or, if this is forbidden behavior, is there another way to deal with it?

Hi,
Sorry for the late reply. This is a difficult problem, but we have included some improvements in RooFit for dealing with these cases in one of the latest versions. Are you using 6.26 or an older version?
Also, one possibility for avoiding a polynomial that becomes negative is to use a polynomial that is positive by construction, such as the Bernstein polynomials. See the RooBernstein class.

Yes, I am using ROOT v6.26. To complete the picture and the status of my problem, I would like to mention two more things:

In the meantime, while I was looking for a solution, I found the following thread very useful: Roofit upper limit unstable for negative pdf (with AsymptoticCalculator)
There the author experiences a similar problem in a different study. I’ve gone through the suggested solutions, but none of them really worked: neither assigning wiser ranges to the floating parameters nor the EvalErrorWall(false) fit flag did the trick.

I also tried to filter out the problematic fits by checking the MINUIT status. In particular, I required fitres.status()==0. I additionally tested fitres.covQual()==2 (although I could not find an explanation of the possible values of this flag and their meanings, I arrived at the value 2 empirically). Eventually, status() did improve the situation somewhat: far fewer failed fits now pass this criterion. But, as you can see from the randomly chosen fits below, there are still cases with skyrocketing fits.