Hi,

I have probably a silly question, but I’d like to understand why Chebyshev polynomials are better than ordinary ones for fitting background shapes.

Best regards and thanks,

Klaus


Thanks for the links! I took a brief look and think I understand mathematically why it is beneficial to use Chebyshev nodes.

But I’m still wondering why this should have an actual impact on fitting finite datasets with statistical fluctuations. Following the Wikipedia link to Runge's phenomenon leads to the statement:

“Another method is fitting a polynomial of lower degree using the method of least squares. […] least squares approximation is well conditioned.” (Runge's phenomenon - Wikipedia)

Isn’t this the use case for fitting experimental spectra?
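The contrast the Wikipedia quote draws can be sketched numerically (a NumPy illustration, not part of the original post; the Runge function and the degrees chosen are just the textbook example):

```python
import numpy as np

# Runge's classic example function on [-1, 1].
def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

# Dense grid on which to measure the approximation error.
xx = np.linspace(-1.0, 1.0, 1001)

# Degree-10 polynomial through 11 equispaced nodes: an exact
# interpolation, which exhibits Runge's phenomenon (large
# oscillations near the interval ends).
nodes = np.linspace(-1.0, 1.0, 11)
interp = np.polynomial.polynomial.Polynomial.fit(nodes, runge(nodes), 10)
err_interp = np.max(np.abs(interp(xx) - runge(xx)))

# Lower-degree least-squares fit to many samples: the well-conditioned
# setting the Wikipedia quote refers to.
xs = np.linspace(-1.0, 1.0, 201)
lsq = np.polynomial.polynomial.Polynomial.fit(xs, runge(xs), 4)
err_lsq = np.max(np.abs(lsq(xx) - runge(xx)))

print(f"degree-10 interpolation max error: {err_interp:.3f}")
print(f"degree-4 least-squares max error:  {err_lsq:.3f}")
```

The interpolation error blows up near the endpoints, while the lower-degree least-squares fit stays modest everywhere, which is indeed the regime of fitting experimental spectra.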

Hi @canigia,

I am also inviting @moneta and @jonas to the topic, as I am sure they can provide more information.

Cheers,

J.

Hi,

The advantage of using Chebyshev polynomials is that they are orthogonal. When fitting, this means the polynomial coefficients (the fit parameters) are only weakly correlated with each other, which helps convergence and the minimiser's numerical calculations (e.g. the inversion of the Hessian matrix). In the degenerate case where some parameters are 100% correlated, the minimisation problem has no unique solution.

See for example slide 22 of this training presentation (https://indico.desy.de/event/11244/contributions/4930/attachments/3446/3946/RooStats_Training_Part1.pdf)
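The conditioning argument can be illustrated with a small NumPy sketch (my own illustration, not ROOT/RooFit code; the degree and sampling are arbitrary choices): compare the normal matrix of a least-squares fit in the monomial basis with the same fit in the Chebyshev basis.

```python
import numpy as np

# Sample points on [-1, 1], the standard Chebyshev interval a fit
# range would be mapped to.
x = np.linspace(-1.0, 1.0, 200)
degree = 8

# Design matrix for ordinary monomials: columns 1, x, x^2, ...
V_mono = np.vander(x, degree + 1, increasing=True)

# Design matrix for Chebyshev polynomials T_0 .. T_degree.
V_cheb = np.polynomial.chebyshev.chebvander(x, degree)

# The least-squares normal matrix V^T V encodes how strongly the
# fitted coefficients are correlated; its condition number measures
# how close the minimisation problem is to degeneracy.
cond_mono = np.linalg.cond(V_mono.T @ V_mono)
cond_cheb = np.linalg.cond(V_cheb.T @ V_cheb)

print(f"monomial basis condition number:  {cond_mono:.3e}")
print(f"Chebyshev basis condition number: {cond_cheb:.3e}")
```

The Chebyshev normal matrix is orders of magnitude better conditioned, which is exactly what helps the minimiser when it inverts the Hessian.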

Best regards

Lorenzo

Thanks a lot, now I got it!
