
Average value fluctuation with RooFit

Hi everyone,
I have a problem with the values returned by the fit. I searched for a long time for a suitable function to describe the histogram, and I settled on the sum of a Gaussian and a Crystal Ball (CB) with the same mean, plus a second-degree Chebyshev polynomial for the tails.
To be more specific, the free parameters are: the mean, the sigma of the Gaussian, the sigma of the CB, the “alpha” of the CB, the “n” of the CB, the fraction of the Gaussian relative to the CB, the fraction of the sum of the two peak shapes relative to the Chebyshev polynomial, and the two coefficients of the polynomial (9 parameters in total).
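For readers who want to reproduce the shape, here is a minimal plain-Python sketch of the model described above (unnormalized; in RooFit itself one would build it from RooGaussian, RooCBShape, RooChebychev and RooAddPdf). All parameter names and values below are illustrative placeholders, not the actual fit results:

```python
import math

def crystal_ball(x, mean, sigma, alpha, n):
    """Unnormalized Crystal Ball shape: Gaussian core with a power-law tail.
    A negative alpha puts the power-law tail on the RIGHT of the peak."""
    t = (x - mean) / sigma
    if alpha < 0:
        t = -t                      # mirror so the tail condition is the same
    a = abs(alpha)
    if t > -a:                      # Gaussian core
        return math.exp(-0.5 * t * t)
    A = (n / a) ** n * math.exp(-0.5 * a * a)
    B = n / a - a
    return A * (B - t) ** (-n)      # power-law tail, continuous at t = -a

def gaussian(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def chebyshev2(x, c0, c1, lo, hi):
    """Second-degree Chebyshev background, with x rescaled to [-1, 1]."""
    u = 2.0 * (x - lo) / (hi - lo) - 1.0
    return 1.0 + c0 * u + c1 * (2.0 * u * u - 1.0)

def model(x, mean, sig_g, sig_cb, alpha, n, f_gauss, f_sig, c0, c1, lo, hi):
    """The 9-parameter shape: f_sig*(f_gauss*G + (1-f_gauss)*CB) + (1-f_sig)*bkg."""
    peak = (f_gauss * gaussian(x, mean, sig_g)
            + (1.0 - f_gauss) * crystal_ball(x, mean, sig_cb, alpha, n))
    return f_sig * peak + (1.0 - f_sig) * chebyshev2(x, c0, c1, lo, hi)
```

Note that the Crystal Ball tail constants A and B are chosen so that the core and the tail match at the transition point, which is what makes the shape continuous.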

While studying the stability of the fit I noticed that, given the high correlation among the parameters, some of them could be fixed. However, depending on how those values are fixed, and with the same goodness of fit, the mean of the curve shifts by more than 1 sigma.
Specifically, I fixed the two coefficients of the polynomial, the “n” of the CB, and the “alpha” of the CB.
Is there a way to minimize these fit-dependent fluctuations?

Thanks in advance

Hi @emanuele_cardinali!

It’s hard to make a diagnosis without seeing the fit and the full covariance matrix values.

What do you mean by “to fix some values”? Did you make a fit with all your 9 parameters floating, and then fixed some of the parameters to the best fit values? In this case, it is indeed surprising that the fit result for the mean is so different.

Could you please provide the numeric fit results for each fit (with covariance matrix if possible) and tell us to which values you fixed your parameters? If you don’t mind sharing it, seeing the plot would also help!
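As an aside for readers following along: MINUIT prints the correlation matrix (as in the posts below), but the covariance matrix can be rebuilt from it and the per-parameter errors via cov_ij = corr_ij * err_i * err_j. A minimal helper (in RooFit, `RooFitResult` can also return it directly):

```python
import numpy as np

def correlation_to_covariance(corr, errors):
    """Rebuild the covariance matrix from a correlation matrix and the
    per-parameter uncertainties: cov_ij = corr_ij * err_i * err_j."""
    err = np.asarray(errors, dtype=float)
    return np.asarray(corr, dtype=float) * np.outer(err, err)
```

The error values you would plug in here are the parabolic errors from the fit printout; the numbers in any usage example are placeholders.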

Cheers,
Jonas


Hi @jonas
Thanks for your reply.
To fix the parameters, I looked at the values they took when they were left floating.
To check how stable the fit was, I varied the fit interval.
In both cases the coefficients of the polynomial are the same.
In the first case: for alpha = -1.4 and n = 1.4 → first.pdf (38.1 KB)

PARAMETER  CORRELATION COEFFICIENTS  
       NO.  GLOBAL      1      2      3      4      5
        1  0.70773   1.000 -0.492  0.143 -0.689 -0.361
        2  0.86887  -0.492  1.000 -0.281  0.697  0.718
        3  0.29128   0.143 -0.281  1.000 -0.201 -0.151
        4  0.83104  -0.689  0.697 -0.201  1.000  0.353
        5  0.76253  -0.361  0.718 -0.151  0.353  1.000

In the second one: alpha = -1.6 and n = 1 → second.pdf (38.0 KB)

PARAMETER  CORRELATION COEFFICIENTS  
       NO.  GLOBAL      1      2      3      4      5
        1  0.71706   1.000 -0.506  0.123 -0.707 -0.370
        2  0.87906  -0.506  1.000 -0.230  0.725  0.748
        3  0.24556   0.123 -0.230  1.000 -0.172 -0.117
        4  0.84313  -0.707  0.725 -0.172  1.000  0.426
        5  0.77668  -0.370  0.748 -0.117  0.426  1.000

As you can see, in the first case the mean is 493.662(4), while in the second it is 493.670(4).

Hi @emanuele_cardinali,

thanks for the clarifications! So if I understand correctly, you determined alpha and n for different fit intervals and that’s how you got the values for the first and second plot?

That looks like a very good fit, actually! The fact that you get the same result for the mean, within statistical uncertainty, for different values of alpha and n shows that the fit is very stable.

As far as I understand now, what you call “fit-dependent fluctuations” are essentially systematic uncertainties related to the choice of your fit interval. You have very small statistical uncertainties, so naturally it’s very challenging to get systematic uncertainties that are just as small. I think having them in the same ballpark, just “beyond 1 sigma”, is already great.

In your case, I would leave it at that. If you told me that your systematic uncertainty is smaller than 0.01 % I would not believe you anyway :smiley: (…okay, maybe if you had a really good physics motivation for the choice of the fitting shape).

Thanks @jonas for the answer (and for the optimism :sweat_smile:)
In fact, the fit is good, but the starting values are just as good. My problem is that, in the example I gave you, the choice of the values of “n” and “alpha” causes the mean to shift by 2 sigma. My worry is that, by continuing to vary both “n” and “alpha” within the range suggested by the fit (when I left them floating), this value could shift even further. How can I evaluate that?
As a beginner with data analysis, I would also like to ask you:
Assuming the value doesn’t vary more than that, which one should I take and with what error?

I see your concern! Before I comment further, can you please also share the covariance matrix and the results for the full fit where you left alpha and n floating as well? If possible, for the different fit intervals that you tried. That would greatly help me in formulating an answer.

Thanks @jonas … Sure!
Leaving all the parameters free: proof0.pdf (38.2 KB)

PARAMETER  CORRELATION COEFFICIENTS  
       NO.  GLOBAL      1      2      3      4      5      6      7      8      9
        1  0.94913   1.000  0.564  0.410  0.323  0.633 -0.235  0.832  0.460  0.588
        2  0.93492   0.564  1.000  0.484  0.611  0.488  0.049  0.866  0.514  0.492
        3  0.86821   0.410  0.484  1.000 -0.215  0.547 -0.058  0.554  0.748  0.529
        4  0.93996   0.323  0.611 -0.215  1.000  0.059  0.134  0.523 -0.131  0.089
        5  0.97106   0.633  0.488  0.547  0.059  1.000 -0.173  0.733  0.862  0.930
        6  0.41849  -0.235  0.049 -0.058  0.134 -0.173  1.000 -0.051 -0.088 -0.105
        7  0.98873   0.832  0.866  0.554  0.523  0.733 -0.051  1.000  0.664  0.714
        8  0.96416   0.460  0.514  0.748 -0.131  0.862 -0.088  0.664  1.000  0.782
        9  0.94143   0.588  0.492  0.529  0.089  0.930 -0.105  0.714  0.782  1.000

In the narrowest range I considered, the fit came back NOT POS DEF → there were too many fit parameters!
I then fixed the Chebyshev coefficients (before also fixing alpha and n), and here are the results for the most extreme intervals I considered:

First one (the same as “proof0.pdf”) → proof1.pdf (38.1 KB)

PARAMETER  CORRELATION COEFFICIENTS  
       NO.  GLOBAL      1      2      3      4      5      6      7
        1  0.93696   1.000  0.151  0.408 -0.340  0.848  0.060  0.333
        2  0.75740   0.151  1.000 -0.103  0.078  0.282 -0.369 -0.066
        3  0.95070   0.408 -0.103  1.000 -0.197  0.656  0.783  0.883
        4  0.42104  -0.340  0.078 -0.197  1.000 -0.221 -0.068 -0.106
        5  0.95586   0.848  0.282  0.656 -0.221  1.000  0.356  0.585
        6  0.90584   0.060 -0.369  0.783 -0.068  0.356  1.000  0.634
        7  0.89995   0.333 -0.066  0.883 -0.106  0.585  0.634  1.000

Second one → proof2.pdf (30.1 KB)

PARAMETER  CORRELATION COEFFICIENTS  
       NO.  GLOBAL      1      2      3      4      5      6      7
        1  0.94428   1.000  0.047  0.743 -0.239  0.896  0.424  0.777
        2  0.85239   0.047  1.000 -0.148  0.160  0.155 -0.497 -0.071
        3  0.97358   0.743 -0.148  1.000 -0.275  0.844  0.807  0.925
        4  0.38303  -0.239  0.160 -0.275  1.000 -0.163 -0.205 -0.240
        5  0.96964   0.896  0.155  0.844 -0.163  1.000  0.576  0.841
        6  0.94583   0.424 -0.497  0.807 -0.205  0.576  1.000  0.647
        7  0.94707   0.777 -0.071  0.925 -0.240  0.841  0.647  1.000

As you can see, the allowed range of “alpha” and “n” is pretty wide, so I decided to evaluate whether the mean depends on where within that range I fix them.
And here we are … :sleepy:

Hi @emanuele_cardinali!

Your covariance matrices are very interesting! I’ll try to focus on your final question:

As a beginner with data analysis, I would also like to ask you:
Assuming the value doesn’t vary more than that, which one should I take and with what error?

So if I understand correctly, you want to measure your “mean” and quote an uncertainty, and you are now unsure which value to take (keep in mind that what you do should depend on your goal: do you want to measure the mean parameter precisely, or do you care about a good fit over the whole range of x?). Here is how I would think about the problem of measuring the mean; maybe it’s interesting for you.

The full fit with the 9 parameters shows that the power-law tail on the right side of the crystal ball is difficult to estimate, because its parameters are strongly correlated (0.83 in the correlation matrix). That’s why they have such large errors. Another problem with the right tail is that its parameters are strongly correlated with the background parameters (the correlation between c0 and n is 0.866).

In this situation I would try a fit with the right tail excluded. The transition to the power law is at mean - sigma * alpha, which is about 495 when rounded down. So I would fit only the region up to x = 495. That’s two fewer parameters to worry about, and systematic uncertainties from the shape you choose for the right tail are no longer relevant. This is particularly good because alpha was correlated with the mean, so ignoring the right tail will hopefully reduce your uncertainty on the mean!
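For concreteness, the transition point quoted above can be checked numerically. The sigma value below is my assumption, read off from the quoted numbers (mean ≈ 493.66, alpha = -1.4, boundary near 495), not a value from the actual fit:

```python
import math

mean = 493.66    # quoted mean (approximate)
sigma = 1.0      # ASSUMED for illustration; not taken from the fit printout
alpha = -1.4     # from the first fit

# Boundary between the Gaussian core and the power-law tail of the Crystal Ball;
# a negative alpha puts the tail on the right of the peak:
x_transition = mean - sigma * alpha
print(math.floor(x_transition))   # rounded down -> 495
```

With these inputs the boundary lands at 495.06, consistent with fitting only the region up to x = 495.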

Besides noting that the right tail is problematic, I’m afraid I can’t help you more without knowing the physics behind the plot and your measurement goals. Nobody can tell you which results to take without knowing which components of the fit to trust, and that depends also on the physics. Some things to think about:

  • The crystal ball fit told you the power law tail is on the right. Is that what you expect from the physics? In a standard crystal ball, the tail is on the left because it’s the power law from the final state radiation (assuming x is energy or mass or something like that). Would there be a reason for it being on the right? If not, one more reason to be careful with the right tail.
  • Is there a motivation for the two sigmas? Are there actually two different resolution effects you can think of, or is it just an ad-hoc solution?
  • How did you decide on the Chebyshev polynomial for the background? Is it maybe possible to fit far-away sidebands of x to constrain the background?

I hope this rambling helped a bit and gave you some ideas! By the way, I still think these are luxury problems, because your mean is already so precise. I would probably just take the difference between the largest and smallest mean values obtained, divide it by two, and quote that as the systematic uncertainty :sweat_smile:
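Applied to the two means quoted earlier in the thread (treating 493.662 and 493.670 as the extreme values observed), that recipe would look like this:

```python
import math

means = [493.662, 493.670]   # extreme mean values from the different fits
stat_err = 0.004             # statistical uncertainty of a single fit

syst = (max(means) - min(means)) / 2.0   # half the full spread -> 0.004
total = math.hypot(stat_err, syst)       # stat (+) syst in quadrature -> ~0.0057

print(f"syst = {syst:.4f}, total = {total:.4f}")
```

The result could then be quoted as 493.666 ± 0.004 (stat) ± 0.004 (syst), with whether and how to combine the two in quadrature left to the analysis conventions.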

Cheers,
Jonas

Hi @jonas
The solutions I found were “ad hoc”.
I had to complicate the fit function to include the tails. Choosing the CB rather than a second Gaussian unfortunately made the mean depend on the fit interval (an evident dependence).
From what I understand, then, the advice is to simplify the function as much as possible: for example, by substituting a constant for the polynomial.
Unfortunately, this is a precision measurement, and the final aim is to obtain, as far as possible, an uncertainty that does not exceed the statistical one from the fit.
I still have a few days to find a solution; if nothing works, I will settle for a compromise …

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.