I got this for several data sets and was able to reproduce it with this sample data:

```
import ROOT
import numpy
y=numpy.array([5,3,1.0])
xerr=numpy.array([1,1,1.0])
x=numpy.array([1,2,3.0])
yerr=numpy.array([0.1,0.2,0.7])
a=ROOT.TGraphErrors(len(x),x,y,xerr,yerr)
a.Fit("pol1")
```

OUTPUT:

```
FCN=2.00003 FROM MIGRAD STATUS=CONVERGED 245 CALLS 246 TOTAL
EDM=1.28797e-009 STRATEGY= 1 ERROR MATRIX UNCERTAINTY 25.5 per cent
EXT PARAMETER STEP FIRST
NO. NAME VALUE ERROR SIZE DERIVATIVE
1 p0 -5.00789e+005 1.66517e+007 -4.29583e+005 -1.79607e-009
2 p1 2.50387e+005 8.32551e+006 2.14780e+005 -3.71990e-009
<ROOT.TFitResultPtr object at 0x0D22F2B8>
```

As you can see, the fit parameters are nonsense, and supplying initial guesses does not help.

*ROOT Version:* 5.34.38, 5.34.30, 5.34.22, 5.30.06, all 32-bit versions, with Python 2.7.18 + NumPy

*Platform:* Windows

@assosiation,

Happy to see that you are enjoying yourself with ROOT.

Below is a script based on your example:

```
{
auto c1 = new TCanvas("c1","A Simple Graph with error bars",200,10,700,500);
c1->SetFillColor(42);
c1->SetGrid();
c1->GetFrame()->SetFillColor(21);
c1->GetFrame()->SetBorderSize(12);
const Int_t n = 3;
Double_t x[n] = {-0.22, 0.05, 0.25};
//Double_t y[n] = {1,2.9,5.6};
Double_t y[n] = {-1,-2.9,-5.6};
Double_t ex[n] = {.05,.1,.07};
Double_t ey[n] = {.8,.7,.6};
auto gr = new TGraphErrors(n,x,y,ex,ey);
gr->SetTitle("TGraphErrors Example");
gr->SetMarkerColor(4);
gr->SetMarkerStyle(21);
gr->Fit("pol1");
gr->Draw("AP");
}
```

By flipping the sign of y you can test both a positive and a negative slope. Either way, smooth sailing.

But making the x-errors larger than the y-errors gives an error matrix that is not positive definite. Realize that although pol1 is a linear model, the moment x-errors are supplied the fit becomes non-linear (and starts to depend on the starting values).

Let’s see if, in the case of one degree of freedom (3 data points, two fit parameters), something can be done.


Thank you for the clarification about the error matrix, I hadn’t thought about that! About the NDF of 1: I only wrote the example code that way; I noticed the problem on a reasonable data set and tried to reproduce it with made-up numbers. I will think about working around this by making the slope positive or flipping the axis.

I’ve been using ROOT for a while now, but every time I find something like this I feel like a newbie again, hahaha.

As I wrote earlier, the moment you introduce errors in x, a fit always becomes non-linear and starting values matter. The error in a data point is calculated as ey_new^2 = ey^2 + (dy/dx * ex)^2, and dy/dx depends on the values of the function parameters.
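A minimal numpy sketch of that propagation (the function name `effective_ey` is my own illustration, not a ROOT API):

```python
import numpy as np

# The OP's sample errors: x-errors comparable to the x-spacing
ex = np.array([1.0, 1.0, 1.0])
ey = np.array([0.1, 0.2, 0.7])

def effective_ey(slope, ey, ex):
    """Effective y-error once x-errors are projected through the
    local slope dy/dx; for pol1, dy/dx is just the parameter p1."""
    return np.sqrt(ey**2 + (slope * ex)**2)

# The chi2 weights now depend on the fit parameter itself:
print(effective_ey(0.5, ey, ex))  # slope 0.5: y-errors still matter
print(effective_ey(5.0, ey, ex))  # slope 5: x-errors dominate
```

Because the weights change with the slope, the least-squares problem is no longer linear in the parameters, which is why the starting values matter.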

After the generalization of the fitting code I can no longer find where this happens; @moneta will know.

Anyhow, your problem will be solved by first doing a fit without errors in x, and then using that fit result as the starting point for the final fit with errors in x:

```
gr->Fit("pol1","EX0"); // first fit with y-errors only (ignore x-errors)
gr->Fit("pol1");       // refit with x-errors, seeded by the previous result
```
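The same two-step idea can be sketched in plain numpy as an iterative "effective variance" fit (the helper `wlinfit` and the iteration count are my own illustration, not ROOT's internals):

```python
import numpy as np

# Data from the C++ example above
x  = np.array([-0.22, 0.05, 0.25])
y  = np.array([-1.0, -2.9, -5.6])
ex = np.array([0.05, 0.10, 0.07])
ey = np.array([0.8, 0.7, 0.6])

def wlinfit(x, y, w):
    """Weighted least-squares straight line y = p0 + p1*x."""
    sw = np.sqrt(w)
    A = np.vstack([np.ones_like(x), x]).T
    p, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return p

# Step 1: fit with y-errors only (what the "EX0" option does).
p0, p1 = wlinfit(x, y, 1.0 / ey**2)

# Step 2: refit with effective errors, seeded by step 1, until stable.
for _ in range(20):
    w = 1.0 / (ey**2 + (p1 * ex)**2)
    p0, p1 = wlinfit(x, y, w)

print(p0, p1)
```

Each pass recomputes the weights from the current slope, so the seed from step 1 keeps the iteration near the right minimum instead of wandering off as in the OP's output.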
