
General Least Squares with correlated data set

Hello Everyone,

I am trying to solve a linear equation

A*x= B

with A an (n,m) matrix, x an m-element vector, B an n-element vector, and V the (n,n) covariance matrix associated with B.
I found the following example that does this for uncorrelated errors, but not for a non-diagonal covariance matrix V.

I believe this has already been solved using Minuit in:


but I am not sure I understand the answer…

Thank you for your time.

Regards,
Estelle
(ROOT Version: 6.14)

_Platform: Ubuntu 20


So I managed to brute-force it by computing the analytical solution

x = inv(A’ * inv(V) * A) * A’ * inv(V) * B

where inv(M) denotes the inverse of a matrix M
and A’ the transpose of the matrix A.

But if someone has a better solution, I am happy to read it.

Regards,
Estelle


Apologies for not replying at all - maybe @moneta has a recommendation here?

Hi,

Both the LinearFitter and the standard least-squares fitting in ROOT assume uncorrelated errors. If you want to fit correlated data points, you need to either use the direct solution above (in the case of a linear problem), or write a general least-squares function that includes the data correlation matrix and then minimise it using Minuit.

Best regards

Lorenzo

Hi Estelle,
I see your “brute force” solution and want to draw your attention
to an efficient implementation in the matrix package.

Have a look at the non-member function

https://root.cern/doc/master/TDecompChol_8cxx.html#a6f8408a544cb8b5ac2a24db5840ab8ea

////////////////////////////////////////////////////////////////////////////////
/// Solve min {(A . x - b)^T W (A . x - b)} for vector x where
///   A : (m x n) matrix, m >= n
///   b : (m)     vector
///   x : (n)     vector
///   W : (m x m) weight matrix with W(i,j) = 1/std(i)^2  for i == j
///                                         = 0           for i != j

TVectorD NormalEqn(const TMatrixD &A,const TVectorD &b,const TVectorD &std)
{  
   if (!AreCompatible(b,std)) {
      ::Error("NormalEqn","vectors b and std are not compatible");
      return TVectorD();
   }
   
   TMatrixD mAw = A; 
   TVectorD mBw = b; 
   for (Int_t irow = 0; irow < A.GetNrows(); irow++) {
      TMatrixDRow(mAw,irow) *= 1/std(irow);
      mBw(irow) /= std(irow);
   }
   TDecompChol ch(TMatrixDSym(TMatrixDSym::kAtA,mAw));
   Bool_t ok;
   return ch.Solve(TMatrixD(TMatrixD::kTransposed,mAw)*mBw,ok);
}

Here a diagonal weight matrix is assumed, but it can trivially be replaced by a full matrix.

The tutorial solveLinear.C might also give some inspiration:

https://root.cern/doc/master/solveLinear_8C.html
-Eddy

Hi Lorenzo,

Thank you for your answer. I found something similar but I don’t understand how I can minimize it using Minuit.

Best Regards

Hi Eddy,

Thank you for your answer.
I did look at the tutorial and at this function, and I tried to replace the standard deviation by the full covariance matrix, but the problem was not trivial for me as it involves a matrix square root.

Regards,
Estelle

Hi Estelle,

Going back to your original question of solving Ax = B with a covariance matrix V: you have to solve min{(Ax - b)^T W (Ax - b)}, where W = V^-1.

You can bring this back to the form of the example, min{(A’x - b’)^T (A’x - b’)},
by applying the following transformation:

A’ = U A and b’ = U b, where W = U^T U

W (the inverse of the covariance matrix) is a positive definite symmetric matrix, so U can
be obtained through a Cholesky decomposition on W:

TDecompChol ch(W);
U = ch.GetU();

-Eddy