Using SMatrix for large matrices

Dear co-rooters,

I am trying to play a bit with SMatrices.
The reason is that I want to calculate covariance matrices.
In principle I should be able to do that for large matrices, on the order of 5e4 x 5e4.

So far I have the following code (note that I am running ROOT 5.34/36 on lxplus):

#include <iostream>
#include "TROOT.h"
#include "TGraph.h"
#include "TMath.h"
#include "Math/SMatrix.h"

using namespace std;

int linear_interpolation(){
// (1) Free-Up memory
	gROOT->Reset();
// (2) Define the size for the covariance matrices
	const int size_EVAL = 10;
	const int size_USER = 4;
// (3) Define the types of S-objects that will be used
	// SMatrices
	typedef ROOT::Math::SMatrix<float, size_EVAL, size_EVAL,ROOT::Math::MatRepSym<float, size_EVAL> >    		SMatrixSym_EVAL;
	typedef ROOT::Math::SMatrix<float, size_USER, size_USER,ROOT::Math::MatRepStd<float, size_USER> >               SMatrixStd_USER;
	typedef ROOT::Math::SMatrix<float, size_USER, size_EVAL,ROOT::Math::MatRepStd<float, size_USER, size_EVAL> >    SMatrixStd_USERxEVAL;
	// SVectors
	typedef ROOT::Math::SVector<float, size_EVAL*(size_EVAL+1)/2>                                 			SVector_COV;
// (4) Define and get the evaluated covariance matrix - symmetric matrix
	SVector_COV cov;
	
	int size_of_cov = sizeof(cov)/sizeof(cov[1]);
	for (int i=0; i<size_of_cov; ++i){ cov[i] = 0.02+pow(i,1.2)/1000; }
	
	SMatrixSym_EVAL 	V(cov);
// (5) Get the evaluated cross section and store it in a graph
	float En_EVAL[size_EVAL], xs_EVAL[size_EVAL];
	for (int i=0; i<size_EVAL; ++i){
		En_EVAL[i] = i+1;
		xs_EVAL[i] = cos(i);
	}
	TGraph *g_XS_EVAL = new TGraph(size_EVAL, En_EVAL, xs_EVAL);
	g_XS_EVAL->Draw("APL");
	
// (6) Calculate the interpolated cross section and store it in a TGraph as well
	float En_USER[size_USER], xs_USER[size_USER];
	for (int i=0; i<size_USER; ++i){
		En_USER[i]  = 1.2*(i+1);
		xs_USER[i] = g_XS_EVAL->Eval(En_USER[i]);
	}
	TGraph *g_XS_USER = new TGraph(size_USER, En_USER, xs_USER);
	g_XS_USER->SetLineColor(kBlue); g_XS_USER->Draw("PLsame");

// (7) Define the sensitivity matrix and fill it
	SMatrixStd_USERxEVAL	G;
	
	for (int i=0; i<size_USER; ++i){
		for (int j=0; j<size_EVAL; ++j){
			G(i,j) = pow(0.02, i)*j + i/100.; // 100. avoids integer division
		}
	}
// (8) Calculate the user covariance matrix V_USER = G * V * G^T - will be symmetric
	SMatrixStd_USER V_USER;
	V_USER = G*V*Transpose(G);
	
// (9) Print the matrices for checking
	std::cout << V << std::endl;
	std::cout << "**************************************************************************\n";
	std::cout << G << std::endl;
	std::cout << "**************************************************************************\n";
	std::cout << V_USER << std::endl;
	
	return 0;

}

I have two main issues to face here…

  1. The easiest one: To execute it, I first compile it using
    root[0] .L covariance_linear-interpolation.C++
    Then I execute it using
    root[1] linear_interpolation()
    If I change something in the code and try to re-compile it, I get
    Error: Function linear_interpolation() is not defined in current scope (tmpfile):1:
    Any idea why? I thought it might be related to RAM, but I am already calling gROOT->Reset().

  2. The tricky one is the matrix size… I want to do these calculations for very large matrices (50000 x 50000). I tried to test it for size_EVAL=1000 and size_USER=100 but the code doesn’t even compile.
    https://pastebin.com/XhDMJmr7
    I have the feeling the problem here is memory, but I thought that the SMatrix package was built for large matrices (see the rough memory estimate after this list).
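
For scale, a dense 50000 x 50000 float matrix alone is 50000 x 50000 x 4 bytes ≈ 10 GB (roughly half that for a packed symmetric representation), so I suspect the storage itself is already a problem on top of whatever makes the compilation fail.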

Any idea on how to solve these?

Thanks in advance!

@moneta maybe you can help here?

Thank you very much for trying to help!
I was getting desperate!

Hi,

I am sorry, but SMatrix has been built and optimised for small matrices, with a maximum size of order 10-100. The problem is compilation: SMatrix is based on expression templates, with operations sometimes defined recursively in terms of size N-1, so for a matrix of size N you end up compiling on the order of N functions. This will not scale.

For large matrices you can use TMatrix or, if you are doing heavy operations, call BLAS functions directly, as we do in the deep neural network implementation in ROOT.
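
For illustration, a minimal sketch of the same V_USER = G * V * G^T product done with TMatrixD (the sizes and the function name here are placeholders, not code from this thread):

#include "TMatrixD.h"

void tmatrix_covariance() {
	// Placeholder sizes: TMatrixD allocates its storage on the heap at run time,
	// so large dimensions are limited by available RAM, not by compilation.
	const int nEval = 1000;
	const int nUser = 100;

	TMatrixD V(nEval, nEval);              // evaluated covariance matrix
	TMatrixD G(nUser, nEval);              // sensitivity matrix
	// ... fill V and G here ...

	TMatrixD Gt(TMatrixD::kTransposed, G); // G^T
	TMatrixD V_user = G * V * Gt;          // nUser x nUser user covariance
	V_user.Print();
}

The key difference is that TMatrixD dimensions are run-time values, so nothing about the matrix size enters template instantiation; the limit becomes available memory rather than compile time.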

Lorenzo
