ROOT and large data sets

I am very new to ROOT, coming mostly from R and MATLAB. I find that both are very slow when dealing with very large data sets. I understand that ROOT (probably in part due to C++) is very good at dealing with large data sets (100 MB to 1 GB). Is that true? For instance, can I “load” one very large matrix and do computations on it (say, inversion) without having enough physical memory? That is, will it use the hard disk when needed without me having to think about it?
Thanks!

Hi,

Yes, this is true: ROOT can analyze very large data sets very efficiently. However, for matrix computations, we require that the matrix fits in memory.
We have sparse matrix classes which optimize the memory allocation when the matrix is sparse.
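As an illustration, here is a minimal sketch using the TMatrixD and TMatrixDSparse classes (the sizes and values below are just placeholders):

```cpp
// matrices.C -- run with: root -l matrices.C
#include "TMatrixD.h"
#include "TMatrixDSparse.h"

void matrices()
{
   // Dense matrix: the whole matrix must fit in memory.
   const Int_t n = 1000;
   TMatrixD m(n, n);
   for (Int_t i = 0; i < n; ++i) m(i, i) = i + 1.;   // fill a simple diagonal

   Double_t det = 0;
   m.Invert(&det);   // in-place inversion; det receives the determinant

   // Sparse matrix: only the non-zero elements are stored.
   const Int_t nnz = 3;
   Int_t    row[nnz]  = {0, 1, 2};
   Int_t    col[nnz]  = {0, 1, 2};
   Double_t data[nnz] = {1., 2., 3.};
   TMatrixDSparse s(n, n);
   s.SetMatrixArray(nnz, row, col, data);
}
```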

Best Regards

Lorenzo

Thank you very much, Lorenzo.

I think I picked a bad example (matrix inversion). Maybe a better example would be computing an average over a large matrix (say, adding all entries and dividing by the total number of entries). I am mostly concerned with the data “loading”. In R or MATLAB I would have memory problems if I wanted to read all the data into memory. The solution then is to find a way of breaking up the data and doing the computation on parts of it that I can load. I wonder whether with ROOT this is avoidable, at least sometimes. I realize that there are no easy solutions for some particular operations like matrix inversion, and that this might be a basic question … thank you!

ROOT Trees are designed to loop over large quantities of data on disk, bringing them into memory in chunks.
See the Users Guide for more details, and the tutorials for examples.
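A minimal sketch of such a loop (the file, tree, and branch names are hypothetical; adapt them to your data):

```cpp
// readtree.C -- average a branch without holding all data in memory
#include <cstdio>
#include "TFile.h"
#include "TTree.h"

void readtree()
{
   TFile f("mydata.root");                  // hypothetical input file
   TTree *tree = (TTree*)f.Get("mytree");   // hypothetical tree name

   Double_t x = 0;
   tree->SetBranchAddress("x", &x);         // hypothetical branch, one value per entry

   // Only the entry being processed is held in memory;
   // ROOT reads the data from disk in buffered chunks.
   Double_t sum = 0;
   const Long64_t nentries = tree->GetEntries();
   for (Long64_t i = 0; i < nentries; ++i) {
      tree->GetEntry(i);
      sum += x;
   }
   printf("mean = %g\n", sum / nentries);
}
```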

Rene

Thank you!