Problem filling large array using TTreeReader

Apologies if this is too basic a question, but I am confused about how ROOT works in the following case. I have an array,

vec vals[n_configs][n_par1]

where ‘vec’ is a struct I defined holding an array of 3 long doubles, and n_configs = 10000, n_par1 = 8 are unsigned integers defined at compile time.

I wanted to fill this array with values from a TTreeReader, which worked absolutely fine. However, when I tried to add a third dimension to my array, i.e. filling

vec vals[n_configs][n_par1][n_par2],

where n_par2 = 5 is also known at compile time, then my code crashed with a segmentation fault. After some experimentation it turned out that it ran fine when n_configs was small (e.g. 1000 or so), crashed about half the time when n_configs was around 3000, and crashed every time when n_configs was more than 8000.

I have no idea what could be causing this behaviour; my only guess is that there is a problem with using TTreeReader to copy data into large arrays.

So is this the case? Is there a limit to the size/dimensions of an array into which data can be copied from a TTreeReader? And if so, is there any workaround?
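For context, here is roughly the shape of the setup being described; this is not the actual code, and the struct layout, file name, tree name, and branch names below are all placeholders I made up:

```cpp
#include "TFile.h"
#include "TTreeReader.h"
#include "TTreeReaderValue.h"

// Assumed layout of the struct described above (three long doubles).
struct vec { long double v[3]; };

constexpr unsigned int n_configs = 10000;
constexpr unsigned int n_par1    = 8;
constexpr unsigned int n_par2    = 5;

void fill() {
   // Local (stack) 3D array of structs -- the layout that started
   // crashing once n_configs grew large.
   vec vals[n_configs][n_par1][n_par2];

   TFile f("configs.root");                  // placeholder file name
   TTreeReader reader("tree", &f);           // placeholder tree name
   TTreeReaderValue<double> x(reader, "x");  // placeholder branch

   for (unsigned int i = 0; reader.Next() && i < n_configs; ++i) {
      // copy the branch values into the array
      vals[i][0][0].v[0] = static_cast<long double>(*x);
      // ... and so on for the other entries ...
   }
}
```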



ROOT Version: 6.20/02
Platform: Linux (Ubuntu 20.04)
Compiler: g++ v9.3.0


Hi, and welcome to the forum!

That makes several megabytes of memory allocated on the stack, which is limited in capacity. You should allocate on the heap instead, e.g. with operator new.
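For illustration, a minimal sketch of the heap-allocated alternative; the struct definition and names are the same placeholders as above, and the std::vector variant is my own addition on top of the operator new suggestion:

```cpp
#include <array>
#include <vector>

struct vec { long double v[3]; };

constexpr unsigned int n_configs = 10000;
constexpr unsigned int n_par1    = 8;
constexpr unsigned int n_par2    = 5;

void example() {
   // Option 1: raw operator new, as suggested above; the memory lives on
   // the heap and must be released with delete[] when you are done.
   auto *vals = new vec[n_configs][n_par1][n_par2];
   vals[0][0][0].v[0] = 1.0L;   // indexing works exactly as before
   delete[] vals;

   // Option 2 (my own variation, not from the thread): a std::vector of
   // nested std::arrays also lives on the heap and frees itself
   // automatically when it goes out of scope.
   std::vector<std::array<std::array<vec, n_par2>, n_par1>> vals2(n_configs);
   vals2[0][0][0].v[0] = 1.0L;
}
```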

I will try this, thanks (although if there is also a way to use TTreeReader with large statically declared arrays, I would be interested to see it).

However, the size of the array that was crashing would be 10000*5*3*8 bytes = 1.2 MB. This is quite large, but does not seem huge, and when creating the ROOT files I was working with several arrays of structs (on the stack) of around 2-4 MB without any issue. So I was wondering whether this is a problem specific to TTreeReader rather than just a general memory issue.

Well, I might be wrong, but I count 10000*5*8*16 = 6400000 bytes = 6.4 MB (AFAIK, long double is 16 bytes on 64-bit platforms)
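If you want to confirm the 16-byte figure on your own machine, a quick standalone check (a sketch, nothing ROOT-specific):

```cpp
#include <cstdio>

int main() {
   // With g++ on x86-64 Linux this typically prints 16 (80-bit extended
   // precision padded to 16 bytes); the exact value is platform-dependent.
   std::printf("sizeof(long double) = %zu\n", sizeof(long double));
   return 0;
}
```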

I don’t think so… But @axel might contradict me

Ah, I missed a factor of 16 then :frowning: Dynamically allocating the arrays appears to work so far, though. Thanks!


You’re welcome