
Optimizing Reading Arrays from TChain



ROOT Version: 6.22/01
Platform: Not Provided
Compiler: g++

Hello,

I’m using a program to read events from many different ROOT files, each with its own TTree. Each of my trees has about 7 branches, 6 of which hold large arrays (~30,000 elements each). I need to randomly choose one of the entries each time, so I’m using the code below to do so. The problem is that this takes a significant amount of run time: about 11 minutes for roughly 250,000 iterations, and I eventually need to do this 10,000 times. In the overall scheme I’ll end up reading each entry about 200 times, but optimizing that is difficult due to memory constraints, since I would have to store each array. I’m wondering if there’s any simple way to optimize the reading of the entries. At the same time, the size of my trees also seems to affect the run time, so I might need to optimize that too. I’m already using implicit multithreading. Thanks for the help.

```cpp
jentry = gRandom->Integer(nentries);   // pick a random entry index
Long64_t ientry = LoadTree(jentry);    // locate it within the chain
nb = fChain->GetEntry(jentry);         // read all branches for that entry
```

Hi @yravan,
and welcome to the ROOT forum!

As this is a performance-optimization problem, you should first check where the time is actually being spent. For this purpose you could use, for instance, perf to produce CPU flame graphs. Alternatively, as a first step, you could use gdb for a “poor man’s profiling”: run the program inside gdb, stop it with CTRL-C every once in a while, and check the stack trace with backtrace. If you do this 10 times and notice the program is doing a certain thing 8 or 9 of those times, you have a good indication of where it is spending its time.

With that said, TTrees are not designed for random access, and it is quite possible that this is what’s causing the “slowness”. Still, 11 minutes to read 250k different entries is abysmal, which points to some additional problem.
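One generic way to soften the random-access penalty (not something discussed in this thread, just a common trick): draw all the random entry numbers up front, sort them, and then read in ascending order, so the chain is swept forward and each compressed basket is decompressed at most once. A minimal sketch in plain C++, with `std::mt19937_64` standing in for `gRandom` and the actual `LoadTree`/`GetEntry` calls left to the caller:

```cpp
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

// Draw `niter` random entry numbers in [0, nentries) and sort them so
// the subsequent GetEntry calls sweep the chain forward, touching each
// basket at most once. std::mt19937_64 stands in for ROOT's gRandom;
// the parameters here are illustrative.
std::vector<std::int64_t> drawSortedEntries(std::int64_t nentries,
                                            std::size_t niter,
                                            std::uint64_t seed) {
    std::mt19937_64 rng(seed);
    std::uniform_int_distribution<std::int64_t> pick(0, nentries - 1);

    std::vector<std::int64_t> entries(niter);
    for (auto &e : entries) e = pick(rng);

    std::sort(entries.begin(), entries.end());  // random -> sequential access
    return entries;
}
```

Note that this only helps if the analysis does not require the draws in their original random order; if it does, one can carry the original position along with each index and reorder the results afterwards.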

In any case, would it be possible to load some of the TTrees into memory, work with those, then unload them and load the next batch? It sounds like each TTree fits in memory just fine, and if you perform 250k random accesses per TTree it may well be worth paying the cost of loading all of it into memory upfront.
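The batching idea can be sketched generically: pay for one sequential pass that copies each entry’s arrays into a cache, serve all the random draws from the cache, then free it before moving to the next batch of trees. A hedged plain-C++ sketch, where the `loadEntry` callback stands in for a real `fChain->GetEntry` plus branch copy (the class and its names are made up for illustration):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative in-memory cache for one tree: a single sequential pass
// reads every entry once (via the user-supplied loadEntry callback),
// after which any number of random accesses are served from RAM.
// With ~30,000 doubles per entry, one should size batches to fit memory.
class EntryCache {
public:
    EntryCache(std::size_t nentries,
               std::function<std::vector<double>(std::size_t)> loadEntry) {
        cache_.reserve(nentries);
        for (std::size_t i = 0; i < nentries; ++i)
            cache_.push_back(loadEntry(i));  // one forward sweep over the tree
    }

    // Random access afterwards is a cheap vector lookup.
    const std::vector<double> &entry(std::size_t i) const { return cache_.at(i); }
    std::size_t size() const { return cache_.size(); }

private:
    std::vector<std::vector<double>> cache_;
};
```

Destroying the `EntryCache` (or letting it go out of scope) releases the batch before the next set of trees is loaded, which matches the load/work/unload cycle suggested above.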

Cheers,
Enrico

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.