Memory hoarding with RDFs

TL;DR

  • switch to a more recent ROOT version; it might make a difference
  • if that’s not enough, run each iteration of the loop in its own sub-process
  • you can also pre-compile a fully typed C++ function that does your Define/Filter/Snapshot calls (and then call that from Python) so that they don’t need to be just-in-time-compiled

Hi @MoAly98 , @vpadulan ,

let me add my two cents. As mentioned above, the issue is likely due to the ROOT interpreter (cling) allocating memory every time some C++ code needs to be just-in-time-compiled and never releasing it (that's a design decision in LLVM itself, not something that's easy to fix at the level of cling or RDataFrame).

With that said, could you please check how things look with v6.26.10 or v6.28.00? Although some level of memory hoarding will still be present, the situation might be better.

If that does not help, then the simplest solution is to run each iteration of the loop over trees in its own sub-process, so that its memory allocations are sandboxed and released when the sub-process exits (or, say, 10 iterations of the loop per sub-process, to amortize the cost of process start-up and teardown); see the sketch right below.
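For concreteness, here is a minimal Python sketch of the sub-process approach (not your exact code; the input file myfile.root, the tree names, the column x and the processing done per tree are placeholders). multiprocessing's maxtasksperchild recycles each worker after a fixed number of tasks, so any memory hoarded by the interpreter is returned to the OS when the worker exits:

```python
import multiprocessing as mp

def process_tree(tree_name):
    # Import ROOT inside the worker so each sub-process gets its own
    # interpreter state; it all goes away when the worker is recycled.
    import ROOT
    df = ROOT.RDataFrame(tree_name, "myfile.root")  # placeholder input file
    # Placeholder body: in practice this would be your Define/Filter/Snapshot chain.
    df.Filter("x < 10").Snapshot("outtree", f"skimmed_{tree_name}.root")

if __name__ == "__main__":
    tree_names = ["tree1", "tree2", "tree3"]  # placeholder list of trees
    # maxtasksperchild=10 means each worker handles 10 iterations before being
    # replaced, amortizing process start-up cost while bounding hoarded memory.
    with mp.Pool(processes=1, maxtasksperchild=10) as pool:
        pool.map(process_tree, tree_names)
```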

Please also take a look at Memory leak when processing RDataFrames in python loop - #6 by eguiraud and the discussion there; among other things it shows another possible mitigation/workaround at the user level: pre-compiling functions that perform fully typed Filter/Define/Snapshot calls. For example, if you have a C++ function like RNode ApplyFilter(RNode df) { return df.Filter([](int x) { return x < 10; }, {"x"}); } and call ROOT.ApplyFilter(df) from Python, RDF does not have to just-in-time-compile anything, whereas calling df.Filter("x < 10") directly forces RDF to just-in-time-compile the corresponding filter function. A possible setup is sketched below.
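As a rough sketch (assuming a column named x and a placeholder input file myfile.root, and mirroring the ApplyFilter helper above), the C++ helper can be declared once before the loop via ROOT.gInterpreter.Declare, and ROOT.RDF.AsRNode (available in recent ROOT versions) converts the concrete dataframe type to the generic RNode the helper expects:

```python
import ROOT

# Declared (and jitted) only once, outside the loop, instead of once per
# Filter string inside the loop.
ROOT.gInterpreter.Declare("""
#include "ROOT/RDataFrame.hxx"
using RNode = ROOT::RDF::RNode;

// Fully typed Filter: callable and column types are known at compile time.
RNode ApplyFilter(RNode df) {
    return df.Filter([](int x) { return x < 10; }, {"x"});
}
""")

df = ROOT.RDataFrame("tree", "myfile.root")        # placeholder input
filtered = ROOT.ApplyFilter(ROOT.RDF.AsRNode(df))  # convert to RNode before passing
print(filtered.Count().GetValue())
```

Note that the Declare call itself is still just-in-time-compiled, but only once per process rather than once per loop iteration; compiling the helper into a shared library ahead of time avoids even that.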

I hope (at least some of) this helps!
Cheers,
Enrico