Preserving Event Order with TBufferMerger and TTreeProcessorMT

I am currently playing around with writing code to parallelize my analysis of TTrees. Generally I start with a single TTree in a single file, and my goal is to do the following:

  • Loop over all events in the tree and extract data from the branches
  • Do some (perhaps complex and time-consuming) processing of the data
  • Write the results into a new TTree

I have been playing around with some of the very nice newer parallelization features in ROOT6 and would like to make use of these if possible. I have created some scripts that use TTreeProcessorMT and TBufferMerger to do this sort of thing in parallel. Here is a simplified example:

void ParallelProcess(TTree* t){
  int nthreads = 8;
  ROOT::EnableImplicitMT(nthreads);
  ROOT::Experimental::TBufferMerger merger("output.root");
  ROOT::TTreeProcessorMT ttp(*t);
  auto myFunction = [&](TTreeReader& reader) {
    TTreeReaderValue<double> rx(reader, "x"); // assume "x" is a branch in t
    auto f = merger.GetFile(); // thread-local file; the new TTree attaches to it
    TTree tout("tout", "");
    double x2;
    tout.Branch("x2", &x2);
    while (reader.Next()) {
      x2 = pow(*rx, 2);
      tout.Fill();
    }
    f->Write();
  };
  ttp.Process(myFunction);
}

This all works fine; however, using TBufferMerger causes the order of events in the output TTree to differ from the input. This prevents me from easily correlating input and output parameters in later analysis, for example with TTree::AddFriend() and TTree::Draw(). I have also tried avoiding TBufferMerger by creating one thread-local TFile + TTree per thread and chaining them together afterwards, but again the event order is not preserved in the TChain.

My main question is: is there any way to use TTreeProcessorMT (or a similar “new” ROOT6 parallelization feature) in such a way that preserves event ordering in the output TTree?

Of course, I could accomplish this by creating my own threads explicitly and manually divvying up the entries in the input tree between the threads. But if there’s a way to do this using the newer, implicit features I would be interested to hear about it.

As an aside, I have noticed that sometimes TTreeProcessorMT seems to use serial processing, while at other times it processes in parallel. It seems to correlate with the size of the input TTree. Is this a “feature”, i.e. is TTreeProcessorMT smart enough to figure out whether parallel processing is beneficial? Or am I missing something?

ROOT Version: 6.12
Platform: Ubuntu 16.04 Linux
Compiler: g++ 5.4.0

Hi Greg,
it’s not strictly TBufferMerger's fault that the output events are unordered: it’s one of the inherent problems of concurrent execution. TTreeProcessorMT splits your data over clusters of entries, respecting the boundaries returned by TTree::GetClusterIterator (which explains why, for a small enough input, you will have only one chunk, the single cluster in the input file, analyzed by one core). These clusters are processed concurrently, and any mechanism to preserve their order would require expensive synchronization between the worker threads: each thread chugs along over its clusters and should not wait on other threads in order to perform the processing “in order”.

So your input data is processed in a scrambled manner because synchronizing the different worker threads to do things in order would require expensive waiting between the threads and destroy the benefits of parallelization.

On top of that, TBufferMerger has the same issue: each TBufferMergerFile returned by TBufferMerger::GetFile writes its buffers to the output file independently of the others, so the output order may differ even from the (already scrambled) order in which TTreeProcessorMT processed the data.

I don’t know of an easy solution here – the simplest would be to have both old and new branches in your output TTree. RDataFrame uses TTreeProcessorMT and TBufferMerger under the hood and offers a handy syntax to do just that:

ROOT::EnableImplicitMT(); // opt in to multi-threaded execution
ROOT::RDataFrame df("tree", "files*.root");
df.Define("x2", "x*x").Snapshot("tout", "output.root"); // writes both "x" and "x2"

Hope this clarifies things a bit!

That clears things up a lot, thank you!
