Thread-safe TTree::Fill

Is there any chance of getting a thread-safe TTree::Fill and TTree::SetBranchAddress, perhaps one which can take a thread ID argument?

I currently have an object I use with RDataFrame’s Book function, which deals with this by creating a separate in-memory TTree per slot, then merges these before writing. I’d really rather not have these trees be in-memory since this leads to memory use linear in the size of the input, but don’t want to clutter the output file with all the unmerged trees either.

@pcanal ? It seems @Wile_E_Coyote edited my post to ping you, and I edited back thinking I’d done so accidentally.

Hi @beojan,
there is no chance to get a thread-safe TTree::Fill I’m afraid, the internals are just not designed for it. RNTuple, the “next TTree”, currently in the experimental phase, is thread-friendly (but also changes the TTree format slightly, so it’s not exactly a drop-in replacement).

Note that you don’t have to keep the thread-local TTrees in memory until the end. For example, RDF’s Snapshot uses TBufferMerger to write thread-local TTrees to the same on-disk TTree concurrently from multiple threads. What is your use case, and why can’t you use Snapshot to write out a TTree from multiple threads?
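For reference, the usual TBufferMerger pattern (similar to ROOT's multi-threading tutorials) looks roughly like this. This is a minimal sketch assuming a ROOT build where TBufferMerger still lives in ROOT::Experimental; the tree and branch names are illustrative:

```cpp
#include <ROOT/TBufferMerger.hxx>
#include <TROOT.h>
#include <TTree.h>
#include <thread>
#include <vector>

int main() {
   ROOT::EnableThreadSafety();
   ROOT::Experimental::TBufferMerger merger("out.root");

   auto work = [&merger](int seed) {
      auto file = merger.GetFile();       // thread-local file handle
      TTree t("events", "demo");          // attaches to the merger's file
      int x = seed;
      t.Branch("x", &x);
      for (int i = 0; i < 1000; ++i) { x = seed + i; t.Fill(); }
      file->Write();                      // hands the buffer over to the merger
   };

   std::vector<std::thread> workers;
   for (int i = 0; i < 4; ++i) workers.emplace_back(work, i);
   for (auto &w : workers) w.join();
   return 0;
}
```

Each thread fills its own TTree; the merger serializes the buffers into a single on-disk tree.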

P.S. @Wile_E_Coyote it is not good forum etiquette to edit other users’ posts adding content they might not have wanted to add :slight_smile:

What is your use case and why can’t you use Snapshot to write out a TTree from multiple threads?

I work with an internal format where I store classes for jets / an entire event, but I’m writing out a flat TTree. To do this, I have a Book helper class that calculates each branch and fills the TTree. Essentially, it’s a fancy combination of a whole bunch of Defines and a Snapshot (except it doesn’t persist the Defined columns). I also wanted to write into an already open TFile without taking ownership of it.

It looks like TBufferMerger is the right tool for the job. Why is it still in ROOT::Experimental by the way, now that RDataFrame is no longer experimental?

I think it was simply overlooked; I’ve added it to my to-do list to take it out of the Experimental namespace for the next ROOT release. As you point out, if TBufferMerger were not ready for production, RDataFrame wouldn’t be either.

Out of curiosity, why doesn’t a whole bunch of Defines plus a Snapshot (of just a few selected columns) solve your use case? As far as I can tell, the only difference is that your custom action might run a bit faster (which might or might not warrant the trouble of writing a custom action helper, depending on your need for speed).


I have 30-odd branches written to 3 to 7 separate trees.

With the custom helper (which isn’t that custom, it’s pretty generically written), I can initialize the helper with a vector-of-tuples of branch names and lambdas defining them. The helper calculates these columns as it fills the trees.
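Such a branch-spec table can be sketched without any ROOT dependencies; the `Event` struct, branch names, and `makeSpecs` below are hypothetical stand-ins, not the actual helper:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical event type standing in for the internal format.
struct Event { double pt = 0.0; int njets = 0; };

// One entry per output branch: the branch name plus a lambda that
// computes the branch value from the event.
using BranchSpec =
    std::vector<std::pair<std::string, std::function<double(const Event&)>>>;

BranchSpec makeSpecs() {
   return {
       {"jet_pt", [](const Event& e) { return e.pt; }},
       {"n_jets", [](const Event& e) { return double(e.njets); }},
   };
}
```

The helper can iterate over this table once per event, evaluating each lambda and filling the corresponding branch.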

This way, my main function and the compute graph aren’t cluttered with 30 Defines, and I can write the trees into a single open file (this was the main reason for this design choice, since Snapshot takes a filename and opens the file itself). Meanwhile, histograms are created with HistoND and have to be written into a TFile manually.

With TBufferMerger, I’m having to change things a little because I can’t write into an open TFile or TDirectory anymore, but I can at least create the TBufferMerger beforehand, and use it to write my histograms as well.

OK, that’s not working. I’m getting a segfault when the TBufferMerger finally writes the output.

Here’s what I do:

  1. Create the TBufferMerger, and create the helper, passing a pointer to the TBufferMerger as a constructor argument
  2. In Initialize, I loop n_workers times, doing a GetFile(), creating a tree, resetting kMustCleanup, setting the branch addresses (I have one copy of the struct per thread), and saving the tree (unique_ptrs) and file (shared_ptrs from GetFile) into vectors.
  3. In Exec, the appropriate tree is filled, depending on the slot number.
  4. In Finalize, I call file->Write() on each file.
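The steps above can be sketched roughly as follows, assuming ROOT headers are available; `FlatTreeHelper`, `Payload`, and the branch names are hypothetical, and the member declaration order is deliberate for the reason discussed below:

```cpp
#include <ROOT/TBufferMerger.hxx>
#include <TTree.h>
#include <memory>
#include <vector>

struct Payload { float x = 0.f; };   // one copy per thread

class FlatTreeHelper {
   ROOT::Experimental::TBufferMerger *fMerger;   // not owned
   std::vector<std::shared_ptr<ROOT::Experimental::TBufferMergerFile>> fFiles;
   std::vector<std::unique_ptr<TTree>> fTrees;   // declared after fFiles:
                                                 // destroyed before them
   std::vector<Payload> fData;

public:
   FlatTreeHelper(ROOT::Experimental::TBufferMerger *m, unsigned nWorkers)
      : fMerger(m), fData(nWorkers) {}

   void Initialize() {
      for (unsigned i = 0; i < fData.size(); ++i) {
         auto f = fMerger->GetFile();   // slot-local file handle
         auto t = std::make_unique<TTree>("events", "flat output");
         t->ResetBit(kMustCleanup);
         t->Branch("x", &fData[i].x);
         fFiles.push_back(std::move(f));
         fTrees.push_back(std::move(t));
      }
   }

   void Exec(unsigned slot, float x) {
      fData[slot].x = x;
      fTrees[slot]->Fill();   // no locking needed: one tree per slot
   }

   void Finalize() {
      for (auto &f : fFiles) f->Write();   // flush each slot's tree
   }
};
```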

The segfault occurs when the TTree unique_ptrs are deleted, in std::default_delete<TTree>::operator(). Any idea what I’m doing wrong?

Alright, if you want all your output in the same output file then that might be a good reason to re-implement Define+Snapshot.

If the data written out is sane, then it’s probably just an ownership problem – my best guess: the files delete the TTrees they contain, then the unique_ptrs also delete the TTrees they contain. Assuming you have the debug symbols required, it should be easy to verify with valgrind --track-origins=yes --suppressions=$ROOTSYS/etc/valgrind-root.supp ./yourprogram.
A possible fix: have the unique_ptr&lt;TTree&gt;s go out of scope before the file handles, because if you destroy the TTrees first their destructors de-register them from the TFiles that contain them.
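The double-delete can be demonstrated without ROOT. In this mock (all names are illustrative), the "file" deletes the "trees" registered to it; destroying the tree first lets it de-register itself, so the file's destructor no longer deletes it a second time:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

static int g_tree_deletions = 0;

struct MockFile;
struct MockTree {
   MockFile *dir = nullptr;
   ~MockTree();   // de-registers from dir, mimicking TTree/TDirectory
};
struct MockFile {
   std::vector<MockTree *> trees;
   ~MockFile() {                       // the file owns its trees...
      for (auto *t : trees) delete t;  // ...and deletes them on close
   }
   void deregister(MockTree *t) {
      trees.erase(std::remove(trees.begin(), trees.end(), t), trees.end());
   }
};
MockTree::~MockTree() {
   ++g_tree_deletions;
   if (dir) dir->deregister(this);
}

int safe_order() {
   g_tree_deletions = 0;
   auto file = std::make_unique<MockFile>();
   auto tree = std::make_unique<MockTree>();
   tree->dir = file.get();
   file->trees.push_back(tree.get());
   tree.reset();   // tree destroyed first: it de-registers itself
   file.reset();   // file destructor finds no trees left to delete
   return g_tree_deletions;   // exactly one deletion, not two
}
```

Reversing the two `reset()` calls would make the file delete the tree and then the unique_ptr delete it again: the segfault.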


OK, that fixed the segfault, but it looks like TBufferMerger isn’t worth the trouble over keeping the trees in memory. Total memory use is now 50 GB on my big test (it was 67 GB keeping the trees in memory), but all the threads are now I/O bound.

More importantly, the histograms and TParameters I write to the file from the main thread never get written out.

It seems like the TTrees are not what’s using memory. With TBufferMerger and the default autoflush/autosave every 30MB, each in-memory buffer should take around 30MB (@pcanal please correct me if I’m wrong).

Uhm that sounds like a bug in your application :confused:


Yes, it should be only (roughly) 2×30 MB (one for the uncompressed buffer, one for the compressed/on-file version).

@beojan did you find the cause of the memory use?

I didn’t figure out what the memory issue was, and because of the other issue with writing out the smaller objects (evidently, just grabbing an extra file in the main thread and keeping it open to write these into didn’t work), I gave up and returned to what I was doing originally. By running over small chunks of data at a time (as is necessary on the grid anyway), I can keep memory use manageable.

This topic was automatically closed after 6 days. New replies are no longer allowed.