I want to merge 30 files of ~15 GB each, containing a TNtuple, using hadd, but I get the following error:
Fatal in <TFileMerger::RecursiveRemove>: Output file of the TFile Merger (targeting xxx.root) has been deleted (likely due to
a TTree larger than 100Gb)
Indeed, the target file has reached 100 GB when the error occurs. The files are generated from simulation code that writes an entry at every break point.
I have 512 GB of RAM on my server, and I don't want to slow down hadd by using more compression if I can help it. So my question is: is there an easy way to get around the apparent 100 GB limit?
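For reference, the workaround I would naively try is to do the merge in a small ROOT macro instead of plain hadd, raising the tree-size limit first. This is only a sketch, assuming that TTree::SetMaxTreeSize governs the 100 GB cutoff and that TFileMerger honors it; the file names are placeholders:

```cpp
// merge_big.C -- sketch of a merge with a raised tree-size limit.
// Assumption: TTree::SetMaxTreeSize controls the ~100 GB cutoff
// that hadd/TFileMerger trip over; file names below are placeholders.
#include "TFileMerger.h"
#include "TTree.h"

void merge_big() {
    // Raise the global per-tree file-size limit (default ~100 GB)
    // to 1 TB so the merged TNtuple is not cut off at 100 GB.
    TTree::SetMaxTreeSize(1000000000000LL); // in bytes

    TFileMerger merger;
    merger.OutputFile("merged.root");
    merger.AddFile("input_00.root");
    merger.AddFile("input_01.root");
    // ... AddFile() for the remaining inputs ...
    merger.Merge();
}
```

Would something along these lines work, or does hadd itself need to be told about the new limit?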
On the other hand, the ntuples are written with the default compression level (1). Perhaps I could write the files with a higher compression level and use that same level in hadd (I see that using different compression levels for input and output slows the hadd-ing down quite a bit)?
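If compression turns out to be the answer, my understanding is that hadd can be told the target file's compression level directly with its -f flag, so the re-compression cost would at least be explicit. A sketch, with placeholder file names:

```shell
# Merge with an explicit target compression level (here 6).
# hadd's -f<level> sets the compression setting of the output file;
# matching the inputs' level should avoid re-compressing each basket.
hadd -f6 merged.root input_*.root
```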
[[[Edit: Just realized: I am using Geant4's G4RootAnalysisManager (v9.6) in my simulation code. This class does not expose the compression-level setting, so it seems I am stuck writing the files at the default compression level.]]]
Thoughts and answers would be much appreciated.