ROOT 6.04.14: hadd 100 GB limit, TFileMerger::RecursiveRemove

Dear experts,

I am familiar with the issue of hadd being unable to produce an output tree larger than 100 GB, but I just hit it by accident and the error message (pasted below) gave me hope that a fix exists. Is one in the pipeline, or should I give up these girlish dreams?

Thanks, Lily

Fill: Switching to new file: ./wjets_1.root
Fatal in <TFileMerger::RecursiveRemove>: Output file of the TFile Merger (targeting ./wjets.root) has been deleted (likely due to a TTree larger than 100Gb)
#0  0x00000037718ac68e in __libc_waitpid (pid=<value optimized out>, stat_loc=0x7fffffff82ac, options=0) at ../sysdeps/unix/sysv/linux/waitpid.c:32
#1  0x000000377183e609 in do_system (line=<value optimized out>) at ../sysdeps/posix/system.c:149
#2  0x00002aaaacedbc5a in TUnixSystem::StackTrace() () from /cvmfs/
#3  0x00002aaaace4bffa in DefaultErrorHandler(int, bool, char const*, char const*) () from /cvmfs/
#4  0x00002aaaace4ba72 in ErrorHandler () from /cvmfs/
#5  0x00002aaaace31754 in TObject::Fatal(char const*, char const*, ...) const () from /cvmfs/
#6  0x00002aaaace57be3 in THashList::RecursiveRemove(TObject*) () from /cvmfs/
#7  0x00002aaaace30e8a in TObject::~TObject() () from /cvmfs/
#8  0x00002aaaac4307f9 in TFile::~TFile () from /cvmfs/
#9  0x00002aaaab08ab8f in TTree::ChangeFile(TFile*) () from /cvmfs/
#10 0x00002aaaab091502 in TTree::CopyEntries(TTree*, long long, char const*) () from /cvmfs/
#11 0x00002aaaab090b4d in TTree::Merge(TCollection*, TFileMergeInfo*) () from /cvmfs/
#12 0x00002aaaac45d6f5 in TFileMerger::MergeRecursive(TDirectory*, TList*, int) () from /cvmfs/
#13 0x00002aaaac45c47a in TFileMerger::PartialMerge(int) () from /cvmfs/
#14 0x00000000004020b5 in main ()



I would hope that hadd can handle this more elegantly than it currently does. This looks like half a bug: half, because the code clearly knows what is going on.

I have asked @pcanal, the master of hadd, to take a look. He will give you an authoritative answer rather than my random ramblings.




I will have to check whether we can handle this more elegantly in the hadd code.

In the meantime, you can lift the limitation by creating a rootlogon.C file containing (at least)

    TTree::SetMaxTreeSize( 1000000000000LL ); // 1 TB

You can raise the limit to as much as std::numeric_limits&lt;Long64_t&gt;::max() - 1 (9223372036854775806 bytes, i.e. roughly 9200 PB).


