Error in <TBufferFile::WriteByteCount>: bytecount too large (more than 1073741822)

Dear experts,

I am trying to merge several large ROOT files using the following short script:

import ROOT
import os, sys

ROOT.TTree.SetMaxTreeSize(1000000000000)

rm = ROOT.TFileMerger(False)
rm.SetFastMethod(True)

input_file_list = sys.argv[2:]
for f in input_file_list:
    print( 'Adding {}'.format( f ) )
    rm.AddFile( f )

output_file_name = sys.argv[1]
rm.OutputFile( output_file_name )
rm.Merge()
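For reference, the script would be invoked like this (script and file names are placeholders):

python merge.py merged_output.root input1.root input2.root ...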

The sizes of the input files range from 1 to 30 GB, and the final output file has a size of ~130 GB. The script seems to run, and a reasonable output file is produced, but it prints the following error twice:

Error in <TBufferFile::WriteByteCount>: bytecount too large (more than 1073741822)

In the past I used the same script to successfully produce far larger files (up to ~1 TB), albeit from smaller input files (typically less than 1 GB each). I am wondering what might be causing the error, and whether or not it can safely be ignored (the output file seems to work)? I presume it has something to do with the input file sizes, and that something too large is being loaded into RAM at some point, but I don't understand why. Is there a way to solve or circumvent the error? Thanks for the help!

regards,

Willem


ROOT Version: 6.12/07
Platform: Not Provided
Compiler: Not Provided



This seems to be an I/O issue. Maybe @pcanal can help you.

A small update: I also tried merging the file from smaller ROOT files (several of these small files make up the larger files I was merging earlier), and ran into the same error.

It is 'plausible' that the problem is that the TTree object has reached its maximum size (1 GB of metadata; note that the byte count in the error message, 1073741822, is just under 2^30 bytes, the largest record a TBufferFile can describe). To figure this out, find the maximum number of input files that does not lead to the error. Then look at the output of file->Map();, find the entry for your TTree, and check its size. If it is close to 1 GB, then this is the problem; otherwise we have to look elsewhere.
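In PyROOT, that check could look like the following (a minimal sketch; "merged_output.root" is a placeholder for your actual output file name):

import ROOT

# Open the merged output file (the name is a placeholder)
f = ROOT.TFile.Open("merged_output.root")

# Map() prints one line per record stored in the file; look for the
# record(s) belonging to your TTree and check whether the size
# approaches 1 GB.
f.Map()

f.Close()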

If the TTree metadata becomes very large, it is likely because there are too many 'baskets' (in the file above, also look at the result of tree->Print();). If this is the case, it is 'possible' that reclustering might help. To recluster, you would need to try rm.SetFastMethod(False), as sketched below.
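Both steps might look like this in PyROOT (a hedged sketch; "input.root" and "mytree" are placeholder names for one of your input files and its tree):

import ROOT

# 1) Inspect the basket structure of one input file
#    (file and tree names are placeholders).
f = ROOT.TFile.Open("input.root")
tree = f.Get("mytree")
tree.Print()  # per-branch summary, including the number of baskets
f.Close()

# 2) Merge with the slow method, so baskets are decompressed and
#    rewritten (reclustered) instead of copied verbatim.
rm = ROOT.TFileMerger(False)
rm.SetFastMethod(False)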

I also wonder if using the command-line hadd shows the same problem.
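For example (with placeholder file names):

hadd -f merged_output.root input1.root input2.root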
