Hadd file too large

Hello,
I am using ROOT version 5.34/05 and Python version 2.6.5.

I am using a Python script to hadd ROOT files together, 10 at a time.
I use os.system("hadd -f file11 file1 file2 … file9 file10") to make the actual call to ROOT.
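For reference, a minimal sketch of that merging step (the file names are placeholders; note that os.system blocks until the command exits and returns its exit status, which is worth checking):

```python
import os

def hadd_command(output, inputs):
    # Build the shell command; "hadd -f" overwrites an existing output file.
    return "hadd -f %s %s" % (output, " ".join(inputs))

def merge_batch(output, inputs):
    status = os.system(hadd_command(output, inputs))  # blocks until hadd exits
    if status != 0:
        raise RuntimeError("hadd exited with status %d" % status)

# e.g. merge_batch("file11.root", ["file%d.root" % i for i in range(1, 11)])
```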

Each of the 10 files is of size ~670M, so I expect the final output file to be of size ~6.7G.

I get the following error:

Error in <TFile::WriteBuffer>: error writing all requested bytes to file /nfs/dust/atlas/user/gherbert/TCAnalysis_nominal/histograms/mc12_8TeV.117050.PowhegPythia_P2011C_ttbar.merge.NTUP_COMMON.e1728_s1581_s1586_r3658_r3549_p1575.root, wrote 21294 of 29655
SysError in <TFile::WriteBuffer>: error writing to file /nfs/dust/atlas/user/gherbert/TCAnalysis_nominal/histograms/mc12_8TeV.117050.PowhegPythia_P2011C_ttbar.merge.NTUP_COMMON.e1728_s1581_s1586_r3658_r3549_p1575.root (-1) (File too large)
Warning in <TTree::CopyEntries>: The output TTree (mini) must be associated with a writeable file (/nfs/dust/atlas/user/gherbert/TCAnalysis_nominal/histograms/mc12_8TeV.117050.PowhegPythia_P2011C_ttbar.merge.NTUP_COMMON.e1728_s1581_s1586_r3658_r3549_p1575.root).
(the warning above is repeated five times in total)
Error in <TFileMerger::Merge>: error during merge of your ROOT files

Am I actually hitting a maximum file size? I thought ROOT files could be much larger.

Many Thanks,
Geoff

Hi Geoff,

Is it possible that you ran out of disk space or storage quota?

Cheers,
Philippe.

Hi Philippe,

There is certainly enough space in the location the output file is being written to.
I am running the merging step as a batch job.
What I have just found is that if I run the hadd step locally (same ROOT version), without using Python, just in my login shell:

hadd -f file11.root file1.root file2.root .. file9.root file10.root

then it runs without any trouble and produces the expected 6.6G file.

This is a workaround for me, but not a great one, and I don't understand why it fails when I run it on the batch node.
This isn't a Python issue, is it? Does os.system have some behaviour I'm not expecting, i.e. does it not wait for the command to complete? If so, it's possible that my batch job is ending before this command runs through. Or does it have a maximum buffer size issue?
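A quick sanity check I could run (on the batch node) to see whether os.system actually waits for the command to finish:

```python
import os
import time

start = time.time()
status = os.system("sleep 1")  # os.system blocks until the shell command exits
elapsed = time.time() - start

# status encodes the shell's exit status: 0 means success.
print(status == 0, elapsed >= 1.0)
```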

Many Thanks for your help,

Geoff

Hello,
This is solved.
The batch system I was using had a maximum file size limit that I wasn't aware of.
I increased the maximum file size limit for the batch job and everything works fine.
For anyone running on NAF2.0: h_fsize is the option needed.
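For example, on a Grid Engine based system the limit can be requested at submission time (the 10G value and the job script name here are just placeholders):

```shell
# Request a larger per-file size limit for the batch job (SGE/Grid Engine):
qsub -l h_fsize=10G merge_job.sh

# Or embed it in the job script itself:
#$ -l h_fsize=10G
```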
Many thanks,
Geoff
