Dear experts,
I was trying to hadd about 120 ROOT files from the command line; each file is about 200 MB and contains about 60 trees, so I know the merged output will be large, about 25 GB. The problem I ran into is that when I use hadd -j with 2 or more workers, each worker reports an error in TFile::WriteBuffer like:
Error in TFile::WriteBuffer: error writing all requested bytes to file /tmp/partial0_f93d0298-8276-11ed-921e-56bde183beef.root, wrote 4688 of 9433
SysError in TFile::WriteBuffer: error writing to file /tmp/partial0_f93d0298-8276-11ed-921e-56bde183beef.root (-1) (No space left on device)
I can successfully hadd the files if I do not use multiprocessing (or with j = 1), and the parallel merge also succeeds if I only hadd 1/10 of the files. But it fails with any j >= 2; I tested up to j = 40, at which point only 3 files are assigned to each worker.
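For reference, the command shape is roughly the following (filenames and the worker count are illustrative, not my actual ones):

```bash
# Parallel merge: fails with the /tmp write errors quoted above
hadd -j 8 merged.root input_*.root

# Serial merge: succeeds, but slow
hadd merged.root input_*.root
```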
It looks as if there is a limit on the total file size that the parallel merge can handle, but I do not know how to check or debug this, because I cannot tell which filesystem this /tmp/ directory lives on; I ran the command on Fermilab’s cmslpc interactive nodes.
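My current guess is that the partial files fill up the node-local /tmp. Below is a quick check, plus a workaround I am considering (assuming hadd's temporary location honors the TMPDIR environment variable, which I have not verified):

```bash
# How much free space does the node-local /tmp actually have?
df -h /tmp

# Redirect the partial files to a larger scratch area
# (assumption: hadd picks up TMPDIR for its temp directory)
mkdir -p "$PWD/hadd_tmp"
TMPDIR="$PWD/hadd_tmp" hadd -j 8 merged.root input_*.root
```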
My goal is to speed up the ROOT file merging, so if you have other recommendations on how to complete this task, feel free to let me know as well.
Thank you,
Yao
ROOT Version: 6.22/09
Platform: bash shell
Compiler: Not Provided