hadd multiprocessing reaches a writing limit

Dear experts,

I was trying to hadd about 120 ROOT files from the command line. Each file is about 200 MB and contains about 60 trees, so I know the merged output will be large, about 25 GB. The problem I ran into is that when I use hadd -j with 2 or more worker processes, each worker reports an error in TFile::WriteBuffer like:
Error in TFile::WriteBuffer: error writing all requested bytes to file /tmp/partial0_f93d0298-8276-11ed-921e-56bde183beef.root, wrote 4688 of 9433

SysError in TFile::WriteBuffer: error writing to file /tmp/partial0_f93d0298-8276-11ed-921e-56bde183beef.root (-1) (No space left on device)

I can successfully hadd the files if I do not use multiprocessing, i.e. with -j 1, and the multiprocess merge also works if I hadd only 1/10 of the files. But merging all of them fails for any -j >= 2. I tested up to -j 40, which assigns 3 files to each worker.
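
For reference, the command is along these lines (the output and input file names here are illustrative placeholders):

# parallel merge with 4 worker processes; each worker stages its result
# as a partial<N>_<uuid>.root file in the temp directory before the final merge
hadd -j 4 merged.root input_*.root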

It looks like there is a limit on the total size of files that the multiprocess merge can handle, but I do not know how to check or debug this, because I do not know where this /tmp directory lives; I ran the command on Fermilab's cmslpc interactive nodes.

My goal is to speed up the ROOT file merging process, so if you have other recommendations on how to complete this task, feel free to let me know as well.

Thank you,
Yao


ROOT Version: 6.22/09
Platform: bash shell
Compiler: Not Provided


df -h /tmp

Filesystem Size Used Avail Use% Mounted on
/dev/vda2 40G 22G 16G 59% /

OK, so it seems I cannot hadd more than 16 GB this way: the merged output is about 25 GB, but /tmp only has 16 GB available, so the partial files fill it up. Is there a way to point hadd at a directory under my local quota instead of /tmp?

Try: mkdir -p ${HOME}/tmp; export TMPDIR=${HOME}/tmp
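
For context: ROOT resolves its temporary directory via TSystem::TempDirectory(), which on Unix honours the TMPDIR environment variable, so after the export the partial files are staged under ${HOME}/tmp instead of /tmp. Putting it together (output and input names are illustrative):

# stage hadd's partial files under the home quota instead of /tmp
mkdir -p ${HOME}/tmp
export TMPDIR=${HOME}/tmp
hadd -j 4 merged.root input_*.root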

Thank you, that works.