Basket’s WriteBuffer failed

I am experiencing a weird error while running my preselection on the plain ntuples. I process a lot of events (1530700000), from which I select 44182896 events (I print "outputTree->GetEntries()" after the preselection is over).
While saving the ROOT file I get the error message "basket's WriteBuffer failed", but the file is still created and I am able to process it further in my analysis sequence. I assume this is not terribly many events, because the output file that is stored afterwards is only 5 GB large (or is this a limit driven by the cluster properties?).
However, I see that the output file contains only 42809640 entries (about 1.3 million events missing). What I think is happening is that some of the baskets are too large to be written to the file. I just don't know what to do about it, except storing many trees instead of one, but even that might not solve the problem.

Cheers, Olena

the error message I get when I call:



  Error in <TBranch::TBranch::WriteBasketImpl>: basket's WriteBuffer failed.

Can you share more of your code? From the fact that you are doing 'cd', the problem is more likely that some of the baskets are being written to the input file (most likely because the TTree is created before the output TFile is created, and thus the TTree is associated with the input file rather than the output file); i.e. you may need outputTree->SetDirectory(outputFile) just after creating the file.
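To illustrate the ordering Philippe describes (file and tree names below are placeholders, not taken from the original post): the TTree constructor attaches the new tree to whatever gDirectory is at that moment, so creating the tree before the output TFile leaves its baskets targeting the input file. A minimal sketch:

```cpp
#include "TFile.h"
#include "TTree.h"

void make_output() {
   // RECREATE makes outputFile the current gDirectory, so a TTree
   // created after this line is automatically owned by it.
   TFile* outputFile = new TFile("skim.root", "RECREATE");
   TTree* outputTree = new TTree("events", "preselected events");

   // If the tree was instead created earlier (while the input file was
   // gDirectory), attach it to the output file explicitly:
   outputTree->SetDirectory(outputFile);

   // ... fill the tree ...

   outputFile->Write();
   delete outputFile; // also deletes the tree it owns
}
```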


I check, before the loop over the events, that gDirectory->pwd() and outputTree->GetDirectory()->GetName() agree and correspond to the output file I am creating.

So, to be more precise, what I do is:

  1. create a chain:
    TChain* inputTree = new TChain(/* ... */);
  2. loop over the files, adding them to the chain;
  3. load the first tree:
    inputTree->LoadTree(0);
  4. create the output file:
    TFile* outputFile = new TFile(/* ... */, "RECREATE");
  5. create the output TTree:
    TTree* outputTree = new TTree(/* ... */);
  6. loop over the events in inputTree:
    inputTree->GetEntry(iEntry); outputTree->Fill();
  and at the end:
    delete outputFile;
    delete inputTree;
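The steps above can be sketched end to end as follows. The tree name, file names, and the selection are placeholders (the original post elides them), and CloneTree(0) is used here as the idiomatic way to get an output tree with the same branch layout; the ordering matters: the output TFile is created before the output tree, so the tree is owned by it.

```cpp
#include <string>
#include <vector>
#include "TChain.h"
#include "TFile.h"
#include "TTree.h"

void skim(const std::vector<std::string>& inputFiles) {
   // 1.-2. build the chain (tree name is a placeholder)
   TChain* inputTree = new TChain("events");
   for (const auto& f : inputFiles)
      inputTree->Add(f.c_str());

   // 3. load the header of the first tree
   inputTree->LoadTree(0);

   // 4. create the output file *before* the output tree
   TFile* outputFile = new TFile("skim.root", "RECREATE");

   // 5. output tree: same branch structure, zero entries, owned by outputFile
   TTree* outputTree = inputTree->CloneTree(0);

   // 6. event loop with preselection (the selection itself is hypothetical)
   const Long64_t nEntries = inputTree->GetEntries();
   for (Long64_t iEntry = 0; iEntry < nEntries; ++iEntry) {
      inputTree->GetEntry(iEntry);
      // if (passesPreselection(...))   // placeholder for the actual cuts
      outputTree->Fill();
   }

   outputTree->Write();
   delete outputFile; // flushes, closes, and deletes the owned tree
   delete inputTree;
}
```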

Fair enough. The other possibility would be that a single entry is very large (more than 2 GB), but this is unlikely since you are copying the data from an existing TTree.

Is the problem reproducible, or does it only happen sometimes? If it is reproducible, can you try with a different output disk (to exclude the case of the physical disk being full or bad)?

If it is reproducible but not related to the disk, can you send us a way to reproduce it?


It's reproducible, but then I assume I would have to send GBs of ROOT files… However, I could work around the issue by splitting the processing into smaller portions and then merging all the output files with hadd (a pleasant bonus: this is also faster).
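For reference, the merge step done here with the hadd command-line tool (e.g. `hadd merged.root part*.root`) can also be performed from C++ with ROOT's TFileMerger class; the file names below are placeholders:

```cpp
#include "TFileMerger.h"

void merge_parts() {
   TFileMerger merger;
   merger.OutputFile("merged.root"); // placeholder output name
   merger.AddFile("part1.root");     // placeholder input names
   merger.AddFile("part2.root");
   // Merge() combines trees/histograms from all inputs,
   // equivalent to: hadd merged.root part1.root part2.root
   merger.Merge();
}
```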

But I still don't have a clear understanding of what the actual problem was, and whether it was on the ROOT side or the cluster side.


I am glad this solves your problem :) This may also indicate that some of the objects put into the output file were not 'reset' correctly and were growing unexpectedly (i.e. because they contained data from multiple events/entries).


Dear Philippe,

do you know if there is a way to ensure the proper 'resetting' of those objects? Which objects are we talking about in this case: baskets?

Cheers, Olena

Hi Olena,

No, I am talking about user objects (collections in particular). [And, of course, this is just a guess based on the information we have so far; only being able to reproduce the problem would lead to a definite answer :)]
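As an illustration of the failure mode Philippe is guessing at (all names here are invented for the example): when a branch holds a user collection such as an std::vector, the vector must be cleared before each Fill(). If it is not, every entry accumulates the data of all previous events and the baskets grow without bound until WriteBuffer eventually fails.

```cpp
#include <vector>
#include "TFile.h"
#include "TTree.h"

void fill_example() {
   TFile f("out.root", "RECREATE");
   TTree t("events", "example tree");

   std::vector<float> pt;   // hypothetical per-event collection
   t.Branch("pt", &pt);

   for (int iEvent = 0; iEvent < 1000; ++iEvent) {
      pt.clear();           // the essential per-entry reset:
                            // without it, entry N contains N+1 values
      pt.push_back(iEvent * 0.5f);
      t.Fill();
   }

   t.Write();
}
```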


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.