Next event size in a TTree

Dear Rooters,
I’m facing a memory problem due to some really big events coming out of our simulation.
After a rather large production we realized that some of these simulated events require ~1 GB of RAM just to be read from the file. This leaves little memory for the analysis program itself, and when one of these events is read the job is regularly killed.
It turns out, from the event size returned by TTree::GetEntry(), that only half of the memory is taken by the event itself.
I suppose the rest is used by the reading machinery (baskets? buffers?). I’ve also seen that the memory is released when the tree is deleted (and the file is closed?).
It is too late now to reorganize the branches to reduce the memory usage, so I was wondering whether there is a way to know
in advance the size of the next event to be read, so as to decide whether to skip it or not.
Alternatively, is there a way to force the tree to drop some memory after the event has been read? I tried TTree::DropBaskets(), but it crashes during the next call to TTree::GetEntry().
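To make this concrete, the kind of reading loop I have in mind is roughly the following (a sketch only; "myfile.root" and "etree" are placeholder names, and SetMaxVirtualSize is a knob I have not yet tried):

```cpp
// Sketch: two ways to keep basket memory bounded while reading.
// "myfile.root" and "etree" are placeholder names.
#include "TFile.h"
#include "TTree.h"
#include "TBranch.h"
#include "TList.h"

void readWithBoundedMemory() {
   TFile f("myfile.root");
   TTree *t = (TTree*)f.Get("etree");

   // Option 1: cap the memory the tree may keep alive in baskets;
   // beyond this limit ROOT frees the oldest baskets.
   t->SetMaxVirtualSize(100 * 1024 * 1024); // ~100 MB

   for (Long64_t i = 0; i < t->GetEntries(); ++i) {
      t->GetEntry(i);
      // ... analysis ...

      // Option 2: drop baskets branch by branch after each entry,
      // instead of calling DropBaskets() on the whole tree.
      TIter next(t->GetListOfBranches());
      while (TBranch *b = (TBranch*)next())
         b->DropBaskets();
   }
}
```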
Thanks in advance for any suggestion,

It looks like you are creating a memory-resident Tree !!!
When creating the Tree, do
TFile f("…
TTree T(…

and not

TTree T(…
TFile f(…
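Spelled out, the correct order when writing looks like this (a minimal sketch; the file name, tree name, and branch are placeholders):

```cpp
// Correct order: the file is opened first, so the tree is owned by the
// file's directory and baskets are flushed to disk as they fill,
// instead of the whole tree accumulating in memory.
#include "TFile.h"
#include "TTree.h"

void writeTree() {
   TFile f("out.root", "RECREATE"); // 1) file first
   TTree t("etree", "events");      // 2) tree second
   Double_t x = 0;
   t.Branch("x", &x, "x/D");
   for (Int_t i = 0; i < 1000; ++i) { x = i; t.Fill(); }
   t.Write();
}
```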


Sorry for the double post. I had trouble sending to the mailing list a few weeks ago.

That’s what I do.
TFile* f = new TFile(…);
TTree* t = new TTree(…);

Is there a way to check whether the tree saved into the rootfile is memory resident or not?

The directory associated with the tree/branches seems to be fine:
root [3] etree->GetDirectory()->GetName()
(const char* 0xa09877c)
root [4] etree->FindBranch("header")->GetDirectory()->GetName()
(const char* 0xa09877c)
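For reference, the check I would use programmatically is something like this (a sketch; my assumption is that GetCurrentFile() distinguishes a file-backed tree from a memory-resident one):

```cpp
// Sketch: a file-backed tree reports the TFile it reads from,
// while a purely memory-resident tree has no current file.
#include "TFile.h"
#include "TTree.h"

Bool_t isDiskResident(TTree *t) {
   return t->GetCurrentFile() != 0;
}
```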


Send the output of t->Print() as an attachment


Hi Rene,
Please find attached the TTree::Print() dump.

I’ve also made the following test using top:

root [] TFile* f = new TFile("/euso/production/feb2006/Clear06_4X/Clear06_4X_R000_S1000/Clear06_4X_R000_S1000.root");
root [] EEvent* e = new EEvent;
root [] TTree* etree = (TTree*)f->Get("…");
root [] etree->GetListOfBranches()->ls();
OBJ: TBranchElement header header : 0 at: 0xa46b030
OBJ: TBranchElement truth truth : 0 at: 0xa4e79e0
OBJ: TBranchElement geometry geometry : 0 at: 0xa5ece18
OBJ: TBranchElement shower shower : 0 at: 0xa698058
OBJ: TBranchElement detector detector : 0 at: 0xa8371c8
root [] e->SetBranches(etree); // EEvent has several containers inside and performs the correct SetBranchAddress calls.
mem: 5639 thea 16 0 61160 59M 21240 S 0.0 2.9 0:03 1 root.exe

root [] etree->GetEntry(151); // one of the big entries
mem: 5639 thea 24 0 805M 805M 21268 S 0.0 39.9 0:12 1 root.exe

root [] delete f;
mem: 5639 thea 15 0 529M 529M 21280 S 1.2 26.2 0:12 1 root.exe

root [] delete e;
mem: 5639 thea 17 0 462M 462M 21292 S 0.0 22.9 0:13 1 root.exe

If I recreate the event and loop over other, smaller entries, the memory doesn’t shrink any further.

I’d like to add that there are no evident memory leaks. The framework that
reads the events has been tested on a large number of smaller events and the
memory seems to be reasonably under control. Also, checking with MemStat
turned on and with gObjectTable, the deletion of TObjects is done properly.

I’m really running out of ideas about how this memory management works.
etreePrint.txt (65.9 KB)

Oops, I forgot to include the top fields list. Sorry.

mem: 5639 thea  24   0   805M  805M 21268  S   0.0  39.9   0:12  1   root.exe 


Your Tree structure should not use more than 30 Mbytes for one event.
It seems that somewhere you have a leak in your classes.
Could you measure the memory used after each of the following steps:

  • start interactive ROOT
  • TFile f("myfile.root");
  • TTree *T = (TTree*)f.Get("Mytreename");
  • T->GetEntry(1); etc.
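One way to take those measurements from within the same ROOT session is a small helper like this (a sketch; on Linux the ProcInfo_t memory fields are reported in KB):

```cpp
// Sketch: print the resident/virtual memory of the current process;
// call it after each step (file open, Get, GetEntry, ...).
#include "TSystem.h"
#include <cstdio>

void printMem(const char *label) {
   ProcInfo_t info;
   gSystem->GetProcInfo(&info);
   printf("%s: resident %ld KB, virtual %ld KB\n",
          label, (long)info.fMemResident, (long)info.fMemVirtual);
}
```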


I have attached 4 files. The first, memStat.dump.gz,
contains the printout of the memory for 500 events, including several big ones. The RSS memory occupation was dumped by running the ‘ps’ command on root.exe during its execution.
The commands and the number of bytes returned by GetEntry are also reported.

The 3 graphs in the tgz file are

  1. cMemStat: the plain event loop.

Then I did a couple of further tests:

  2. cMemStat_FC_10: event loop with the TClonesArrays forced to shrink to a default size every 10 events read from the file.
  3. cMemStat_FC_1: event loop with the TClonesArrays forced to shrink to a default size after every event read from the file.
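By “forced to shrink” I mean roughly the following (a sketch; the default capacity is a placeholder value):

```cpp
// Sketch: release the memory a TClonesArray grew to after a huge
// event, bringing it back to a default capacity.
#include "TClonesArray.h"

void shrinkToDefault(TClonesArray *arr, Int_t defaultSize = 100) {
   arr->Delete();            // destroy the contained objects
   arr->Expand(defaultSize); // shrink the slot table itself
}
```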

In the last run there is a baseline of ~100 Mbytes, which is very reasonable. What puzzles me most is that the increase in memory is twice the number reported by GetEntry…

cMemStat.tar.gz (6.88 KB)
memStat500.dump.gz (8.69 KB)