Storing really big trees, Automatic File splitting?

Hello,

I encounter a problem when modifying entries in a tree that I load via a TChain. My data is quite big (10 GB), but I have to work on a FAT32 hard disk, so I’m forced to keep the files below 4 GB or so.
The initial tree was produced on a farm, so it is split into several files by construction. To modify an entry, I have to copy the tree while changing the values. For this I open a temp.root as a container for the new tree. That file will obviously become too large once the tree holds all my data.
So long description, short question:

Is there a way to ask ROOT to split a file into several files when it becomes too large, or do I have to check the size and split it myself?

Regards Promme

Hi,

see root.cern.ch/root/html/TTree#TTr … axTreeSize

Cheers, Axel.
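For illustration, here is a minimal sketch of what using TTree::SetMaxTreeSize looks like on the writing side. This assumes a ROOT installation; the file name temp.root, the tree name "T", and the branch "value" are invented for the example:

```cpp
// Sketch: automatic file splitting while filling a tree.
// Requires ROOT; names here (temp.root, "T", "value") are examples.
#include "TFile.h"
#include "TTree.h"

void write_split() {
   TFile *f = new TFile("temp.root", "RECREATE");
   TTree *tree = new TTree("T", "copy with modified entries");
   Float_t value;
   tree->Branch("value", &value, "value/F");

   // Static setting: switch to temp_1.root, temp_2.root, ... once the
   // current file grows past ~1.9 GB (safely below the FAT32 4 GB limit).
   TTree::SetMaxTreeSize(1900000000LL);

   for (Long64_t i = 0; i < 100000000; ++i) {
      value = i * 0.5f;
      tree->Fill();   // ROOT calls TTree::ChangeFile internally when needed
   }
   // After a file switch the tree lives in a new TFile, so always
   // fetch the current one before writing and closing.
   tree->GetCurrentFile()->Write();
   tree->GetCurrentFile()->Close();
}
```

The key point is the last two lines: once ChangeFile has fired, the original `f` pointer no longer refers to the file holding the tree, so `GetCurrentFile()` must be used instead.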

So it closes the file and opens a new one… hm. Does this mean that TFile knows how to handle multiple volumes? Can I simply write my tree without caring about the number of files it is spread over, and read the tree back afterwards without any file handling?

Hi,

Please read the relevant documentation at root.cern.ch/root/html/TTree.htm … ChangeFile

[quote]Does this mean that TFile knows how to handle multiple volumes?[/quote]TFile does not, but TTree properly handles the switch from one file to the next.

[quote]Can I simply write my tree without caring about the number of files[/quote]Yes.

[quote]and the tree can be read afterwards without any file handling?[/quote]Yes, but to access the files you will need to use a TChain to which you add (via a wildcard for example) all the produced files.

Cheers,
Philippe.
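A short sketch of the read-back side Philippe describes, again assuming ROOT and the same invented names (tree "T", files temp.root, temp_1.root, …, branch "value"):

```cpp
// Sketch: reading the split output back with a TChain.
// Requires ROOT; assumes the writer produced temp.root, temp_1.root, ...
#include "TChain.h"

void read_split() {
   TChain chain("T");          // same tree name as when writing
   chain.Add("temp*.root");    // wildcard picks up every produced file
   Float_t value;
   chain.SetBranchAddress("value", &value);
   for (Long64_t i = 0; i < chain.GetEntries(); ++i) {
      chain.GetEntry(i);       // TChain handles the file switches itself
   }
}
```

From the user's point of view the chain behaves like one big tree; the per-file bookkeeping is entirely internal.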

perfect. Thanx!

Most probably I will be faster trying it out, but what are the units of SetMaxTreeSize()? I tried 1000, assuming bytes, for ~1 MB -> files are 74 KB big. 1000*8, assuming bits -> still 74 KB big. Let’s see whether the reply is faster than my retry :wink:.

Hi,

The unit is bytes. There are 1,000,000 bytes in 1 MB, so you should pass 1000000 for ~1 MB.

Cheers,
Philippe.
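In code, the arithmetic above works out to (the 1.9 GB value is an example chosen to stay under the FAT32 limit mentioned earlier, not something from this thread):

```cpp
// SetMaxTreeSize takes bytes (Long64_t).
TTree::SetMaxTreeSize(1000000LL);       // ~1 MB
// TTree::SetMaxTreeSize(1900000000LL); // ~1.9 GB, safely below FAT32's 4 GB cap
```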

Damn, I’m stupid. Thanks!