Hello,
I’ve got eight data sets, each consisting of many files containing the same kind of tree. I would like to analyse them as one TChain, on which I call the [color=#800000]TChain::Process[/color] method. The problem is that each of the eight data sets has to contribute to the final chain with a different weight. I tried the following:
- create 8 chains, one for each data set, and fill them with [color=#800000]TChain::Add("path to files with wildcard")[/color];
- for each of them execute [color=#800000]chain[i]->SetWeight(wi,"global")[/color];
- create a final chain, chainMix;
- add all chains one by one with [color=#800000]chainMix->Add(chain[i])[/color].
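To make the steps above concrete, here is a minimal sketch of what I did. The tree name "myTree", the paths, and the weight values are all placeholders:

```cpp
#include "TChain.h"

// Sketch of the setup described above; the tree name "myTree",
// the paths, and the weights are all placeholders.
void mixChains()
{
   const int nSets = 8;
   const double w[nSets] = {1.0, 0.5, 0.2, 1.3, 0.8, 2.0, 0.7, 1.1}; // hypothetical weights

   TChain *chain[nSets];
   for (int i = 0; i < nSets; ++i) {
      chain[i] = new TChain("myTree");
      chain[i]->Add(Form("dataset%d/*.root", i)); // placeholder path with wildcard
      chain[i]->SetWeight(w[i], "global");        // weight the whole chain
   }

   TChain *chainMix = new TChain("myTree");
   for (int i = 0; i < nSets; ++i)
      chainMix->Add(chain[i]); // this is where the weights get lost
}
```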
Now I have two problems with the [color=#800000]TChain::Add(TChain*)[/color] method:
- What it actually does is the same as if I had used [color=#800000]chainMix->Add("path to i-th data set files")[/color]: it takes the files assigned to the argument chain and assigns them to chainMix. By doing so, I lose the weights I set on chain[i].
- It forces the execution of [color=#800000]GetEntries()[/color] on the argument chain. In my case, counting all the entries in the data sets takes more than an hour. I already know the numbers and would like to supply them explicitly where needed, without having to recompute them.
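For the entry-counting part alone, the file-based overload [color=#800000]TChain::Add(const char *name, Long64_t nentries)[/color] accepts an explicit entry count, so the files are not opened just to count entries. A hedged sketch (paths and counts are placeholders; as far as I understand, the count applies per file, so a wildcard would not work here):

```cpp
#include "TChain.h"

// If nentries > 0 is passed to TChain::Add, ROOT trusts that number and
// does not open the file to count entries. Paths and counts below are
// placeholders for illustration only.
void addWithKnownEntries(TChain *chainMix)
{
   chainMix->Add("dataset0/file1.root", 1234567);
   chainMix->Add("dataset0/file2.root", 2345678);
}
```

This does not solve the weight problem, but it avoids the hour-long scan when the entry numbers are already known.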
Is there any known workaround for these problems? Can you suggest a solution?
Cheers,
rafson
PS
I worked around this problem by scaling the output histograms instead of weighting the input chains. This solution also has some further advantages for me, which are not really relevant here. Still, I think a way to add chains with weights might be useful to me or someone else here in the future.
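For completeness, the workaround I mention amounts to something like this. The histograms are assumed to have been filled beforehand (e.g. by running [color=#800000]TChain::Process[/color] on each chain separately); all names are hypothetical:

```cpp
#include "TH1D.h"

// Combine per-data-set histograms with weights, instead of weighting the
// input chains. hist[i] is assumed to have been filled from chain[i]
// beforehand; w[i] is the weight of the i-th data set.
TH1D *weightedSum(TH1D *hist[], const double w[], int nSets)
{
   TH1D *hSum = (TH1D *)hist[0]->Clone("hSum");
   hSum->Scale(w[0]);
   for (int i = 1; i < nSets; ++i)
      hSum->Add(hist[i], w[i]); // TH1::Add takes a scale factor for the added histogram
   return hSum;
}
```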