I/O optimization

Hi,

To run my analysis over several files, each containing a TTree, I used to work with a TChain:

std::vector<TString> fileList;
// ...
// ...

TChain *tc = new TChain("tree");

for (size_t f = 0; f < fileList.size(); f++) {
   tc->Add(fileList.at(f));
}

myClass *l = new myClass(tc);

l->Loop();

I noticed that when the number of files is large (~20000) and the number of events per file is small (~300), it is more efficient to loop over the files individually instead of creating a TChain:

for (size_t i = 0; i < fileList.size(); i++) {

   // note: the loop index and the TFile must not share the same name
   TFile *file = TFile::Open(fileList.at(i));
   TTree *tree = (TTree *)file->Get("tree");

   myClass *l = new myClass(tree);

   l->Loop();

   delete l;
   file->Close();   // Close(), not close()
   delete file;
}

myClass is generated with TTree::MakeClass(), and Loop() is dominated by calls to GetEntry().
This second solution works well for me, but I was wondering why the TChain is so much slower and whether there is a way to optimize it.
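
For what it is worth, this is the kind of tuning I had in mind but have not tested; the 100 MB cache size and the branch name "pt" below are placeholders for illustration, not my actual setup:

tc->SetCacheSize(100 * 1024 * 1024);   // enable a 100 MB TTreeCache on the chain
tc->SetBranchStatus("*", 0);           // disable all branches...
tc->SetBranchStatus("pt", 1);          // ...then re-enable only what Loop() reads
tc->AddBranchToCache("pt", kTRUE);     // cache just the branches actually used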
I am sure the recommended solution is to merge the 20k files into fewer, larger files, but that is not always possible
(1 TB of data is still not easy to handle on a local machine).
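
If merging ever became an option, I suppose it could be done with hadd on the command line (hadd merged.root file1.root file2.root ...) or with TFileMerger from a macro; a rough, untested sketch, where "merged.root" is just a placeholder output name:

#include "TFileMerger.h"

TFileMerger merger;
merger.OutputFile("merged.root");              // placeholder output file
for (size_t i = 0; i < fileList.size(); i++) {
   merger.AddFile(fileList.at(i));             // queue each input file
}
merger.Merge();                                // write the merged tree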

cheers,
delo