File open limit?

Hello all,

I am running into some kind of limit on the number of files that can be open via TFile at a time. Here is a small example of what I mean:

  1. Create a lot of ROOT files:
#!/usr/bin/perl

$file="length_cut_1.root";

for $i(0..300) {
system "cp $file ana_$i.root";
}
  2. Try to open them all at the same time:
void testChain()
{
   const int NFILE = 300;
   TFile **farray = new TFile*[NFILE];

   char itemp[100];
   for (int i = 0; i < NFILE; i++) {
      snprintf(itemp, sizeof(itemp), "ana_%i.root", i);
      if (i % 20 == 0) cout << "  opening file: " << itemp << endl;
      farray[i] = new TFile(itemp);   // each TFile keeps a descriptor open
   }
}
  3. However, I get an error after the 220th file or so. Here is the actual message:
  chaining run with source data: ana_200.root
  chaining run with source data: ana_220.root
SysError in <TFile::TFile>: file ana_221.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_222.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_223.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_224.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_225.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_226.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_227.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_228.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_229.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_230.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_231.root can not be opened for reading (Too many open files)

etc... 
  4. All I need is to have ~450 files open at the same time. Is there a switch I could turn on/off to achieve the desired effect? Here is the output of:
sysctl -a | grep files
kern.exec: unknown type returned
kern.maxfiles = 65536
kern.maxfilesperproc = 30720

Many thanks in advance
erdos

OSX 10.4.11
Root 5.18 (tried on 5.17.x, 5.16.x)
macosx64

Hi,

try this:

ulimit -n 2048

see “man bash”

I’ve in my .bashrc on MacOS X:

if [ "$(uname -s)" = "Darwin" ]; then
   # set to Linux limits
   ulimit -d unlimited
   ulimit -n 2048
fi

Cheers, Fons.

Hi Fons,

I am running on a cluster and I don’t have permissions to change the limit, is there an alternative solution?

$ ulimit -n 2048
-ksh: ulimit: 2048: limit exceeded [Operation not permitted]

I should note I am using the hadd binary to add files together, here is the LSF output I get:

...
Source file 992: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11820.root
Source file 993: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11821.root
Source file 994: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11822.root
Source file 995: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11823.root
Source file 996: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11824.root
SysError in <TFile::TFile>: file /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11824.root can not be opened for reading (Too many open files)

This doesn’t make much sense to me, since it got all the way to file 996, but when I check my limit, it is 99:

$ ulimit -n
99

Thanks!

  • Padraic

[quote]I am running on a cluster and I don’t have permissions to change the limit, is there an alternative solution?[/quote]If the system administrator has set a hard limit on the number of open files, there is nothing that can be done about it (short of asking the admin to increase the limit), and the code will need to be updated to require fewer open files at once. For example, the code could be changed to open each file only when it is needed and to close files as soon as you get close to the limit.
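The open-on-demand pattern can be sketched as follows. To keep the sketch self-contained it uses std::fopen/std::fclose instead of TFile, and the helper name processFilesOneAtATime is made up for illustration; in a ROOT macro you would open the TFile, read what you need, and delete it before moving to the next file:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Process many files while keeping only ONE descriptor open at a time.
// Each file is opened on demand and closed before the next open, so the
// per-process file limit is never approached, no matter how many files
// are in the list. (With ROOT: TFile *f = TFile::Open(...); ...; delete f;)
long processFilesOneAtATime(const std::vector<std::string> &names)
{
    long totalBytes = 0;
    for (const std::string &name : names) {
        std::FILE *f = std::fopen(name.c_str(), "rb");  // open on demand
        if (!f) continue;                               // skip unreadable files
        std::fseek(f, 0, SEEK_END);
        totalBytes += std::ftell(f);                    // stand-in "processing": sum sizes
        std::fclose(f);                                 // close before the next open
    }
    return totalBytes;
}
```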

[quote]This doesn’t make much sense to me, since it got all the way to file 996, but when I check my limit, it is 99:[/quote]There are two limits (see ulimit -S and ulimit -H): a soft limit and a hard limit. The soft limit can be increased up to the hard limit.
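If the hard limit really is higher than the soft limit, a process can also raise its own soft limit up to the hard limit at startup with the POSIX getrlimit/setrlimit calls, without touching the shell configuration. This is just a sketch (raiseOpenFileLimit is a made-up helper, not ROOT API):

```cpp
#include <sys/resource.h>
#include <limits.h>

// Raise this process's soft open-file limit (RLIMIT_NOFILE) up to its
// hard limit. Returns the new soft limit, or -1 on failure.
long raiseOpenFileLimit()
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;                  // soft limit -> hard limit
#ifdef __APPLE__
    // macOS quirk: rlim_max can be RLIM_INFINITY, but setrlimit() rejects
    // an infinite RLIMIT_NOFILE value, so cap it at OPEN_MAX instead.
    if (rl.rlim_cur == RLIM_INFINITY)
        rl.rlim_cur = OPEN_MAX;
#endif
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    return (long)rl.rlim_cur;
}
```

Note this can only raise the soft limit; only the administrator (root) can raise the hard limit itself.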

Cheers,
Philippe.

I ran into this same problem when trying to open and read many ROOT files in a loop; it happened after reading ~250 files.


Running ulimit -n 2048 under macOS solved my problem. I’m using

ROOT Version: 6.22/06
Built for macosx64 on Nov 27 2020, 15:14:08