However, I get an error after the 220th file or so. Here is the actual message:
chaining run with source data: ana_200.root
chaining run with source data: ana_220.root
SysError in <TFile::TFile>: file ana_221.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_222.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_223.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_224.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_225.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_226.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_227.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_228.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_229.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_230.root can not be opened for reading (Too many open files)
SysError in <TFile::TFile>: file ana_231.root can not be opened for reading (Too many open files)
etc...
All I need is to have ~450 files open at the same time. Is there a switch I could turn on/off to achieve the desired effect? Here is the output of:
sysctl -a | grep files
kern.exec: unknown type returned
kern.maxfiles = 65536
kern.maxfilesperproc = 30720
Many thanks in advance
erdos
OSX 10.4.11
Root 5.18 (tried on 5.17.x, 5.16.x)
macosx64
I should note I am using the hadd binary to add files together, here is the LSF output I get:
...
Source file 992: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11820.root
Source file 993: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11821.root
Source file 994: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11822.root
Source file 995: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11823.root
Source file 996: /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11824.root
SysError in <TFile::TFile>: file /nas02/depts/physics/enpa/data/malbek/2011/08/Data/tier3/tier3_proc_malbek_filt_run11824.root can not be opened for reading (Too many open files)
This doesn’t make much sense to me, since it got all the way to file 996, but when I check my limit, it is 99:
[quote]I am running on a cluster and I don’t have permissions to change the limit, is there an alternative solution?[/quote]If the system administrator has set a hard limit on the number of open files, there is nothing that can be done about it (short of asking the admin to increase the limit), and the code will need to be updated to require fewer open files at once. For example, the code could be changed to open each file only when needed and to close files as soon as you get close to the limit.
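The open-on-demand pattern suggested above can be sketched in plain C++ (the function name and file handling here are illustrative, not ROOT API; with ROOT you would use TFile::Open, process, and Close in the same shape):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Illustrative sketch: process many inputs while keeping at most one
// file open at a time, instead of opening them all up front. This is
// how to stay under a per-process open-file limit.
int process_files_one_at_a_time(const std::vector<std::string>& names) {
    int processed = 0;
    for (const auto& name : names) {
        std::FILE* f = std::fopen(name.c_str(), "rb"); // open only when needed
        if (!f) continue;                              // skip unreadable files
        // ... read/merge the contents of f here ...
        std::fclose(f);                                // close before the next open
        ++processed;
    }
    return processed;
}
```

Because only one descriptor is live at any moment, the loop can walk an arbitrarily long file list without ever hitting "Too many open files".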
[quote]This doesn’t make much sense to me, since it got all the way to file 996, but when I check my limit, it is 99:[/quote]There are two limits (see ulimit -S and ulimit -H): a soft limit and a hard limit. The soft limit can be increased up to the hard limit.
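As a sketch of how the two limits interact, the POSIX getrlimit/setrlimit calls expose the same soft/hard pair that ulimit reports. This helper (the name is hypothetical) raises the soft limit on open file descriptors, clamping the request at the hard limit, which is exactly what an unprivileged process is allowed to do:

```cpp
#include <sys/resource.h>

// Sketch: raise the soft limit on open file descriptors (RLIMIT_NOFILE)
// toward the hard limit. Returns the new soft limit, or 0 on failure.
// An unprivileged process may move the soft limit anywhere up to, but
// never above, the hard limit.
rlim_t raise_soft_fd_limit(rlim_t wanted) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) return 0;
    if (wanted > rl.rlim_max) wanted = rl.rlim_max; // clamp at the hard limit
    rl.rlim_cur = wanted;                           // rlim_cur is the soft limit
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) return 0;
    return rl.rlim_cur;
}
```

Calling something like raise_soft_fd_limit(1024) early in a program is often enough when the hard limit is generous but the default soft limit is small (as with the "99" above).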