I have written a fairly complicated program for systematic uncertainty propagation. It opens 32 files and reads a large number of histograms. The program runs as hoped, but before the destructors are called it seems to get stuck in the automatic gROOT.CloseFiles() call(s). The active part of the program takes 4 seconds to run, while the automatic gROOT calls take 1 minute and 3 seconds; only after that are the destructors of the objects in the active program called. Since all of the files are already closed in the active part of the program, is there any way to disable the automatic cleanup in gROOT? (The long pause is present even if the active part of the program crashes. The Python profiler with its default options does not list what is running during the 1 minute and 3 seconds; however, if the process is killed with kill -9, it lists gROOT.CloseFiles().)
ROOT version: 5.32
OS: OSX 10.6.8
Thanks and best regards,
I encountered similar-sounding hangs in one of my programs. The solution was to manually Close() the TFiles. I kept a Python dictionary or list of all my TFile handles, so it was quite easy to do:
for f in myfiles:
    f.Close()
My original post about it is here: "Python doesn't exit after main function"
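To make the bookkeeping above concrete, here is a minimal sketch: keep every handle you open in one container and close them all explicitly before the interpreter shuts down, so gROOT.CloseFiles() finds nothing left to do. Plain Python file objects stand in for ROOT.TFile here (my assumption, to keep the sketch self-contained); with PyROOT you would store the result of ROOT.TFile.Open(path) and call f.Close() instead.

```python
# Sketch: track every file handle in one list, then close them eagerly.
# Plain Python files stand in for ROOT.TFile; the pattern is the same.
import os
import tempfile

workdir = tempfile.mkdtemp()
myfiles = []
for i in range(3):  # the original report opened 32 files
    path = os.path.join(workdir, "hists_%d.root" % i)
    open(path, "w").close()        # create a dummy file so we can reopen it
    myfiles.append(open(path, "r"))  # keep every handle we open

# ... read histograms from the files here ...

for f in myfiles:  # explicit, eager close of every tracked handle
    f.close()      # for a ROOT.TFile this would be f.Close()
```

Closing eagerly like this means the teardown work happens inside the timed "active" part of the program rather than in ROOT's atexit cleanup.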
Thanks for the link to your previous report. I tried closing the files in a function rather than in the destructor, but this has the same effect: ROOT eats up the CPU for over a minute until the program finally finishes. wlav seems to suggest this is normal when a lot of files and associated pointers are involved.
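One related workaround worth trying (my reading of the "files and associated pointers" remark, not something stated in this thread): detach every histogram from its file with TH1::SetDirectory(0) before closing, so the file no longer owns the histograms when Close() runs and there is less for ROOT's recursive cleanup to walk. SetDirectory(0) and Close() are the real ROOT calls; detach_and_close and the stub classes below are hypothetical names so the sketch runs without ROOT installed.

```python
# Hedged sketch: decouple histograms from their TFile before closing it.
# SetDirectory(0) is the standard TH1 call that transfers ownership of a
# histogram from the file to the caller; detach_and_close is a
# hypothetical helper, not a ROOT API.

def detach_and_close(tfile, histograms):
    """Detach every histogram from `tfile`, then close it explicitly."""
    for h in histograms:
        h.SetDirectory(0)  # histogram now survives the file's Close()
    tfile.Close()          # the file no longer owns the histograms

# Minimal stand-ins so this runs without ROOT; with PyROOT, tfile would
# be ROOT.TFile.Open(path) and histograms the TH1s read via tfile.Get().
class _StubHist(object):
    def __init__(self):
        self.directory = "attached"
    def SetDirectory(self, d):
        self.directory = d

class _StubFile(object):
    def __init__(self):
        self.closed = False
    def Close(self):
        self.closed = True

f = _StubFile()
hists = [_StubHist() for _ in range(4)]
detach_and_close(f, hists)
```

Detaching also means the histograms stay valid Python objects after the file is closed, which is usually what you want when reading many histograms for later processing.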
I have the same problem and always do the same; see: "Long processing time for TFile::Close() function"
Expected, yes; normal, no. The problem is that RecursiveRemove is a kludge solution from a different era. Who knows, maybe once we're all on C++11, a real garbage collector can be used instead. For interactive use, that would definitely be the way to go.