I have found that if I use a very large number of events to fill the histogram, my program sometimes crashes and the resulting ROOT file is not completely saved. If I decrease the number of events to a small one, the ROOT file is produced without problems.
I guess the reason is that the number of entries either in the bin or in the histogram has an upper limit.
Does anyone know the upper limit on the number of entries in one bin and in the whole histogram, e.g. for a TH2D?
Thanks a lot.
TH2F bins are “Float / Single precision floating-point numbers” (4 Bytes = 32 bits)
TH2D bins are “Double precision floating-point numbers” (8 Bytes = 64 bits)
See also: Wikipedia - Floating point
You don’t say how your program crashes, but … If I were you I would start to search for “memory leaks” in the procedure that fills histograms.
Try to run your code using valgrind (and carefully study messages that appear in the beginning of the output):
valgrind --tool=memcheck --leak-check=full [--show-reachable=yes] [--num-callers=50] [--track-origins=yes] [--db-attach=yes] --suppressions=`root-config --etcdir`/valgrind-root.supp `root-config --bindir`/root.exe -l -q 'YourMacro.cxx[++][(Any, Parameters, You, Need)]'
valgrind --tool=memcheck --leak-check=full [--show-reachable=yes] [--num-callers=50] [--track-origins=yes] [--db-attach=yes] --suppressions=`root-config --etcdir`/valgrind-root.supp YourExecutable [Any Options You Need]
(Note: the "--show-reachable=yes" option will give you too many warnings, I believe.)
Thanks very much!
But I am asking about the limit on the number of entries in each channel (bin), not the precision of the value in each bin.
Yes, from the User's Guide I know that a TH1D or TH2D stores one double per channel, with a maximum precision of about 14 significant digits.
I use Geant4 with ROOT, and my program stops quietly without any errors or memory-leak warnings.
I will try your method to detect the errors.
The number of entries in each channel does not matter; the channels store the sum of the entries. So Wile is right: it is the precision of each bin that matters. If your bin contents (the sum of the entries) exceed the precision of a float or a double (depending on whether you use TH1F or TH1D), you may have problems.
In general, the operating system imposes some “user limits”. Try:
[bash]$ ulimit -S -a
[bash]$ ulimit -H -a
[tcsh]$ limit -h
If any application, while running, exceeds any of these limits, it will be killed.
I can imagine that your program eats more and more “resources” while running. For example, if there’s a “memory leak” somewhere in the procedure that fills your histograms, then the RAM usage will increase with every new event … and finally it may even exceed the total RAM amount of your machine.
Try running "top" in another terminal window (observe the "VIRT" / "RES" / "SHR" / "%MEM" fields) to see how many resources your program is actually using while it runs.
It was indeed a memory problem.
After I switched to a node with more memory (1 TB), my program ran without any problems.