I have found that when I use a very large number of events to fill the histogram, my program sometimes crashes and the ROOT file it produces is not completely saved. If I reduce the number of events to a small one, the ROOT file is produced without any problem.
I guess the reason is that the number of entries, either per bin or in the whole histogram, has an upper limit.
Does anyone know the upper limit on the number of entries in a single bin and in the whole histogram, e.g. for a TH2D?
You don’t say how your program crashes, but if I were you I would start by searching for “memory leaks” in the procedure that fills the histograms.
Try to run your code using valgrind (and carefully study the messages that appear at the beginning of the output):
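For example, something like this (a sketch only; “yourProgram” is a placeholder for your executable, and the suppressions file is the one that ships with ROOT):

[bash]$ valgrind --leak-check=full --suppressions=$ROOTSYS/etc/valgrind-root.supp ./yourProgram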
The number of entries in each channel does not matter; the channels store the sum of the entries. So Wile is right: it is the precision of each bin that matters. If your bin contents (the sums of the entries) exceed the precision of a float or a double (depending on whether you use TH1F or TH1D), you may have problems.
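To illustrate (a minimal sketch, not from the original post): fill a single bin many times and compare TH1F, which stores bin contents as 32-bit floats, with TH1D, which uses 64-bit doubles. The TH1F bin stops growing once its content reaches the integer precision of a float (2^24 = 16777216):

#include <cstdio>
#include "TH1F.h"
#include "TH1D.h"

int main() {
   TH1F hf("hf", "float bins", 1, 0., 1.);   // bin contents stored as floats
   TH1D hd("hd", "double bins", 1, 0., 1.);  // bin contents stored as doubles
   for (long i = 0; i < 20000000; ++i) {     // 2e7 entries into the same bin
      hf.Fill(0.5);
      hd.Fill(0.5);
   }
   printf("TH1F bin content: %.0f\n", hf.GetBinContent(1));  // saturates at 16777216
   printf("TH1D bin content: %.0f\n", hd.GetBinContent(1));  // 20000000
   return 0;
}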
In general, the operating system imposes some “user limits”. Try:
[bash]$ ulimit -S -a
[bash]$ ulimit -H -a
[tcsh]$ limit
[tcsh]$ limit -h
If any application, while running, exceeds any of these limits, it will be killed.
I can imagine that your program eats up more and more “resources” while running. For example, if there is a “memory leak” somewhere in the procedure that fills your histograms, then the RAM usage will increase with every new event … and eventually it may even exceed the total amount of RAM on your machine.
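As a purely hypothetical illustration (the function and variable names here are made up, not taken from your code), this is the kind of pattern that makes memory grow with every event: an object allocated on the heap inside the event loop and never freed.

#include "TH2D.h"
#include "TLorentzVector.h"

void fill_loop(TH2D *h, long nevents) {
   for (long i = 0; i < nevents; ++i) {
      // leak: a new heap object is created for every event and never deleted
      TLorentzVector *p = new TLorentzVector(1., 0., 0., 1.);
      h->Fill(p->Px(), p->E());
      // fix: add "delete p;" here, or simply create the TLorentzVector on the stack
   }
}

valgrind should report such allocations as “definitely lost” blocks.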
Try running “top” in another terminal window (observe the “VIRT” / “RES” / “SHR” / “%MEM” fields) in order to see how many resources your program actually uses while running.