Hi Giordano,
I’ve run your code through the debugger, and I see what the problem is.
If you define an input p.d.f. with three observables, the resulting output
p.d.f. is also three-dimensional and is cached in a three-dimensional
histogram. This is where you run into trouble: 10000 x 100 x 100 bins
= 100 million bins, so that’s a lot of memory! I’m running on a dual quad-core
machine with 16 GB of real memory, and I see memory consumption go up to about
7.8 GB, which will cause a crash (with std::bad_alloc) on most machines.
Now concerning the solution: I need to think a bit. I could easily provide an option
in RooFFTConvPdf to cache fewer of the non-convolved dimensions (a similar mechanism
to make this optional already exists for RooLinearMorph, which also inherits
from RooAbsCachedReal). That will certainly reduce the memory footprint, but it
comes at the expense of CPU time: if I only cache the convolved dimension,
the cache validity becomes dependent on the values of the other two observables,
which will likely change with every event. If, however, the total number of events
to be generated is small compared to the number of bins in the other dimensions,
it should still be more efficient overall.
I will add this flag to RooFFTConvPdf shortly (and will send you a private copy to try out).
Meanwhile, you might see what happens if you reduce the binning granularity in all
three dimensions (e.g. 100 x 100 x 100); that will likely make it work out of the box
(it works on my machine, but I did not let the entire study run to completion).
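In case it helps, the cache granularity is steered from the observables themselves. A sketch only, not code from your setup: the names t, x and y are placeholders for your convolved observable and the two spectators, and I am assuming the usual RooRealVar::setBins() calls:

```cpp
// Sketch: t is the convolved observable, x and y the spectators.
// RooFFTConvPdf samples t on its "cache" binning; the spectator
// dimensions are cached on their default binning.
t.setBins(100, "cache");  // instead of the current 10000
x.setBins(100);
y.setBins(100);
// -> 100 x 100 x 100 = 1 million cache bins instead of 100 million
```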
I really should also add a warning to RooDataHist whenever a histogram with
more than 10M bins is allocated…
Wouter