TF2 fast integral

I would like a faster method of integrating a TF2, analogous to FastIntegral for a TF1. As far as I can see, FastIntegral does not work for TF2.

So I decided to use TF12: for each x, integrate the corresponding y slice of my TF2 with FastIntegral, then use FastIntegral on the result to integrate in x.

It works, but much more slowly than the normal integration. I haven't investigated how FastIntegral really works, but I would appreciate any information on whether this method could be made faster, or whether there is any other way to speed up a TF2 integral…

I attach three scripts: par.cpp uses the normal Integral(), par1.cpp uses slices with TF12 but still integrates with Integral(), and par2.cpp uses slices and FastIntegral().
par2.cpp (1.01 KB)
par1.cpp (763 Bytes)
par.cpp (492 Bytes)


The adaptive integration method used in TF2::Integral is quite efficient; I think it will be tough to implement something faster for a function in two dimensions.
If you want it to be faster, first of all you should implement your function in the most efficient way.
I see that you use formula expressions, which are much slower than compiled code, and in your other scripts you allocate TF1 objects inside the function evaluation method. This is really inefficient!


Those scripts were just a quick example. My real code, I hope, is better, and the integral is more complicated (I'm integrating to obtain Fresnel diffraction patterns). And of course in my code the integration takes much longer…

You mentioned that TF2 uses an adaptive integration method. I saw an example of adaptive integration in 1D in ROOT::Math. Is it the same thing?

The 1D code in ROOT::Math is based on GSL. It is a good and efficient algorithm, but it works only in 1D. For 2D we have Monte Carlo integration (which is much less efficient) or the adaptive multi-dimensional integration that is used in TF2.
The only other solution, if you can, is to use some analytical integration.
I think going through slices will in any case be much less efficient.

Best regards


Thank you for your ideas. So I'll stick with the normal integration and try to optimize the code somehow…

Yes, optimizing the code that evaluates the function will be the best strategy.