TF1::Integral

Hello.

I use my own compiled normalized Gaussian (see below).

//Create TF1.
NormFunc = new TF1("normFunc", normGaus, 4.6, 6.0, 3);
NormFunc->SetParameters(46.8974, 5.28516, 0.0082587);
Double_t norm = NormFunc->Integral(4.6, 6.0);
cout << "Norm:" << norm << endl;

// Output:
// Norm:9.53183e-07

// If I change it to
// Double_t norm = NormFunc->Integral(5.0, 6.0);
// I get the correct output:
// Norm:46.8974
// NormFunc->SetNpx(1000) does not help.
// It seems like the integrator misses the spike.
// How do I change the number of integration points?
// Many thanks.

//////////////////////////////////////////////////////////////////////////////////////
Double_t normGaus(Double_t *x, Double_t *par) {
   double xx   = x[0];
   double norm = par[0];
   double mean = par[1];
   double sigm = par[2];

   const double invSq2Pi = 1.0 / TMath::Sqrt(2.0 * TMath::Pi());

   return norm * invSq2Pi / sigm * TMath::Exp(-0.5 * (xx - mean) * (xx - mean) / (sigm * sigm));
}

Your function normGaus is already available as TMath::Gaus when you specify its last argument norm=kTRUE.

In TFormula/TF1, I have added a new keyword "gausn" (now in CVS).
When "gausn" is specified, you now get a normalized Gaussian. For example, you can do:
myhist->Fit("gausn");
as an alternative to
myhist->Fit("gaus");

Rene

Hello.

Thanks for the quick answer.
Sure, I'll try to use TMath::Gaus.

My problem seems a bit different. When the integration
limits are much wider than the Gaussian, the integrator
misses the spike. If I change the limits a bit, Integral returns
the right number. I could not find a way to increase the number
of integration points manually. TF1::SetNpx() only
affects the drawing, not the integration process.

Is there any way to work around this?

Thanks.

Boris.

Hi Rene,

I am experiencing a problem with TF1::IntegralMultiple(Int_t n, const Double_t* a, const Double_t* b, Int_t minpts, Int_t maxpts, Double_t epsilon, Double_t& relerr, Int_t& nfnevl, Int_t& ifail)

(The quoted example below is similar to what I think I am experiencing.)

I actually have a 3-dim integral, so am using TF3. Root version 4.04/02d
on a linux machine with gcc version 3.4.3.

My integrand is very complex: it has at least one sharp peak, and the bounds are very wide. Unfortunately, I cannot easily provide you with an example. I've been setting minpts to 10000, maxpts to 500000, and epsilon to 0.001.

This integral is evaluated at every point in a likelihood curve. For some events, there are no problems – the relerr is below epsilon and ifail is 0.
These events have very nice, smooth curves.

However, in many events the integration does not seem to work properly.
For some of the likelihood points, relerr never reaches epsilon (even if I increase maxpts significantly). When this happens, ifail is set to 1, which is the expected behavior. But at other points relerr does not reach epsilon, maxpts is not reached, and ifail is still set to 0. Integration stops after only a few hundred calls to the integrand, generally far below minpts.

Why does the integration bail out before reaching epsilon or maxpts, and even before making minpts calls?

In the events where I have problems, I've histogrammed where in the integration space the routine samples, and I can tell that it is not finding the peak. As I mentioned, I have a sharp peak with very wide bounds. Is it possible to somehow reduce the step size so that the integration can find the peak?

Thanks,
Brian

[quote="brun"]
//Create TF1.
NormFunc = new TF1("normFunc", normGaus, 4.6, 6.0, 3);
NormFunc->SetParameters(46.8974, 5.28516, 0.0082587);
Double_t norm = NormFunc->Integral(4.6, 6.0);
cout << "Norm:" << norm << endl;

// Output:
// Norm:9.53183e-07

// If I change it to
// Double_t norm = NormFunc->Integral(5.0, 6.0);
// I get the correct output:
// Norm:46.8974
// NormFunc->SetNpx(1000) does not help.
// It seems like the integrator misses the spike.
// How do I change the number of integration points?
// Many thanks.

Rene[/quote]

Hi Brian,

It's hard to say why the integration stops too early without an example that reproduces it. In general, I can only say that the algorithm does not work as a grid search with a constant step size, so there is no step size you can simply reduce. Could you divide the integration space into smaller blocks, to make the peak more "visible" to the algorithm?

anna

Hi Anna,

Thanks for the reply.

I have a grid search in mind because I also use GSL's VEGAS routine to perform this integration. It works well, but it is a bit harder to tune for speed: the ROOT integration is 10x faster when it works correctly.

I could subdivide the integration space as you suggest, but this would most likely slow the integration down too much to be useful. I will think about it.

I've attempted to reproduce this problem (unsuccessfully) with a small macro. However, I do see behavior that may be related to this issue. In the attached file, I do a 2D integration over a double Gaussian, cutting away inside the integrand everything outside a circle of a given radius.

Running the macro performs the integration over several radii, with the output:

cut 200 val 1 relerr 1.46826e-06 nevl 10013 ifail 0
cut 13 val 0.520605 relerr 4.05209e-05 nevl 10013 ifail 0
cut 12.9 val 0.276897 relerr 9.07219e-06 nevl 10013 ifail 0
cut 12.8292 val 0.000169624 relerr 2.12654e-11 nevl 10013 ifail 0
cut 12.5 val 0 relerr nan nevl 999991 ifail 1

This at least exhibits the issues with wide integration bounds: small variations of the cut radius around 13 greatly change the integration result. Of course, it would be better to set the bounds more appropriately, but that is extremely difficult (if not impossible) for my integrand.

I appreciate any help you can give on this. It may be that this integration is not well suited for my purposes, and I will continue with VEGAS or other options.

Thanks,
Brian
IntegralMultiple.C (1.6 KB)

Hi, Brian,

Yes, the algorithm is not very stable with respect to moving the limits around, but at least here it gives an error message. It would be very interesting to look at a case where it exits before reaching maxpts, with ifail=0 and a wrong result.
In general, I'd say that for very wide limits and sharp peaks in unexpected places you will probably have to use VEGAS; the algorithm in IntegralMultiple is not suited for your case :( That is, of course, if breaking up the integration space doesn't help or costs too much time.
There are plans to add some GSL functions to ROOT as a separate optional library in one of the upcoming releases. Maybe that will make your life a little easier.

Anna,

Thanks again for looking at this.

Unfortunately, the code performing the integration is tied to my analysis code and cannot run outside of the CDF environment. I'm not sure that I can provide you with a copy of it, at least not one that you could compile and test yourself.

I was using IntegralMultiple to take advantage of ROOT's wrapper around the Fortran CERNLIB integration routine. It was no trouble for me to implement GSL's integration myself, though, as you say, such a library may be helpful for other users.

Brian