Why shouldn't TRandom::Binomial be used for ntot>100?

Hi rooters,

I’ve been using TRandom::Binomial(Int_t ntot, Double_t prob) to generate mock data sets, where the total number of events is fixed and the signal/background ratio fluctuates binomially.

I noticed this in the ROOT documentation for this function:
Note: This function should not be used when ntot is large (say >100).
The normal approximation is then recommended instead

Could you please tell me why this function is not good for ntot > 100? In my test with ntot = 737 and sig_frac = 0.4, the output distribution looks fine, with mean ≈ np and RMS ≈ sqrt(np(1-p)). Is there some other reason this function should not be used for ntot > 100?

Also, 100 seems like an arbitrary number. How was it chosen?

Thanks in advance.

Xiaowen

Hi,
The note in the documentation is probably too strong. The routine is correct; it just takes longer to generate binomial numbers for large ntot, since ntot uniform random numbers have to be generated per call.
A more efficient algorithm using inversion should be added for the large-ntot case.

Best Regards

Lorenzo

Thank you for your reply!
