I’m having trouble fitting a normalised (to 1) TF1 function to a normalised (to 1) TH1F histogram.
My function is normalised by construction: the normalisation constant itself depends on the function’s parameters, so the integral of the normalised function from x_min to x_max is 1 independently of the parameter values.
It seems that when I use this function to fit a normalised histogram, the TF1 is rescaled by some hidden normalisation constant that appears to depend on the number of bins of the histogram I’m fitting and on x_max - x_min. Is that possible? If so, how can I find out what this constant is and how it is computed?
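To illustrate what I mean (plain Python, no ROOT): if a histogram is normalised so that the *sum of bin contents* is 1, each bin content approximates pdf(centre) * bin_width rather than pdf(centre), so a function whose *integral* is 1 differs from the bin contents by exactly the bin width (x_max - x_min)/n_bins — which matches the dependence I’m seeing. The density used here is just an example:

```python
def pdf(x):
    """A unit-normalised example density on [0, 1]: integral of 3*x^2 is 1."""
    return 3.0 * x * x

x_min, x_max, n_bins = 0.0, 1.0, 50
bin_width = (x_max - x_min) / n_bins

# Expected bin contents for a histogram normalised to unit *sum of contents*.
contents = []
for i in range(n_bins):
    centre = x_min + (i + 0.5) * bin_width
    contents.append(pdf(centre) * bin_width)

total = sum(contents)
print(total)  # ~1: the contents sum to 1, like a unit-normalised histogram

# The ratio bin_content / pdf(centre) is the same for every bin:
centre0 = x_min + 0.5 * bin_width
print(contents[0] / pdf(centre0))  # 0.02 = (x_max - x_min) / n_bins
```

So the constant I’m after looks like it is simply the bin width, if the histogram was normalised by its content sum rather than by its integral.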
Digging into the TF1 source code I found the fNormalized boolean member and SetNormalized(True). After calling that in my Python fitting script I ran IsEvalNormalized() to check whether my function is evaluated as a normalised function. Why do I get
TF1::IntegralOneDim:0: RuntimeWarning: Error found in integrating function f1g in [0.000000,1.000000] using AdaptiveSingular. Result = nan +/- nan - status = 11
when the TF1 function is a simple (Norm(,))*x^2 +  function?
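My guess at the failure mode (I may be wrong): as far as I understand, a normalised TF1 evaluates as f(x) divided by the integral of f over the range, recomputed with the current parameter values; if a parameter drives that integral to zero, every evaluation becomes nan, which an integrator would then report as "Result = nan +/- nan". A plain-Python sketch of that logic — the function and names here are illustrative, not ROOT’s actual code:

```python
import math

def normalized_eval(f, x, x_min, x_max, n=1000):
    """Illustrative stand-in for a normalised TF1: f(x) / integral(f).

    Uses a midpoint-rule integral; ROOT uses adaptive quadrature, but the
    division by the integral is the point being sketched.
    """
    h = (x_max - x_min) / n
    integral = sum(f(x_min + (i + 0.5) * h) for i in range(n)) * h
    # If the parameters drive the integral to 0, the normalised value is
    # undefined -- modelled here as nan, which then propagates everywhere.
    return f(x) / integral if integral != 0.0 else math.nan

# Well-behaved parameter: the normalised value of norm*x^2 on [0, 1]
# is ~3*x^2 regardless of norm.
print(normalized_eval(lambda x: 5.0 * x * x, 0.5, 0.0, 1.0))  # ~0.75

# Degenerate parameter (norm = 0): every evaluation is nan.
print(normalized_eval(lambda x: 0.0 * x * x, 0.5, 0.0, 1.0))  # nan
```

If that is what is happening, the nan in my fit might just mean the fitter stepped through a parameter point where the normalisation integral vanished or was undefined.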