Dear Root Experts,
The following simplified code illustrates the problem:
TProfile *temp1 = new TProfile("temp1","temp1",1,0.,1.,"s");
TProfile *temp2 = new TProfile("temp2","temp2",1,0.,1.,"s");
The only difference between the two profiles is the fill weight (10. vs 100.). I was expecting the spread to be 0 in both cases (as it should be by definition), but what I got is:
Is it possible to have default protection against such behaviour? Such an anomalously small spread causes trouble, for instance, if one wants to rebin and use 1/sigma^2 as a weight, since it is not clear which "smallest possible positive spread" should be taken into account.
P.S. ROOT 5.27/06b (tags/v5-27-06b@36516, Nov 07 2010, 14:54:49 on linuxx8664gcc)
Consider the following case: when an analysis is run in distributed mode (for instance on the Grid or on CAF), it regularly happens that on some worker node with a small allocated data sample a TProfile has only 1 entry in a certain bin, and correspondingly an "anomalously small spread" can occur for that bin.
For merging I am using the TFileMerger utility, and after merging the "anomalously small spread" seems to survive, even though that particular TProfile bin now has many entries and should have a well-behaved spread.
Is it possible somehow to cure this problem within the TFileMerger implementation itself?
Thanks and cheers,
Your anomalous spread is caused by numerical error. If you always fill the same double value in y, you might get this due to numerical error in the spread calculation, even if you have more than one entry.
You could try using TProfile::Approximate(true), or apply a cut-off in your 1/sigma^2 calculation afterwards.
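The cut-off could be sketched like this (a minimal suggestion, not a ROOT facility: the 1e-6 relative floor is an assumption, chosen well above the ~1e-8 level at which the rounding artefacts appear):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical cut-off for 1/sigma^2 weighting: clamp any bin spread below a
// relative floor to the floor itself, so a numerically-zero spread cannot
// blow up the weight. Returns 0 for a genuinely empty bin.
double safeWeight(double content, double spread, double relFloor = 1e-6) {
    double sigmaMin = relFloor * std::fabs(content);   // smallest trusted spread
    double sigma = std::max(spread, sigmaMin);
    if (sigma <= 0.) return 0.;                        // empty bin: no weight
    return 1.0 / (sigma * sigma);
}
```

With this, a bin whose stored spread is an artefact like 1e-9 gets the same (finite) weight as one with spread exactly 0, while bins with a genuine spread are weighted as usual.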
Numerical error causes a relative spread at the ~1e-8 level.