_ROOT Version:_ 6.19

_Platform:_ macOS 10.14.3

I include all the files and code at the end.

I have two files of “raw histograms” of measurements taken under different conditions, but the number of data points is exactly the same in both.

I also have some “noise histograms”, which are the same for both.

I add the raw histograms and subtract the noise like this:

```python
hHist.Add(hNoise, -1.0 * n)
```

where n is just a scale factor. For example, if I took 20 rounds of measurements for the noise but only 15 for the actual measurements, the noise gets normalized (divided by 20) and multiplied by 15 to match the scale of the measurements.
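To illustrate what I mean by the scaling, here is a plain-Python sketch of the same arithmetic, with made-up bin contents and no ROOT involved (I am assuming n = 15/20 here, per the example round counts above):

```python
# Sketch of the noise-scaling step, using plain Python lists in place of
# ROOT histograms (bin contents only). Round counts are the example
# numbers from the text; bin contents are made up for illustration.
noise_rounds = 20
meas_rounds = 15
n = meas_rounds / noise_rounds  # 0.75: normalize noise, then match the measurement scale

raw = [120.0, 340.0, 80.0]    # hypothetical raw bin contents
noise = [40.0, 60.0, 20.0]    # hypothetical noise bin contents

# Equivalent of hHist.Add(hNoise, -1.0 * n), bin by bin
subtracted = [r - n * s for r, s in zip(raw, noise)]
print(subtracted)  # [90.0, 295.0, 65.0]
```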

Everything was working fine until I noticed that one of the data sets had much better results. I thought it was just the measurement conditions, until I looked at the number of entries: one easily has 10 times more than the other.

It was my understanding that if I add two histograms with 10 entries each, I get one with 20 entries, so I would expect both data sets to end up with the same number of entries. However, this is only sometimes the case.

To investigate this, I decided to print the number of entries after each step in the process. This is what I got (“sumando” means “adding up” in Spanish):

First data set:

```
sumando
3647.0
7294.0
10941.0
14588.0
18235.0
21882.0
25529.0
29176.0
32823.0
32823.0
noise entries
32823.0
data entries
32823.0
after noise subtraction
290765.26725
```

Second data set:

```
sumando
3647.0
7294.0
10941.0
14588.0
18235.0
21882.0
25529.0
29176.0
32823.0
32823.0
noise entries
32823.0
data entries
32823.0
after noise subtraction
1301369.43743
```
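For reference, the running “sumando” totals in both logs are just nine files of 3647 entries each being accumulated; a plain-Python check (no ROOT) reproduces them:

```python
# Sanity check of the running "sumando" totals: nine additions of
# 3647 entries each should accumulate to 32823 before the subtraction.
entries = 0.0
for _ in range(9):
    entries += 3647.0
    print(entries)  # 3647.0, 7294.0, ..., 32823.0

assert entries == 9 * 3647.0  # 32823.0
```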

As you can see, after the noise is subtracted the number of entries changes drastically; it even becomes fractional. I do not understand this at all. I have been reading the documentation, but I don’t see anything about this.

What am I doing wrong?

testtest 2.zip (1.5 MB)