Dear all,

I’m performing 10 independent Monte Carlo simulations, each of which gives me a histogram. I then ‘**hadd**’ these 10 histograms to obtain a single total histogram (let’s call it Htot).

With ‘**hadd**’ I automatically get the mean and the standard deviation of Htot, and now I’m wondering how I could get the error on the standard deviation of Htot, in order to put error bars on the StdDev.

Moreover, how does the ‘**hadd**’ command calculate the new standard deviation and the new mean from the 10 aforementioned histograms? Is it simply the sum of the StdDevs of each histogram? And the sum of the means divided by 10?

For now I’ve calculated the error by hand using the fourth moment, but it’s very tedious and I’m no longer sure of what I’m writing down… anyway, I hope there is an “automatic” way of doing this.
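To show roughly what I’ve been computing by hand, here is a sketch in plain Python (the function name and the bin arrays are mine, not from ROOT; it assumes the large-n delta-method approximation SE(s) ≈ sqrt((m4 − σ⁴)/n) / (2σ), where m4 is the fourth central moment estimated from the bin contents):

```python
import math

def stddev_error(bin_centers, bin_contents):
    """Approximate error on the StdDev of a binned histogram via the
    fourth central moment (delta method, large-n approximation):
        Var(s^2) ~ (m4 - sigma^4) / n
        SE(s)    ~ SE(s^2) / (2 * sigma)
    bin_centers/bin_contents stand in for the TH1 bins; this is a
    hand-calculation sketch, not a ROOT call."""
    n = sum(bin_contents)
    mean = sum(w * c for c, w in zip(bin_centers, bin_contents)) / n
    var = sum(w * (c - mean) ** 2 for c, w in zip(bin_centers, bin_contents)) / n
    m4 = sum(w * (c - mean) ** 4 for c, w in zip(bin_centers, bin_contents)) / n
    sigma = math.sqrt(var)
    return math.sqrt((m4 - var ** 2) / n) / (2 * sigma)
```

For example, with three bins at centers [-1, 0, 1] and contents [25, 50, 25], this gives a small positive error on the StdDev, as expected. Is there a built-in way to get this directly from the merged histogram?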

I hope I’ve been concise enough; if not, I’ll give more details.

Thanks,

Pierre