Hello,
I’m wondering what should happen to the mean error of a distribution when the histogram is scaled.
Here is a simple example:
{
TH1D* h = new TH1D("h","h",10,0,10);
h->Sumw2();
h->Fill(4, 2); // fill x=4 with weight 2
h->Fill(5);
h->Fill(6);
Stat_t stats_h[4] = {0.};
h->GetStats(stats_h);
for (int k = 0; k < 4; k++){
cout<<" stats_h["<<k<<"]="<<stats_h[k]<<endl;
}
cout<<"entries: "<GetEntries()<<endl;
cout<<"mean: "<GetMean()<<endl;
cout<<"RMS: "<GetRMS()<<endl;
cout<<“meanError: “<GetMeanError()<<endl;
cout<<”\n”<<endl;
TH1D* hclone=(TH1D*)h->Clone("hclone");
hclone->Scale(1./10);
Stat_t stats_hclone[4] = {0.};
hclone->GetStats(stats_hclone);
for (int k = 0; k < 4; k++){
cout<<" stats_hclone["<<k<<"]="<<stats_hclone[k]<<endl;
}
cout<<"entries: "<GetEntries()<<endl;
cout<<"mean: "<GetMean()<<endl;
cout<<"RMS: "<GetRMS()<<endl;
cout<<“meanError: “<GetMeanError()<<endl;
cout<<”\n”<<endl;
TCanvas *c1 = new TCanvas();
c1->Divide(1,2);
c1->cd(1);
h->Draw("E1");
c1->cd(2);
hclone->Draw("E1");
}
This is what the output looks like:
stats_h[0]=4
stats_h[1]=6
stats_h[2]=19
stats_h[3]=93
entries: 3
mean: 4.75
RMS: 0.829156
meanError: 0.414578
stats_hclone[0]=0.4
stats_hclone[1]=0.06
stats_hclone[2]=1.9
stats_hclone[3]=9.3
entries: 3
mean: 4.75
RMS: 0.829156
meanError: 1.31101
Take a look at the mean error: it is 0.41 for the unscaled histogram,
and 1.31 for the scaled histogram.
The mean is the same in the two cases, as I expected, but then I don’t understand why the errors differ.
Is this the way it is supposed to be?
Thank you in advance,
Angela.