ROOT Version: 6.32.06
Platform: Linux
Compiler: gcc
Hi all,
I’ve been working on the energy distribution of muons, which I histogrammed in base-10 logarithms of the energy, and I have a couple of questions about how to interpret the results and convert them back to the linear scale.
I understand that the mean on the log scale can be converted back to the linear scale by taking the antilog (i.e., 10^(mean value)); if I’m not mistaken, this gives the geometric mean of the energies. However, I’m uncertain about how to handle the standard deviation in this case.
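For reference, here is a minimal sketch of what I’m doing (the histogram name, the binning, and the sample energies below are just placeholders for my actual data):

```cpp
// Minimal sketch: histogram log10 of the muon energies and
// convert the log-scale mean back to GeV via the antilog.
// (Histogram name, binning, and energies are placeholders.)
#include "TH1D.h"
#include "TMath.h"
#include <cstdio>

void logMeanSketch() {
   TH1D hLogE("hLogE", "Muon energy;log_{10}(E/GeV);Entries", 50, 0., 4.);

   // Placeholder energies in GeV; in my real code these come from the data.
   const double energies[] = {12.5, 31.6, 8.4, 120., 47.3, 19.9};
   for (double e : energies) hLogE.Fill(TMath::Log10(e));

   const double meanLog = hLogE.GetMean();    // mean of log10(E)
   const double sdLog   = hLogE.GetStdDev();  // SD of log10(E)

   // Antilog of the mean -> geometric mean in GeV.
   const double geoMeanGeV = TMath::Power(10., meanLog);
   printf("mean(log10 E) = %.3f  ->  geometric mean = %.2f GeV\n",
          meanLog, geoMeanGeV);
   printf("SD(log10 E)   = %.3f  (how do I express this in GeV?)\n", sdLog);
}
```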
From what I understand, we cannot simply convert the log-scale standard deviation (SD) to the linear scale by taking the antilog the way we do with the mean: the SD is an additive spread in log space, so its antilog would act as a multiplicative factor rather than as a spread in GeV.
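To make my confusion concrete, here is a toy example with made-up numbers: if mean(log10 E) = 1.5 and SD(log10 E) = 0.3, then 10^1.5 ≈ 31.6 GeV, and the ±1 SD interval in log space maps to [10^(1.5−0.3), 10^(1.5+0.3)] ≈ [15.8, 63.1] GeV, which is asymmetric around 31.6 GeV. Meanwhile 10^0.3 ≈ 2.0 is dimensionless and looks like a multiplicative factor (a "geometric SD") rather than a spread in GeV. Is that the right way to think about it?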
Could you please clarify the correct way to interpret or convert the standard deviation from the log scale to the linear scale (GeV)?
Thanks.