Help on method(s) to create a Tree from multi-fold gamma-ray coincidence data

Okay. Thank you once again.

Wow! That’s great.

I have modified the code so that the histograms are filled in a specific way. I am attaching the modified version of the code, just in case you plan to work on it further.

Thank you very much.

can2root.cxx (43.7 KB)

I think we have a small problem:

I have compiled the code with -D_ZERO_SUPPRESSION_LEVEL_=2 and then ran it to generate a ROOT file.
After this, when I dumped the “non-zero” entries only, I still got zeros, as shown below:

1 : 1 : 1 : ( 8 : 0 , 458 ) <======== here CLT[8] = 0

75 : 1 : 2 : ( 2 : 2885 , 0 ) ( 12 : 2902 , 245 ) <======== here Eclab[2] = 0

Ideally, all those events are valid in which both CLT[j] and Eclab[j] are non-zero.

Attaching the “modified” version of the code again.

Ajay

can2root.cxx (43.7 KB)

You seem to be missing the point … see what’s after the line #if (_ZERO_SUPPRESSION_LEVEL_) > 1.

If you want to require that both “CLT” and “Eclab” are non-zero, replace the line (“pass” if non-zero “CLT” OR “Eclab”):

if ((CLT[n] == 0) && (Eclab[n] == 0)) continue;

with (“pass” if non-zero “CLT” AND “Eclab”):

if ((CLT[n] == 0) || (Eclab[n] == 0)) continue;

BTW. Note that “clov_mult” seems to be strictly related to non-zero “Eclab” only (i.e. it doesn’t care about “CLT”).

You are perfectly right! Not including “CLT” while evaluating “clov_mult” was logically wrong; however, it won’t create any artifacts in the histograms. Nevertheless, I have further modified the code to take care of this issue. The modified code is attached.

I have also compared the entries of a gamma-gamma matrix created with the sorting code against one created from the tree. They match exactly, which confirms that the tree can reliably be used for further data analysis. I am also attaching the macro which I used to create the 2D gamma-gamma histogram from the tree.

BTW, how big can a tree be? Is there any size limitation?

Further inputs to improve the code and/or usage of ROOT are highly appreciated.

Thanking you.

Ajay

can2root.cxx (43.4 KB)

canSortTTreeDump.cxx (3.7 KB)

Attached is a slightly improved source code (includes fixes for TSpectrum2 incompatibilities between ROOT 5 and 6).

BTW. There are hundreds of events which have a “CLT” of about 1.844e+19. This looks to me like some kind of an “overflow”, so I would remove these events.

can2root.cxx (43.8 KB)


Attached is a code which will take care of the TDC “overflow” issue while generating the tree. It will also count the number of occurrences where CLT[n] has overflowed.

I spent the whole day today trying to understand a weird (or is it normal?) behavior, described below:

Please take a look at line no. 1064 of the attached code. If there is a “.0” after “8192”, the resulting histogram is missing all entries with a negative value of “(CLT[j] - CLT[k])”, i.e. it gives only the “right” part of the TAC spectrum. However, the problem is resolved if I remove the “.0”, and the spectrum is then symmetric, as expected. This problem was NOT there when I was sorting the data in the RADWARE format.

Do you know where the problem is?

You can run the code as follows:

./can2root IITR_Eu152_10Jul16_2.001 test.root
“y”
“1”
“n”
“n”
“test_10Jul_2.001.dat”

Then have a look at “TACspectra”

can2root.cxx (43.9 KB)

It seems to me that you are challenging C/C++ math here.
All the relevant variables (i.e. “Time_Diff” and “CLT[16]”) are “unsigned long” (which means they hold non-negative values only).
So, “(CLT[j] - CLT[k])” is fine as long as “CLT[j] >= CLT[k]”.
I do not know offhand whether the C/C++ standard says how its math is expected to behave when “CLT[j] < CLT[k]” (maybe @pcanal and/or @Axel know it).

BTW. That’s possibly also why you sometimes get these 1.844e+19 “overflows”. If one interprets this “unsigned long” value as a “signed long” value, one gets a small negative number (unsigned long 18446744073709551615 = 64-bit hex 0xffffffffffffffff = signed long -1).

{
  // unnamed ROOT macro: demonstrates "unsigned long" wraparound;
  // note that the subtraction wraps BEFORE any cast to double is applied
  unsigned long CLT[2] = {1, 2}; // try with {1, 2} and then with {2, 1}
  std::cout << "CLT[0] = " << CLT[0] << std::endl;
  std::cout << "CLT[1] = " << CLT[1] << std::endl;
  unsigned long Time_Diff;
  Time_Diff = (CLT[0] - CLT[1]);
  std::cout << "(CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = double(CLT[0] - CLT[1]);
  std::cout << "double(CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = 1000 + (CLT[0] - CLT[1]);
  std::cout << "1000 + (CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = 1000.0 + (CLT[0] - CLT[1]);
  std::cout << "1000.0 + (CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = (CLT[1] - CLT[0]);
  std::cout << "(CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
  Time_Diff = double(CLT[1] - CLT[0]);
  std::cout << "double(CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
  Time_Diff = 1000 + (CLT[1] - CLT[0]);
  std::cout << "1000 + (CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
  Time_Diff = 1000.0 + (CLT[1] - CLT[0]);
  std::cout << "1000.0 + (CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
}

Oh! I see.
Anyway, thank you for clearing my doubt, and for all the help. I owe you a lot for your guidance, support, and for teaching me several aspects of ROOT.

I want to move a small step forward with the data analysis. I have written a small macro (attached) to generate a “gamma-gamma” matrix from a ROOT Tree. This works great for a single ROOT file.

But, as I mentioned in my very first post, I have about 100 data files which I cannot convert into a single ROOT file. Hence, I will have multiple ROOT files with the same Tree structure, which I want to use to generate a single 2-D gamma-gamma histogram using the attached macro.

Can you please help/guide me in this? Let’s assume that I have 2 ROOT files (attached).

Regards,

Ajay

gen_MatGG.cxx (2.9 KB)

ROOT File1: https://www.dropbox.com/s/lsafwzxc0lkqgrf/test1.root?dl=0
ROOT File 2: https://www.dropbox.com/s/taiavbi15km3ukl/test2.root?dl=0

In general, you could create as many ROOT files with histograms as you have ROOT files with trees and then “combine” your histograms using hadd (run hadd -? for help).

You can also use a TChain to analyse all your ROOT files with trees (and create a single ROOT file with histograms).

gen_MatGG.cxx (3.2 KB)

Note that the “his2D” histogram is filled twice with every pair of “Eclab” values. So, maybe after the loops you should execute his2D->Scale(0.5);, or you could simply use his2D->Fill(x, y, 0.5); instead.


Hello,

Thank you very much for your reply, much appreciated.

Please note that I am intentionally filling “his2D” twice, so that it will be symmetric. That is the usual practice in gamma-ray spectroscopy. In the macro that you have attached, I have not understood why you have added “0.5” to Eclab[j] and Eclab[k] before filling “his2D”. Is it really required?

The next step (and almost the final one!) of the analysis is to generate a 3D histogram (possibly symmetric, like the 2D one), which I tried by modifying our earlier “gen_MatGG.cxx” macro into “gen_Cub.cxx” (attached). You will notice that I have reduced the dimensions to 2048 channels. When I run the macro, ROOT crashes. Please find the detailed crash report in the attached file. I have checked: if I reduce the 3D-histogram dimensions to “500”, it works fine. But a 3D histogram with only 500 channels is of no use to me. Further, if I increase the dimensions even to “512”, ROOT throws the following error:

Error in TRint::HandleTermInput(): std::bad_alloc caught: std::bad_alloc

What is going wrong? How can I rectify this problem?

Currently I am using ROOT 6.14/00 on 32-bit Fedora 20.

Thanking you once again.

Ajay

gen_Cub.cxx (3.6 KB)

CrashReport.txt (7.6 KB)

I understand that you want to make your “his2D” histogram “symmetric”. I just say that maybe you should also care about the “statistics” of your histogram (unless you do not need / use it afterwards at all). If you use the same pair twice, your “statistics” is artificially increased (doubled). So, you should either fill the histogram explicitly setting the “weight = 0.5” (i.e. his2D->Fill(x, y, 0.5);), or “scale” the histogram by a factor 0.5 afterwards (i.e. his2D->Scale(0.5);).

Well, I assume that your “Eclab” values come from some kind of an ADC converter, so they are actually “discrete integer values”. I guess the real “average signal height” value for every ADC channel should be increased by half of the ADC channel’s width (i.e. 0.5). This is not strictly needed, of course. It’s up to you if you want to apply this correction.

There exists, however, some problem with the bin-edge calculation in ROOT (coming from floating-point “rounding errors”) which you can easily overcome if you add some small “shift” to the integer ADC values. This “shift” does not need to be 0.5, of course. A small value like 1.e-6 (or even 1.e-9) would be sufficient:

double x = (Eclab[j] + 1.e-6), y = (Eclab[k] + 1.e-6);

In general, a TH3 histogram uses at least (nbins_x + 2) * (nbins_y + 2) * (nbins_z + 2) * bytes_per_cell bytes (or double this amount if TH1::Sumw2 is effective).
So, for a “cubic” TH3F (a “single-precision floating-point” value uses 4 bytes) and nbins_x = nbins_y = nbins_z = 2048, this would be at least 32GB (or 64GB) of RAM.

However, because ROOT uses “Int_t” (32 bit signed integer) values for “global bin” numbers, you must make sure that (nbins_x + 2) * (nbins_y + 2) * (nbins_z + 2) <= 2147483647 (e.g. that would be nbins_x = nbins_y = nbins_z <= 1288).

That said, for CubDim = 512, your “his3D” will use at least 518MB (or 1.01GB) RAM.

Try to check:
[bash]$ ulimit -S -a
[bash]$ ulimit -H -a
[tcsh]$ limit
[tcsh]$ limit -h
and make sure that you do not exceed any limits.


Just FYI and way too late: yes, AFAIK the wraparound is by design for unsigned.

I appreciate your patience in explaining the things in details to me. Thanks a ton!

After your last reply, I tried to generate the 3D histogram on three machines with different configurations. Every time it crashed without generating the 3D histogram. The details are given below:

[1] Machine 1: 32-bit Fedora20, RAM: 4GB, ROOT 6.14/00 (linux)
With “CubDim = 512”, ROOT crashed with the report attached earlier, even though, as per the calculations, only about 518MB is required to run the macro.

[2] Machine 2: 64-bit Fedora18, RAM: 8GB, ROOT 5.34/09 (linuxx8664gcc)
I can generate a cube with “CubDim = 512”. However, when I changed “CubDim” to 800, I got the following error:
Error in TBufferFile::WriteByteCount: bytecount too large (more than 1073741822).

With an even larger “CubDim” (= 1024), ROOT crashed with the following report:
CrashReport_Root5.txt (6.9 KB)

[3] Machine 3: 64-bit Fedora20, RAM: 32GB, ROOT 6.15/01 (linuxx8664gcc)
Works with “CubDim = 512”. But, with “CubDim = 800”, it again throws the error:
Error in TBufferFile::WriteByteCount: bytecount too large (more than 1073741822)

While “CubDim = 1024” resulted in a crash. See the crash report here:
CrashReport_Root6_15.txt (10.8 KB)

It would even be okay if using the TH3S or TH3I types could resolve the issue.

These are two separate issues.

[1] On a 32-bit system, the virtual memory address-space which a single process (i.e. your running “root”) can use is usually limited to around 2GB (combined machine code, data and shared libraries). So, you will hit this limit very soon. Try to run “top” in a separate terminal window and observe e.g. the “VIRT” and “%MEM” fields.

[2] and [3] On a 64-bit system you can usually use as much RAM (plus “swap space”) as your system provides. However, you meet here a 1GB ROOT TFile buffer/basket limit. AFAIK, the data size for any single entity, which you try to save / retrieve, is limited to 1GB minus 2 bytes (i.e. 1073741822 bytes, I guess @pcanal may say something more precise / up-to-date). For a “cubic” TH3F, this would be nbins_x = nbins_y = nbins_z <= 643 (or 509). Note that, even if you used a TH3C (histograms with one byte per cell so with maximum cell content = 127), for a “cubic” histogram you would still be limited to nbins_x = nbins_y = nbins_z <= 1021 (or 810).


So what is the way out?

I forgot to mention in my earlier post that:
[1] Machine 1 has 4 cores,
[2] Machine 2 has 8 cores, and
[3] Machine 3 has 24 cores,
in case that is of any use!

I was going through Creating TH3 larger than 810x810x810, where it is mentioned that THnSparse can be used to work around this issue. But I don’t know how. Is it easy to modify “gen_Cub.cxx” to use THnSparse?

Ajay

See the THnSparse class reference and the source code files returned by:

grep -l -r THnSparse ${ROOTSYS}/t[eu]*

gen_Cub.cxx (5.1 KB)


Now, I am left with no words to express my gratitude to you. I owe you a lot for the way you have smoothly taken me from my first post to this point, where I find myself more confident in using ROOT. My objective of analyzing multi-fold gamma-ray coincidence data using ROOT can now be easily achieved.

I know I have touched only the tip (or maybe an even smaller portion) of the iceberg. I will keep exploring ROOT and will get back to you.

Thank you once again for all your help and suggestions that I have received over the past 2 weeks.

BTW, what is the best way to acknowledge help from experts, so that everyone on the ROOT Forum will know about your time & sincere efforts?

Ajay

Hello again!
Can you please do me one more favor? I want to project the THnSparse to generate various 1D histograms based on a selection of gamma-ray energies.

Let’s assume that we have a symmetric 3D THnSparse. Now, I want to generate a 1D histogram when two gamma rays are in coincidence. For example, let us assume that the bin range for the 1st gamma ray is 240 - 250 (X-axis range) and for the 2nd gamma ray the range is 385 - 395 (Y-axis range). I am interested in generating the histogram corresponding to the Z-axis for “(X-axis range) && (Y-axis range)”. This histogram will contain the gamma rays which are in coincidence with BOTH.

I read some examples and went through the tutorials on THnSparse, but it is not clear to me how to proceed.

BTW, “TH2D* h2proj = his3D->Projection(1, 0)” gives a 2D histogram. Is it possible to generate such 2D histograms by selecting a particular gamma ray (or a zmin, zmax range) on the Z-axis?