Help on Method/s to create Tree from multi-fold gamma-ray coincidence data

I will be happy to share all the required information with you, but please note that the data files are around 300 MB. Can you suggest a way to share them? Can I simply upload them here?

Hello,

I am sharing a link to a .tgz file which contains all the information needed. But please forgive me for my poor coding techniques.

Regards,

Ajay

Ajay,

I think the best way is to take your code which reads your binary format and marry it with RDataFrame. In more detail, the skeleton could be:

auto N = 123; // <- here the number of events
ROOT::RDataFrame d(N);
d.Define("Energy", yourFunctionReturningEnergy).Define("Time", yourFunctionReturningTime).Snapshot("mytree", "mydataset.root");

Cheers,
Danilo

Danilo,

Thank you.

But I am looking for a simple and traditional way of generating a Tree. As you might have noticed from the code that - I am already extracting the information and filling histograms as follows:

(1) Filling timing spectra : his_TDC[n]->Fill(CLT[n]);
(2) Filling energy spectra: his_AB[n]->Fill(Eclab[n]);
(3) Getting multiplicity with: clov_mult = cl_mult.count();

I am not sure how to use RDataFrame, and also won’t be able to spend time on it at the moment.

Regards,

Ajay

Are you able to open the files?

The attached source code shows how you can save / retrieve “clovers” in / from a TTree (note: if you want a “non-zero-suppressed” TTree, add -D_ZERO_SUPPRESSION_LEVEL_=0 when compiling “canSort_new.cxx”).

It seems that ROOT 5.34 (which uses ZLIB level 1 by default) achieves about three times better compression than ROOT 6.14 (which uses LZ4 level 4 by default), without any significant difference in execution times. So, with ROOT 6 one should always remember to enforce the compression algorithm and its level (which is usually not needed with ROOT 5), e.g.:

f = new TFile(hisfile, "RECREATE", "", 101); // 101 = 100 * ROOT::kZLIB + 1

One can further improve the compression by another factor of two if one enforces LZMA (unfortunately, this will also double the execution time, but it may well be worth bearing), e.g.:

f = new TFile(hisfile, "RECREATE", "", 201); // 201 = 100 * ROOT::kLZMA + 1

Note: “man lzma” says that the “compression preset level” can be “-0 … -9”. It seems that ROOT is unable to use the “-0” (i.e. the “--fast”) level: when one tries “compress = 200” (= 100 * ROOT::kLZMA + 0), one gets uncompressed ROOT files. Manual trials show that ROOT files compressed with “-0” are almost the same size as with “-1”, while the time needed is around 20% shorter (at least). So it would really make sense to modify ROOT so that it recognizes “compress = 200” as “LZMA @ -0” (or maybe one could set the “actual LZMA level” = “ROOT LZMA compression level” - 1, so that “201” would be “-0”, “202” would be “-1”, and so on).

canSort_new.cxx (45.6 KB)
canSortTTreeDump.cxx (2.8 KB)


Your help and time are highly appreciated. It will certainly help me. I can see that you have been working on this ever since I posted the code :slight_smile:

Regards,

Ajay

Hello,

I was just wondering: why is there “&” before “clov_mult” and “No_Clover” when you define branches, while it is missing for the others?

Maybe it is a stupid question?

Ajay

That’s “simple variables” versus “arrays” (and anyhow, for these arrays, writing &array is equivalent to writing simply array; you can try it if you want).

Note that in the above “canSort_new.cxx” source code _clov_mult_ = clov_mult; so I could use the original clov_mult variable everywhere. But I decided to introduce a separate _clov_mult_, just for the TTree, in case you wanted to change its value / meaning in future (so that the original clov_mult could remain unchanged).


Okay. Thank you once again.

Wow! That’s great.

I have modified the code so that the histograms are filled in a specific way. I am attaching the modified version of the code, just in case you plan to work on it further.

Thank you very much.

can2root.cxx (43.7 KB)

I think we have a small problem:

I have compiled the code with -D_ZERO_SUPPRESSION_LEVEL_=2, and then ran it to generate a root file.
After this, when I dumped “non-zero” entries only, I still get zeros as shown below:

1 : 1 : 1 : ( 8 : 0 , 458 ) <======== here CLT[8] = 0

75 : 1 : 2 : ( 2 : 2885 , 0 ) ( 12 : 2902 , 245 ) <======== here Eclab[2] = 0

Ideally, all those events are valid in which both CLT[j] and Eclab[j] are non-zero.

Attaching the “modified” version of the code again.

Ajay

can2root.cxx (43.7 KB)

You seem to be missing the point … see what comes after the line #if (_ZERO_SUPPRESSION_LEVEL_) > 1.

If you want to require that both “CLT” and “Eclab” are non-zero, replace the line (“pass” if non-zero “CLT” OR “Eclab”):

if ((CLT[n] == 0) && (Eclab[n] == 0)) continue;

with (“pass” if non-zero “CLT” AND “Eclab”):

if ((CLT[n] == 0) || (Eclab[n] == 0)) continue;

BTW. Note that “clov_mult” seems to be strictly related to non-zero “Eclab” only (i.e. it doesn’t care about “CLT”).

You are perfectly right! Not including “CLT” while evaluating “clov_mult” was logically wrong; however, it won’t create any artifacts in the histograms. Nevertheless, I have further modified the code to take care of this issue. The modified code is attached.

I have also checked the entries in a gamma-gamma matrix created using the sorting code and from the tree. They match exactly. This confirms the reliability of using the tree for further data analysis. I am also attaching the macro which I used to create the 2-D gamma-gamma histogram from the tree.

BTW, how big can a tree be? Is there any size limitation?

Further inputs to improve the code and/or usage of ROOT are highly appreciated.

Thanking you.

Ajay

can2root.cxx (43.4 KB)

canSortTTreeDump.cxx (3.7 KB)

Attached is a slightly improved source code (includes fixes for TSpectrum2 incompatibilities between ROOT 5 and 6).

BTW. There are hundreds of events which have “CLT” of about 1.844e+19. This seems to me to be some kind of “overflow”, so I would remove these events.

can2root.cxx (43.8 KB)


Attached is a code which takes care of the TDC “overflow” issue while generating the tree. It will also report the number of occurrences where CLT[n] has overflowed.

I spent the whole day today understanding a weird (or is it normal?) behavior, described below:

Please take a look at line no. 1064 of the attached code. If there is “.0” after “8192”, it results in a histogram in which all the entries with a negative value of “(CLT[j] - CLT[k])” are missing, i.e. it gives only the “right” part of the TAC spectrum. However, this problem is resolved if I remove the “.0”, and then the spectrum is symmetric as expected. This problem was NOT there when I was sorting the data into the RADWARE format.

Do you know where the problem is?

You can run the code as follows:

./can2root IITR_Eu152_10Jul16_2.001 test.root
“y”
“1”
“n”
“n”
“test_10Jul_2.001.dat”

Then have a look at “TACspectra”

can2root.cxx (43.9 KB)

It seems to me that you are challenging C/C++ math here.
All relevant variables (i.e. “Time_Diff” and “CLT[16]”) are “unsigned long” (which means non-negative values only).
So, “(CLT[j] - CLT[k])” is fine as long as “CLT[j] >= CLT[k]”.
I do not know offhand whether the C/C++ standard specifies how this arithmetic is expected to behave when “CLT[j] < CLT[k]” (maybe @pcanal and/or @Axel know).

BTW. That’s possibly also why you sometimes get these 1.844e+19 “overflows”. If one interprets this “unsigned long” value as a “signed long” value, one gets a small negative number (unsigned long 18446744073709551615 = 64-bit hex 0xffffffffffffffff = signed long -1).

{
  // ROOT macro: demonstrates how "unsigned long" subtraction wraps
  // around when the result would be negative (converting to double
  // afterwards does not help, as the wrap has already happened).
  unsigned long CLT[2] = {1, 2}; // try with {1, 2} and then with {2, 1}
  std::cout << "CLT[0] = " << CLT[0] << std::endl;
  std::cout << "CLT[1] = " << CLT[1] << std::endl;
  unsigned long Time_Diff;
  Time_Diff = (CLT[0] - CLT[1]);
  std::cout << "(CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = double(CLT[0] - CLT[1]);
  std::cout << "double(CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = 1000 + (CLT[0] - CLT[1]);
  std::cout << "1000 + (CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = 1000.0 + (CLT[0] - CLT[1]);
  std::cout << "1000.0 + (CLT[0] - CLT[1]) = " << Time_Diff << std::endl;
  Time_Diff = (CLT[1] - CLT[0]);
  std::cout << "(CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
  Time_Diff = double(CLT[1] - CLT[0]);
  std::cout << "double(CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
  Time_Diff = 1000 + (CLT[1] - CLT[0]);
  std::cout << "1000 + (CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
  Time_Diff = 1000.0 + (CLT[1] - CLT[0]);
  std::cout << "1000.0 + (CLT[1] - CLT[0]) = " << Time_Diff << std::endl;
}

Oh! I see.
Anyway, thank you for clearing up my doubt, and for all the help. I owe you a lot for your guidance, support, and for teaching me several aspects of ROOT.

I want to move a small step forward with the data analysis. I have written a small macro (attached) to generate a “gamma-gamma” matrix from a ROOT Tree. This works great for a single ROOT file.

But, as I mentioned in my very first post, I have about 100 data files which I cannot convert into a single ROOT file. Hence, I will have multiple ROOT files with the same tree structure, which I want to use to generate a “single” 2-D gamma-gamma histogram using the attached macro.

Can you please help/guide me in this? Let’s assume that I have 2 ROOT files (attached).

Regards,

Ajay

gen_MatGG.cxx (2.9 KB)

ROOT File1: https://www.dropbox.com/s/lsafwzxc0lkqgrf/test1.root?dl=0
ROOT File 2: https://www.dropbox.com/s/taiavbi15km3ukl/test2.root?dl=0

In general, you could create as many ROOT files with histograms as you have ROOT files with trees and then “combine” your histograms using hadd (run hadd -? for help).

You can also use a TChain to analyse all your ROOT files with trees (and create a single ROOT file with histograms).

gen_MatGG.cxx (3.2 KB)

Note that the “his2D” histogram is filled twice with every pair of “Eclab” values. So maybe, after the loops, you should execute his2D->Scale(0.5); or you could simply use his2D->Fill(x, y, 0.5); instead.

BTW. See also: