Help on methods to create a Tree from multi-fold gamma-ray coincidence data

Hello,

Thank you very much for your reply; much appreciated.

Please note that I am intentionally filling “his2D” twice, so that it will be symmetric. That is the usual practice in gamma-ray spectroscopy. In the macro that you attached, I have not understood why you have added “0.5” to Eclab[j] and Eclab[k] before filling “his2D”. Is it really required?

The next step (and almost the final one!) of the analysis is to generate a 3D histogram (possibly symmetric, like the 2D one), which I tried by modifying our earlier “gen_MatGG.cxx” macro into “gen_Cub.cxx” (attached). You will notice that I have reduced the dimensions to 2048 channels. When I run the macro, ROOT crashes. Please find the detailed crash report in the attached file. I have checked that if I reduce the 3D-histogram dimensions to “500” then it works fine, but a 3D histogram with only 500 channels is of no use to me. Further, if I increase the dimensions even to “512”, ROOT throws the following error:

Error in TRint::HandleTermInput(): std::bad_alloc caught: std::bad_alloc

What is going wrong? How can I rectify this problem?

Currently I am using ROOT 6.14/00 on 32-bit Fedora 20.

Thanking you once again.

Ajay

gen_Cub.cxx (3.6 KB)

CrashReport.txt (7.6 KB)

I understand that you want to make your “his2D” histogram “symmetric”. I am just saying that you should perhaps also care about the “statistics” of your histogram (unless you do not need / use it afterwards at all). If you use the same pair twice, your “statistics” is artificially increased (doubled). So, you should either fill the histogram with an explicit “weight = 0.5” (i.e. his2D->Fill(x, y, 0.5);), or “scale” the histogram by a factor of 0.5 afterwards (i.e. his2D->Scale(0.5);).
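For example, a minimal sketch of the symmetric fill with an explicit weight of 0.5 (the variable names “fold” and “Eclab” and the loop structure are my assumptions about your macro, not its actual code):

for (Int_t j = 0; j < fold; j++) {
  for (Int_t k = j + 1; k < fold; k++) {
    double x = Eclab[j], y = Eclab[k];
    his2D->Fill(x, y, 0.5); // the pair (x, y) ...
    his2D->Fill(y, x, 0.5); // ... and its mirror (y, x)
  }
}

Alternatively, keep the default weight of 1 in both Fill calls and call his2D->Scale(0.5); once, after the event loop.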

Well, I assume that your “Eclab” values come from some kind of ADC, so they are actually “discrete integer values”. I guess the real “average signal height” value for every ADC channel should be increased by half of the ADC channel’s width (i.e. 0.5). This is not strictly needed, of course. It’s up to you whether you want to apply this correction.

There exists, however, some problem with the calculation of bin edges in ROOT (coming from floating-point “rounding errors”), which you can easily overcome if you add some small “shift” to the integer ADC values. This “shift” does not need to be 0.5, of course. A small value like 1.e-6 (or even 1.e-9) would be sufficient:

double x = (Eclab[j] + 1.e-6), y = (Eclab[k] + 1.e-6);

In general, a TH3 histogram uses at least (nbins_x + 2) * (nbins_y + 2) * (nbins_z + 2) * bytes_per_cell bytes (or the double of this amount if TH1::Sumw2 is effective).
So, for a “cubic” TH3F (a “single-precision floating-point” value uses 4 bytes) and nbins_x = nbins_y = nbins_z = 2048, this would be at least 32 GB (or 64 GB) of RAM.

However, because ROOT uses “Int_t” (32 bit signed integer) values for “global bin” numbers, you must make sure that (nbins_x + 2) * (nbins_y + 2) * (nbins_z + 2) <= 2147483647 (e.g. that would be nbins_x = nbins_y = nbins_z <= 1288).

That said, for CubDim = 512, your “his3D” will use at least 518MB (or 1.01GB) RAM.
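A quick back-of-the-envelope check of those numbers (a sketch you can paste into the ROOT prompt; it only counts the cell array of a plain TH3F, nothing else):

Long64_t n = 2048; // try also 512 and 1288
Long64_t cells = (n + 2) * (n + 2) * (n + 2); // includes under/overflow bins
std::cout << cells << " cells (must stay <= 2147483647), "
          << cells * 4. / (1024. * 1024. * 1024.) << " GB for a TH3F" << std::endl;

For n = 2048 this gives about 32.1 GB, for n = 512 about 0.5 GB, and for n = 1288 the cell count (2146689000) is just below the Int_t limit.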

Try to check:
[bash]$ ulimit -S -a
[bash]$ ulimit -H -a
[tcsh]$ limit
[tcsh]$ limit -h
and make sure that you do not exceed any limits.


Just FYI and way too late: yes, AFAIK the wraparound is by design for unsigned.

I appreciate your patience in explaining things in detail to me. Thanks a ton!

After your last reply, I tried to generate the 3D histogram on three machines with different configurations. Every time it crashed without generating the 3D histogram. The details are given below:

[1] Machine 1: 32-bit Fedora20, RAM: 4GB, ROOT 6.14/00 (linux)
With “CubDim = 512”, ROOT crashed with the report attached earlier, whereas, as per the calculations, at least 518 MB is required to run the macro.

[2] Machine 2: 64-bit Fedora18, RAM: 8GB, ROOT 5.34/09 (linuxx8664gcc)
I can generate a cube with “CubDim = 512”. However, when I changed “CubDim” to 800, I got the following error:
Error in TBufferFile::WriteByteCount: bytecount too large (more than 1073741822).

With an even larger “CubDim” (= 1024), ROOT crashed with the following report:
CrashReport_Root5.txt (6.9 KB)

[3] Machine 3: 64-bit Fedora20, RAM: 32GB, ROOT 6.15/01 (linuxx8664gcc)
It works with “CubDim = 512”. But with “CubDim = 800” it again throws the error:
Error in TBufferFile::WriteByteCount: bytecount too large (more than 1073741822)

With “CubDim = 1024”, it again crashed. See the crash report here:
CrashReport_Root6_15.txt (10.8 KB)

It would even be okay if using the TH3S or TH3I types could resolve the issue.

These are two separate issues.

[1] On a 32-bit system, the virtual memory address-space which a single process (i.e. your running “root”) can use is usually limited to around 2GB (combined machine code, data and shared libraries). So, you will hit this limit very soon. Try to run “top” in a separate terminal window and observe e.g. the “VIRT” and “%MEM” fields.

[2] and [3] On a 64-bit system you can usually use as much RAM (plus “swap space”) as your system provides. However, here you hit a 1 GB ROOT TFile buffer/basket limit. AFAIK, the data size of any single entity which you try to save / retrieve is limited to 1 GB minus 2 bytes (i.e. 1073741822 bytes; I guess @pcanal may say something more precise / up-to-date). For a “cubic” TH3F, this would be nbins_x = nbins_y = nbins_z <= 643 (or 509). Note that, even if you used a TH3C (a histogram with one byte per cell, so with a maximum cell content of 127), for a “cubic” histogram you would still be limited to nbins_x = nbins_y = nbins_z <= 1021 (or 810).
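Just to show where the “643 / 509 / 1021 / 810” values come from, a small sketch (it assumes the 1073741822-byte limit quoted above and counts only the cell array, so the real limits can be slightly smaller):

Long64_t limit = 1073741822; // 1 GB minus 2 bytes
Long64_t bytesPerCell[4] = {4, 8, 1, 2}; // TH3F, 8-byte cells, TH3C, TH3S
for (Int_t i = 0; i < 4; i++) {
  Long64_t n = 1;
  while ((n + 3) * (n + 3) * (n + 3) * bytesPerCell[i] <= limit) n++;
  std::cout << bytesPerCell[i] << " bytes per cell -> cubic nbins <= " << n << std::endl;
}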


So what is the way out?

I forgot to mention in my earlier post that:
[1] Machine 1 has 4 cores
[2] Machine 2 has 8 cores &
[3] Machine 3 has 24 cores.
In case that is of any use!

I was going through “Creating TH3 larger than 810x810x810”, where it is mentioned that THnSparse can be used to work around this issue. But I don’t know how. Is it easy to modify “gen_Cub.cxx” to use THnSparse?

Ajay

See the THnSparse class reference and the source code files returned by:

grep -l -r THnSparse ${ROOTSYS}/t[eu]*

gen_Cub.cxx (5.1 KB)
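In case it helps, here is a minimal sketch of how a 3-dimensional THnSparseF could be created and filled (names like “CubDim”, “Eclab” and the indices j, k, l are assumptions on my side; the attached “gen_Cub.cxx” is the authoritative version):

const Int_t dim = 3;
Int_t bins[dim] = {CubDim, CubDim, CubDim};
Double_t xmin[dim] = {0., 0., 0.};
Double_t xmax[dim] = {(Double_t)CubDim, (Double_t)CubDim, (Double_t)CubDim};
THnSparseF *his3D = new THnSparseF("his3D", "his3D", dim, bins, xmin, xmax);
// ... later, inside the event loop, for every triple of coincident gamma energies
// (and, for a symmetric cube, also for their permutations):
Double_t point[dim] = {Eclab[j] + 1.e-6, Eclab[k] + 1.e-6, Eclab[l] + 1.e-6};
his3D->Fill(point);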


Now, I am left with no words to express my gratitude to you. I owe you a lot for the way you have smoothly taken me from my first post to this point, where I find myself more confident in using ROOT. My objective of analyzing multi-fold gamma-ray coincidence data using ROOT can now be easily achieved.

I know I have touched only the tip (or maybe an even smaller portion) of the iceberg. I will keep exploring ROOT and will get back to you.

Thank you once again for all your help and suggestions that I have received over the past 2 weeks.

BTW, what is the best way to acknowledge help from experts, so that everyone on the ROOT Forum will know about your time and sincere efforts?

Ajay

Hello again!
Can you please do me one more favor? I want to project the THnSparse to generate various 1D histograms based on the selection of gamma-ray energies.

Let’s assume that we have a symmetric 3D THnSparse. Now, I want to generate a 1D histogram when two gamma rays are in coincidence. For example, let us assume that the bin range for the 1st gamma ray is 240 - 250 (X-axis range) and for the 2nd gamma ray the range is 385 to 395 (Y-axis range). I am interested in generating a histogram along the Z-axis for “(X-axis range) && (Y-axis range)”. This histogram will contain the gamma rays which are in coincidence with BOTH of them.

I read some examples and went through tutorials on THnSparse, but it is not clear to me how to proceed.

BTW, “TH2D* h2proj = his3D->Projection(1, 0)” gives a 2D histogram. Is it possible to generate such 2D histograms by selecting a particular gamma ray (or a zmin, zmax range) on the Z-axis?

You could try (you can call TAxis::SetRange for any axis, of course):

TAxis *a2 = his3D->GetAxis(2); // the "Z" axis
a2->SetRange(a2->FindFixBin( zmin ), a2->FindFixBin( zmax ));
TH2D *h2proj = his3D->Projection(1, 0); // "Y" axis versus "X" axis

Thank you. I have also successfully generated a 1D histogram by selecting the Y and Z ranges as follows:

TFile *f = new TFile("test_cub.root")
TAxis *a = his3D->GetAxis(2);
a->SetRange(a->FindFixBin( 235 ), a->FindFixBin( 245 ));
TAxis *b = his3D->GetAxis(1);
b->SetRange(b->FindFixBin( 485 ), b->FindFixBin( 495 ));
TH1D *pjx = new TH1D();
pjx = his3D->Projection(0);
pjx->Draw()

Am I doing the right thing?

BTW, is it possible to convert a 3D THnSparse into a 3D histogram?

You should have:

TH1D *pjx = his3D->Projection(0);

See: TH3D *Projection (Int_t xDim, Int_t yDim, Int_t zDim, Option_t *option="") const
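For example (a hedged one-liner; I am assuming the axis numbers 0, 1, 2 correspond to your EclabX, EclabY, EclabZ dimensions):

TH3D *h3 = his3D->Projection(0, 1, 2); // full 3D THnSparse -> TH3D

Keep in mind that the TH3 size limits discussed earlier then apply to the resulting TH3D.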

BTW, for the magic “option”, see THnBase::Projection (@couet note that this description is currently missing in the THnBase::Projection “master” documentation and, moreover, there is no information on where to find this description in THnSparse::Projection).

The description is in the source if you click on 188. It should be put in doxygen format.

After building almost all the tools required for multi-fold gamma-ray coincidence data analysis, today I tried to sort ~8 GB of data (28 data files) into a tree (“canSort”) using “can2root.cxx”. This generated a ROOT file, “sixT2MeV.root” (3.2 GB). When I later use this ROOT file to create a symmetric 3D THnSparse (“his3D”) from “gen_Cub.cxx” with “CubDim = 4096”, I get the following warning:

Warning: MaxNbins reached for the his3D.

his3D->Print(“all”); gives

THnSparseT<TArrayF> (*0x19a0590): "his3D" "his3D"
  3 dimensions, 1.71278e+07 entries in 16777217 filled bins
    axis 0 "EclabX": 4096 bins (0..4096), variable bin sizes
    axis 1 "EclabY": 4096 bins (0..4096), variable bin sizes
    axis 2 "EclabZ": 4096 bins (0..4096), variable bin sizes

Does the above warning imply a loss of data/events?
It is not clear to me where and how to find a description of the limiting values for the possible number of bins in a THnSparse. I know that in our case it is CubDim * CubDim, but why?

Yes, this warning means that some data have been skipped.

I set “MaxNbins = CubDim * CubDim;” just to limit the amount of RAM used to something small (i.e. 64 MB in this case), but you can set whatever you want, e.g. “MaxNbins = 268435454;” (at least around 1 GB of RAM may then be needed when a bin content is held by a Float_t).
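To make it concrete, this is one way such a guard could look inside the filling loop (just a sketch of the idea; your actual “gen_Cub.cxx” may implement it differently):

Long64_t MaxNbins = CubDim * CubDim; // or e.g. 268435454 (~1 GB of Float_t bin contents)
if (his3D->GetNbins() < MaxNbins) { // GetNbins() = number of currently filled bins
  his3D->Fill(point);
} else {
  // in practice, print the warning only once:
  std::cout << "Warning: MaxNbins reached for the his3D." << std::endl;
}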

More methods’ headers are now in the doxygen format.

Strange behavior of different types of projections:

I am trying to understand the behavior and performance of TH3D and THnSparse. I have generated TWO different ROOT files, one using TH3D and another using THnSparse, with the same conditions. I used the 3D histogram (his3D) in them to generate 1D projections using different methods, as follows:

==============================================================
METHOD1: (using TH3D Class)

TFile *f = new TFile("cub_his3D.root");
his3D->GetXaxis()->SetRangeUser(668,672);
his3D->GetYaxis()->SetRangeUser(449,453);
his3D->Project3D("z")->Draw();

This gives me 16566 entries in the resulting histogram.

METHOD2: (using TH3D Class)

TFile *f = new TFile("cub_his3D.root")
his3D->SetAxisRange(668,672,"X");
his3D->SetAxisRange(449,453,"Y");
TH1D *pjz1 = his3D->Project3D("Z");

This gives me 19268 entries in the resulting histogram.

METHOD3: (using THnSparse Class)

TFile *f = new TFile("cub_sparse.root");
his3D->GetAxis(0)->SetRangeUser(668,672);
his3D->GetAxis(1)->SetRangeUser(449,453);
TH1D *pjz = his3D->Projection(2);
pjz->Draw();

In this case the resulting 1D-histogram has ONLY 10663 entries.

METHOD4: (using THnSparse Class)

TFile *f = new TFile("cub_sparse.root");
TAxis *a0 = his3D->GetAxis(0);
TAxis *a1 = his3D->GetAxis(1);
a0->SetRange(a0->FindFixBin( 668 ), a0->FindFixBin( 672 ));
a1->SetRange(a1->FindFixBin( 449 ), a1->FindFixBin( 453 ));
TH1D *pjz = his3D->Projection(2);
pjz->Draw();

Now the total entries in the 1D-histogram are 12435.

==============================================================

You see, there is a problem! The entries vary from 10663 to 19268 depending on the method of projection and the class type. I get more counts with the 2nd method. But the problem (as discussed in earlier posts) is that I cannot create a TH3D with large dimensions. For the present exercise, I have limited the dimensions to only 620.

Is the above behavior expected? Which one of the above methods should I trust?

For “METHOD1” and “METHOD2” you should use:

TFile *f = TFile::Open("cub_his3D.root");
TH3 *his3D; f->GetObject("his3D", his3D);

For “METHOD3” and “METHOD4” you should use:

TFile *f = TFile::Open("cub_sparse.root");
THnSparse *his3D; f->GetObject("his3D", his3D);

I’m not sure if the “number of entries” is what you want. I guess you’d better try with the TH1::Integral method.

Note also that, in any operation, entire bins are always taken into account, so if you want to compare numbers, you need to make sure that e.g. zaxis->GetBinLowEdge(zaxis->FindFixBin( zmin )) and zaxis->GetBinUpEdge(zaxis->FindFixBin( zmax )) (and so on) are equal for all histograms that you want to compare (btw. following the standard convention for numbering bins, the lower edge is included but the upper edge is excluded).

Please compare input parameters of the TAxis::SetRange, the TAxis::SetRangeUser and the TH1::SetAxisRange methods.
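As a hedged sketch of what I mean (file and object names taken from your posts; I am assuming both histograms were booked with identical axis binning), compare the two projections on an equal footing:

TFile *f1 = TFile::Open("cub_his3D.root");
TH3 *h3; f1->GetObject("his3D", h3);
Int_t bx1 = h3->GetXaxis()->FindFixBin(668), bx2 = h3->GetXaxis()->FindFixBin(672);
Int_t by1 = h3->GetYaxis()->FindFixBin(449), by2 = h3->GetYaxis()->FindFixBin(453);
h3->GetXaxis()->SetRange(bx1, bx2);
h3->GetYaxis()->SetRange(by1, by2);
TH1 *p1 = h3->Project3D("z");

TFile *f2 = TFile::Open("cub_sparse.root");
THnSparse *hs; f2->GetObject("his3D", hs);
hs->GetAxis(0)->SetRange(bx1, bx2); // the very same bin indices as above
hs->GetAxis(1)->SetRange(by1, by2);
TH1D *p2 = hs->Projection(2);

Int_t bz1 = p1->GetXaxis()->FindFixBin(100), bz2 = p1->GetXaxis()->FindFixBin(700);
std::cout << p1->Integral(bz1, bz2) << " vs " << p2->Integral(bz1, bz2) << std::endl;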

Hi,
I have tried the methods that you suggested, but the problem still persists. Maybe I am not properly following what you said. Anyway, here is what I do:

======================

TFile *f = TFile::Open("cub_his3D.root");
TH3 *his3D; f->GetObject("his3D", his3D);

his3D->GetXaxis()->SetRangeUser(668,672);
his3D->GetYaxis()->SetRangeUser(449,453);
TH1D* pjz1 = (TH1D*)his3D->Project3D("z");
Int_t xlow = pjz1->GetXaxis()->GetBinLowEdge(pjz1->FindFixBin(100));
Int_t xhi = pjz1->GetXaxis()->GetBinUpEdge(pjz1->FindFixBin(700));
cout << pjz1->Integral(xlow,xhi) << endl;
12210

his3D->SetAxisRange(668,672,"X");
his3D->SetAxisRange(449,453,"Y");
TH1D *pjz2 = (TH1D*)his3D->Project3D("z");
Int_t xlow2 = pjz2->GetXaxis()->GetBinLowEdge(pjz2->FindFixBin(100));
Int_t xhi2 = pjz2->GetXaxis()->GetBinUpEdge(pjz2->FindFixBin(700));
cout << pjz2->Integral(xlow2,xhi2) << endl;
14186

======================

Just to exclude floating-point “slips”, could you please try with “668.1, 671.9”, “449.1, 452.9”, “100.1” and “699.9” (btw. are you sure these ranges do not go into the underflow or overflow bins?).