Corrupting data when Updating file

Hi,

I am trying to UPDATE ROOT files produced by Geant4 runs. From an Ntuple in such a file I produce many histograms of interest and then would like to write them back into the file, so I create a folder in the ROOT file to put them in. The idea is eventually to loop over all the files, but I am currently stuck on not corrupting them.
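In outline, what the macro tries to do is the following (just a simplified sketch; the file, tree and branch names here are placeholders, not the exact ones from my macro):

   // Simplified sketch of the intended workflow; names are placeholders.
   #include "TFile.h"
   #include "TTree.h"
   #include "TH1F.h"

   void update_sketch()
   {
      TFile *f = TFile::Open("run_output.root", "UPDATE");  // reopen the Geant4 output file
      if (!f || f->IsZombie()) return;

      TTree *t = nullptr;
      f->GetObject("myNtuple", t);                          // the ntuple stored by Geant4
      if (!t) { f->Close(); return; }

      float edep = 0.f, step = 0.f;
      t->SetBranchAddress("edep", &edep);                   // placeholder branch names
      t->SetBranchAddress("step", &step);

      TDirectory *dir = f->mkdir("histos");                 // folder to hold the new histograms
      dir->cd();
      TH1F *h = new TH1F("hEdep", "Energy deposit;E_{dep};entries", 100, 0., 10.);

      for (Long64_t i = 0; i < t->GetEntries(); ++i) {
         t->GetEntry(i);
         h->Fill(edep);
      }

      dir->WriteTObject(h);                                 // write the histogram into the folder
      f->Close();
   }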
My main issue is that when using TFile(filename, “UPDATE”), my Ntuple is corrupted after my macro runs. If I try running the macro again I get
Error in TBranch::GetBasket: File: config300_alpha_6MeV_nucl (copy).root at byte:0, branch:row_wise_branch, entry:0, badread=0, nerrors=1, basketnumber=0

and then:
Trying to access a pointer that points to an invalid memory address…
Execution of your code was aborted.
In file included from input_line_8:1:
/home/f069g735/Desktop/essai/Write_histos2.C:92:5: warning: invalid memory pointer passed to a callee: hParticle[hpart]->Fill(step,edep);

If I open the Ntuple file through TBrowser I get the same GetBasket error, but I can still perform the automatic draw.

Here are the file opened in the macro (not corrupted) and my code:
Write_histos2.C (3.9 KB)

Could you help me understand what I am doing wrong?

Thank you in advance!

Florian

Two major problems:

   TNtuple* ntuple = (TNtuple*)myfile->Get("Landau");

However, the “Landau” object is ‘only’ a TTree. So use:

   TTree* ntuple = (TTree*)myfile->Get("Landau");

And the fatal one:

   const Int_t nLayers = 16; //number of layers
   const Int_t nMicrons= 300; //number of slices
   const Int_t nParticles=50000; // nb of particles
....
   TH1F* hParticle[nParticles*nLayers-1];

The array is too big and silently destroys the stack.
Use:

   TH1F** hParticle = new TH1F*[nParticles*nLayers-1];

(don’t forget to delete it afterwards, or better yet use a std::vector).
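A sketch of the std::vector version might look like this (the binning and histogram names are placeholders):

   #include <vector>
   #include "TH1F.h"
   #include "TString.h"

   const Int_t nLayers    = 16;
   const Int_t nParticles = 50000;

   // Heap-allocated, so it cannot blow the stack (but this many histograms
   // still uses a lot of memory):
   std::vector<TH1F*> hParticle(nParticles * nLayers, nullptr);

   for (Int_t i = 0; i < (Int_t)hParticle.size(); ++i)
      hParticle[i] = new TH1F(Form("hPart_%d", i), "Edep vs depth", 300, 0., 300.);

   // ... fill and write the histograms ...

   for (auto *h : hParticle) delete h;   // release them when done
   hParticle.clear();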

Cheers,
Philippe.

Hi,

Thanks for the reply.
I modified following your recommendation.
However my main issue is still happening: my code corrupts the input data (the TTree) if I use “UPDATE”.

Also, indeed the structure is a TTree, but Geant uses Ntuples. I used the Ntuple class previously since it was the usual definition used in the few examples of Plot.C macros provided by Geant.
No change seems to affect the behaviour, but I still corrupt my data.

I would be really grateful if you could help me more.
Thanks!

indeed the structure is a TTree, but Geant uses Ntuples.

I am not sure what Geant calls an ‘Ntuple’, but they store it as a TTree.

  KEY: TTree	Landau;1	Edep and TrackLength

Static-casting to a TNtuple will at best be ‘ignored’ (because the virtual table will still be that of a TTree) and at worst lead to random behavior and crashes.
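A type-checked retrieval makes this kind of mismatch visible immediately, for example (a sketch, using the file and key names from above):

   #include <cstdio>
   #include "TFile.h"
   #include "TTree.h"
   #include "TNtuple.h"

   void check_landau()
   {
      TFile *f = TFile::Open("config300_alpha_6MeV_nucl.root");
      if (!f || f->IsZombie()) return;

      TTree *tree = nullptr;
      f->GetObject("Landau", tree);   // stays nullptr if the key does not hold a TTree
      if (!tree) {
         printf("'Landau' is missing or not a TTree\n");
         return;
      }

      // dynamic_cast makes the mismatch explicit instead of silently mis-casting:
      TNtuple *nt = dynamic_cast<TNtuple*>(f->Get("Landau"));
      if (!nt)
         printf("'Landau' is a plain TTree, not a TNtuple\n");
   }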

I modified following your recommendation.
However my main issue is still happening: my code corrupts the input data (the TTree)

Are the symptoms exactly the same or slightly different?

Indeed the file is odd … It reports that it was written by ROOT v4 … but more importantly it is self-inconsistent. It claims that the original file name was 122 characters long when in fact it was only 43 characters long (“Result/alpha/config300_alpha_6MeV_nucl.root”). The extra 80 characters mean that TFile inadvertently overwrites the first basket when updating its header.

A simple workaround is to rewrite the file:

hadd -f config300_alpha_6MeV_nucl_v6.root config300_alpha_6MeV_nucl.root

The resulting file will have self-consistent information and be updatable.

Hi,

Indeed, the files are created in the folder Result/alpha by Geant. I would have thought that moving them for processing would cause no issues…

I tried exactly your command on the command line but I am faced with the following errors:

root [0] hadd -f config300_alpha_6MeV_nucl_v6.root config300_alpha_6MeV_nucl.root
ROOT_prompt_0:1:8: error: expected ‘;’ after expression
hadd -f config300_alpha_6MeV_nucl_v6.root config300_alpha_6MeV_nucl.root
^
;
ROOT_prompt_0:1:9: error: use of undeclared identifier ‘config300_alpha_6MeV_nucl_v6’
hadd -f config300_alpha_6MeV_nucl_v6.root config300_alpha_6MeV_nucl.root

Did I miss something?

Thanks

Also on the following point:

As of now I have the following code Write_histos2.C (4.9 KB)
which reads the initial file and writes a new one, copying all the previous data to it. This is not optimal, but I am also facing an issue with the deletion/clearing.
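The copy part of the macro is essentially doing something like this (a simplified sketch, not the exact code from Write_histos2.C; the output file name is a placeholder):

   #include "TFile.h"
   #include "TTree.h"

   TFile *in  = TFile::Open("config300_alpha_6MeV_nucl.root", "READ");
   TFile *out = TFile::Open("config300_alpha_6MeV_nucl_copy.root", "RECREATE");

   TTree *t = nullptr;
   in->GetObject("Landau", t);
   if (t) {
      out->cd();
      TTree *copy = t->CloneTree(-1);   // full copy of the input tree into the new file
      copy->Write();
   }

   // ... create and write the histograms into "out" here ...

   out->Close();
   in->Close();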

As I have several files to process, I wanted to check by running the code twice on the same file, but I am faced with:

  • a double free error in the TH1F** case,
  • a vector that is not cleared in the vector case, and I am not sure how to clear it. Also, when using a vector of TH1F* I get this error when running the same script a second time:

IncrementalExecutor::executeFunction: symbol ‘ZSt34__uninitialized_move_if_noexcept_aIPP4TH1FS2_SaIS1_EET0_T_S5_S4_RT1’ unresolved while linking [cling interface function]!
You are probably missing the definition of TH1F** std::__uninitialized_move_if_noexcept_a<TH1F**, TH1F**, std::allocator<TH1F*> >(TH1F**, TH1F**, TH1F**, std::allocator<TH1F*>&)
Maybe you need to load the corresponding shared library?
IncrementalExecutor::executeFunction: symbol ‘_ZSt8_DestroyIPP4TH1FS1_EvT_S3_RSaIT0_E’ unresolved while linking [cling interface function]!
You are probably missing the definition of void std::_Destroy<TH1F**, TH1F*>(TH1F**, TH1F**, std::allocator<TH1F*>&)
Maybe you need to load the corresponding shared library?
IncrementalExecutor::executeFunction: symbol ‘ZN9__gnu_cxxmiIPP4TH1FSt6vectorIS2_SaIS2_EEEENS_17__normal_iteratorIT_T0_E15difference_typeERKSA_SD’ unresolved while linking [cling interface function]!
You are probably missing the definition of __gnu_cxx::__normal_iterator<TH1F**, std::vector<TH1F*, std::allocator<TH1F*> > >::difference_type __gnu_cxx::operator-<TH1F**, std::vector<TH1F*, std::allocator<TH1F*> > >(__gnu_cxx::__normal_iterator<TH1F**, std::vector<TH1F*, std::allocator<TH1F*> > > const&, __gnu_cxx::__normal_iterator<TH1F**, std::vector<TH1F*, std::allocator<TH1F*> > > const&)
Maybe you need to load the corresponding shared library?

In the code provided above I have both solutions: the vector solution active and the TH1F** solution commented out.
Since you suggested those changes, could you explain to me what I am doing wrong in these implementations?

Thank you in advance for your help

hadd is a command-line tool :slight_smile: Call it from your shell (outside of root.exe).

Indeed at creation the files are created … by Geant.

Right … it sounds like the Geant library that re-implements ROOT I/O has a bug in the way it writes the keys (the field ‘fNbytesName’ being way longer than it is supposed to be). Once we are fully done here, you may want to report the deficiency to them …


To rerun a script (after modifying it) you would need ROOT v6.20.

something like:

for(auto ptr : hParticle)
   delete ptr;
hParticle.clear();

Hmm … you already do the delete, so just the hParticle.clear().


Well, it seems it works perfectly!
Thank you very much for your help! :clap:
I will try and see on the Geant forum whether my issue is due to my coding or to some issue of Geant in encoding ROOT files (which I doubt).

Thanks!

some issue of Geant in encoding ROOT files (which I doubt).

The main problem IS within the encoding of the file meta-data done during writing, and thus likely on their side (I doubt many people update ROOT files generated by Geant).

by the way, which Geant version are you using?

Geant4 10.5, which is not the latest version; a new one just came out.
