I’m having trouble reading a file larger than 2 GB. I found some users who reported a similar error, but their solutions haven’t helped. I’m running ROOT 6.06, which I compiled myself, and I have a ROOT file produced as output from Geant4. I used hadd to combine 16 individual ROOT files, one from each Geant4 worker thread, into one final file that is 2.82 GB in size.
When I open a TBrowser and try to view a histogram of any of the ntuples saved in the file, I get the following error:
Error in <TBranch::GetBasket>: File: gammabgmicro.root at byte:-2147381916, branch:Energy_keV, entry:64530902, badread=0, nerrors=1, basketnumber=16143
Error in <TBranch::GetBasket>: File: gammabgmicro.root at byte:-2147381916, branch:Energy_keV, entry:64530910, badread=0, nerrors=9, basketnumber=16143
file probably overwritten: stopping reporting error messages
===>File is more than 2 Gigabytes
I observe the same error when reading the data with PyROOT, where I loop over the ntuple entry by entry and place the values into an array in memory.
It seems that a 32-bit integer might be wrapping around and stopping me from reading beyond a certain limit (the reported byte offset is suspiciously close to -2^31). Am I missing something when trying to read files larger than 2 GB in ROOT? Do I need to change a compilation flag to get 64-bit file support? When I call TFile->GetVersion() I get 1060414, which I expected meant the file was opened with support for large files enabled, but I am not familiar enough with the ROOT internals to be sure.
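The sign of the reported byte offset is indeed consistent with a 32-bit wraparound. A quick sanity check in plain Python (the "true" offset below is a hypothetical value chosen to reproduce the number in the error message, not something read from the file):

```python
import struct

def to_int32(n):
    """Reinterpret an unsigned byte offset as a signed 32-bit integer,
    the way a wrapped 32-bit seek position would appear."""
    return struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))[0]

# Hypothetical true offset of the failing basket, just past the 2 GiB mark:
true_offset = 2147585380
print(to_int32(true_offset))  # -> -2147381916, matching the error message
```

So a basket sitting only ~100 kB past the 2 GiB boundary would show up at exactly the negative offset in the error above if some code path still uses a 32-bit seek value.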
Or should I just try and use smaller files?
Thanks in advance for your help.
You could try creating a TChain over all your small files instead of merging them into a single big one.
Files larger than 2 GB have been supported for a very long time.
On what OS are you running? Is it a 32-bit version? How did you build ROOT (what configuration string)?
Thanks for the quick replies. I’m running 64-bit Kubuntu; I built ROOT through the CMake GUI using the standard build options with builtin freetype. I’m wondering if I missed selecting something there?
I’ve tried opening the same file on a CentOS 7 virtual machine and again, opening the file in the TBrowser gives the same error when I try to read the ntuples. I used the virtual machine provided by the CENBG here geant4.in2p3.fr/spip.php?rubrique8
I’ve noticed this across multiple ROOT files generated in the same way (Geant4 multithreaded -> hadd -> file bigger than 2 GB).
Happy to run any diagnostics you could suggest.
I am not familiar with the VM you mention.
On the other hand, it’s unexpected to see this problem on 64-bit Kubuntu.
Could you share the file so that we can inspect it? The message “file probably overwritten: stopping reporting error messages” simply means that repeated errors occurred during basket retrieval, so further reporting was suppressed.
Is the error present only after hadd-ing the files, or also when reading them individually, e.g. via a TChain?
Sorry it took a while to get back to you; I had some difficulties uploading the file. Here is the ROOT file I am having difficulty reading:
drive.google.com/file/d/0Bz7mIj … sp=sharing
I am unable, for example, to open the file in the TBrowser, go to the tuples directory, and read entries in the CellInterior directory.
Thanks for your help.
In addition to the resulting file, could you post the original input files (this is likely a flaw in either the file generation or in hadd)? It would also be interesting to run the following test: does hadd-ing each individual input file to itself multiple times also produce corrupted output? For example, if your input files are a1.root, a2.root, etc., and each one is 0.5 GB, then
hadd -f fullfile.root a1.root a2.root a3.root a4.root a5.root
should reproduce the failing file you posted, and the question is whether one or more of the following onlya?.root files also fails:
hadd -f onlya1.root a1.root a1.root a1.root a1.root a1.root
hadd -f onlya2.root a2.root a2.root a2.root a2.root a2.root
hadd -f onlya3.root a3.root a3.root a3.root a3.root a3.root
hadd -f onlya4.root a4.root a4.root a4.root a4.root a4.root
hadd -f onlya5.root a5.root a5.root a5.root a5.root a5.root
Typically I delete the component files after running hadd. I’ll generate some more data and run the test you suggest, but I likely won’t have the files ready until tomorrow or Friday. I’ll post them as soon as I have them.
Thanks again for your time
I’ve tried what you suggested and found the same error. This time I ran a different simulation, to check whether the problem was specific to my original simulation, but the error is the same.
I ran my simulation on 16 threads. Then, test 1:
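A sketch of this step, assuming the Geant4 worker files follow the usual name_tN.root pattern (simulation_t2.root is the name used later in this post; the output name is made up):

```shell
# test 1 (assumed command): merge all 16 per-thread output files
hadd -f combined.root simulation_t*.root
```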
This produces a combined file that triggers the 2 GB read error when the last ntuple is loaded. (Interestingly, the first ntuple reads fine, probably because it sits less than 2 GB into the file.)
Test 2: Make a 2.6GB file via:
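A sketch of this step, following the self-hadd pattern suggested earlier; the file names onlyt2.root and simulation_t2.root appear later in this post, but the exact repetition count is an assumption:

```shell
# test 2 (assumed command): hadd one worker file to itself until the
# output passes 2 GB
hadd -f onlyt2.root simulation_t2.root simulation_t2.root simulation_t2.root simulation_t2.root simulation_t2.root
```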
Same error occurs
The same error occurs. I verified that simulation_t2.root can be read correctly by itself.
Here are links to onlyt2.root and the simulation file (macro_t2.root), which should provide a minimal reproduction of the problem. If it helps, a colleague mentioned to me yesterday that he recently hit the same problem with a Geant4-produced file larger than 2 GB.
onlyt2.root: drive.google.com/file/d/0Bz7mIj … sp=sharing
macro_t2.root: drive.google.com/file/d/0Bz7mIj … sp=sharing
Thanks for your help.
Is there an update here on what is causing the error? Another colleague is reporting the same error on ROOT v5.34/32 whenever their ntuples exceed 2GB.
I am able to reproduce this problem. I am investigating.
There is something ‘fishy’ about these files: some internal state is not as expected for any file produced by a version of ROOT newer than v4-01-02 (circa 2004).
Can you let me know how you configure Geant4 and how to produce those files?
Thanks for checking that out. As far as I know I am running Geant4 with no modifications to the ROOT settings. The colleague who found the same error compiled Geant4 independently of me and, as far as I know, he also doesn’t do anything out of the ordinary.
In both cases, we have run Geant4 and then saved a significant quantity of data in ntuples (rather than histograms), following standard Geant4 procedures.
As best I can work out, the relevant class for defining a ROOT file in the Geant4 analysis suite is this one:
www-geant4.kek.jp/lxr/source/ana … wroot/file
And the TKey is defined in this file:
www-geant4.kek.jp/lxr/source/ana … /wroot/key
I’m not entirely sure what to look for, but as far as I can see, the file code only handles ROOT files in their < 2 GB form. Here is the summary of the ROOT file structure as found in g4tools:
// A ROOT file is a suite of consecutive data records with the following
// format (see also the TKey class);
// TKey ---------------------
// byte 1->4 Nbytes = Length of compressed object (in bytes)
// 5->6 Version = TKey version identifier
// 7->10 ObjLen = Length of uncompressed object
// 11->14 Datime = Date and time when object was written to file
// 15->16 KeyLen = Length of the key structure (in bytes)
// 17->18 Cycle = Cycle of key
// 19->22 SeekKey = Pointer to record itself (consistency check)
// 23->26 SeekPdir = Pointer to directory header
// 27->27 lname = Number of bytes in the class name
// 28->.. ClassName = Object Class Name
// ..->.. lname = Number of bytes in the object name
// ..->.. Name = lName bytes with the name of the object
// ..->.. lTitle = Number of bytes in the object title
// ..->.. Title = Title of the object
// -----> DATA = Data bytes associated to the object
// The first data record starts at byte fBEGIN (currently set to kBegin)
// Bytes 1->kBegin contain the file description:
// byte 1->4 "root" = Root file identifier
// 5->8 fVersion = File format version
// 9->12 fBEGIN = Pointer to first data record
// 13->16 fEND = Pointer to first free word at the EOF
// 17->20 fSeekFree = Pointer to FREE data record
// 21->24 fNbytesFree = Number of bytes in FREE data record
// 25->28 nfree = Number of free data records
// 29->32 fNbytesName = Number of bytes in TNamed at creation time
// 33->33 fUnits = Number of bytes for file pointers
// 34->37 fCompress = Zip compression level
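As a quick check of my reading of that layout, here is a small Python sketch that builds and re-parses a synthetic header following the byte positions above (big-endian, as ROOT files are written in network byte order; all field values are invented for illustration):

```python
import struct

# Pack a synthetic ROOT-file-style header per the layout quoted above.
header = b"root"                       # bytes 1-4:  file identifier
header += struct.pack(">i", 60604)     # 5-8:   fVersion
header += struct.pack(">i", 100)       # 9-12:  fBEGIN
header += struct.pack(">i", 1000)      # 13-16: fEND
header += struct.pack(">i", 0)         # 17-20: fSeekFree
header += struct.pack(">i", 0)         # 21-24: fNbytesFree
header += struct.pack(">i", 1)         # 25-28: nfree
header += struct.pack(">i", 58)        # 29-32: fNbytesName
header += struct.pack(">b", 4)         # 33:    fUnits (bytes per pointer)
header += struct.pack(">i", 1)         # 34-37: fCompress

magic = header[0:4]
version, begin, end = struct.unpack(">iii", header[4:16])
units = header[32]
print(magic, version, begin, end, units)
```

Note that in ROOT’s large-file variant the file version has 1000000 added and fUnits becomes 8, widening the pointer fields to 64 bits; the 1060414 reported by GetVersion() earlier in this thread (1000000 + 60414) is consistent with a large-file header.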
Does this help, or is there anything else I should check? If the issue isn’t in ROOT, I’ll raise the problem in the Geant4 developers’ forum.
The problem is in http://www-geant4.kek.jp/lxr/source/analysis/g4tools/include/tools/wroot/basket where the author did not carry over the rule that fVersion (m_version in his case) must always be greater than 1000 for baskets (and the corresponding code/support is missing in write_on_file).
That version offset marks the baskets as being in the ‘file-larger-than-2GB’ mode, in which they can easily be copied (without decompression, unstreaming, streaming, or re-compression) from one file to another.
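To illustrate the convention at issue (a simplified sketch, not the real TKey code): a reader decides the width of the seek fields from the key version, so a basket written with a small version number can only describe 32-bit offsets:

```python
import struct

def pack_seeks(version, seek_key, seek_pdir):
    """Pack the two seek fields of a simplified TKey-like record.
    Per the ROOT convention, version > 1000 means 64-bit seeks."""
    fmt = ">qq" if version > 1000 else ">ii"
    return struct.pack(">h", version) + struct.pack(fmt, seek_key, seek_pdir)

def unpack_seeks(buf):
    version, = struct.unpack(">h", buf[:2])
    fmt = ">qq" if version > 1000 else ">ii"
    return struct.unpack(fmt, buf[2:])

big_offset = 3 * 1024**3                 # an offset past the 2 GiB limit

# version > 1000: the big offset round-trips correctly
print(unpack_seeks(pack_seeks(1003, big_offset, 0))[0])

# version <= 1000: a > 2 GiB offset does not fit in the 32-bit field
try:
    pack_seeks(3, big_offset, 0)
except struct.error as e:
    print("32-bit seek overflow:", e)
```

This is why files under 2 GB read fine while any basket past the boundary produces a negative byte offset.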
Non-trivial code could be added to ROOT to support this case, but I’ll rather avoid adding this complexity if possible.
Note that the file can be fixed and/or merged into files larger than 2 GB by using hadd’s ‘slow’ merging method:
[code]
hadd -O -f macro_t2_fixed.root macro_t2.root
hadd -O -f output.root onlyt2.root macro_t2.root macro_t2.root macro_t2.root
[/code]
(The -O option requests re-optimization of the basket sizes and thus forces the code to go through decompression, unstreaming, streaming, and re-compression; the resulting file is slightly smaller in your case.)
Thanks for the feedback. I can confirm that merging with the -O option does work.
wroot_fix.tar.gz (3.31 KB)
Hi,
The problem should be fixed with the update attached to this thread.
Could you please try it out (untar the file in the Geant4 top directory and rerun make and make install) and let us know if the fix works in your use case?
The fix will be available in the next Geant4 version as well as in the next patch to the 10.2 release.
If you find any other incompatibilities between files written from Geant4 and the standard ROOT file format, please report the problem in the Geant4 bug report system rather than in the ROOT forum.