Segmentation violation when using TH1D and ofstream

I ran into a difficult problem while using ROOT to bin data and export it to a txt file: I repeatedly get a “segmentation violation” error. I have checked and confirmed that TFile::Open and the TTree retrieval succeed. For smaller files I can retrieve the bin centers and bin contents and write them out successfully, but the macro crashes on exit; for larger files it crashes partway through writing, with the same error. I cannot understand the cause of this problem. The code I am using is as follows:

Original code:

#include <TFile.h>
#include <TTree.h>
#include <TH1D.h>
#include <TF1.h>
#include <TCanvas.h>
#include <TGraph.h>
#include <fstream>
#include <string>

void genHistogram() {
    const char* fileNames[] = {"Be7.root","pep.root","N13.root","O15.root","Pb210.root","U238.root","Th232.root","Pb210.root","K40.root","Kr85.root","C10.root","C11.root","He6.root"};
    for (int i = 0; i < sizeof(fileNames) / sizeof(fileNames[0]); i++) {
        TFile* file = TFile::Open(fileNames[i]);
        if (!file || file->IsZombie()) {
            printf("Error opening file: %s\n", fileNames[i]);
            continue;
        }
        TTree* tree = (TTree*)file->Get("evt");
        if (!tree) {
            printf("TTree 'evt' not found in file: %s\n", fileNames[i]);
            continue;
        }
        TH1D* hist = new TH1D("hist", "depositenergy", 200, 0, 3);
        double N = hist->Integral();
        string outputname = "output_" + string(fileNames[i]) + ".txt";
        ofstream output(outputname);
        for (int j = 1; j <= hist->GetNbinsX() - 1; ++j) {
            output << hist->GetBinCenter(j) << " " << hist->GetBinContent(j) << endl;
        }
        delete hist;
        delete tree;
        file->Close();
    }
}

Error Message:

 *** Break *** segmentation violation

There was a crash.
This is the entire stack trace of all threads:
#0  0x00007f60279e460c in waitpid () from /lib64/
#1  0x00007f6027961f62 in do_system () from /lib64/
#2  0x00007f60284cb3fc in TUnixSystem::StackTrace() () from /usr/lib64/root/
#3  0x00007f60284cd31a in TUnixSystem::DispatchSignals(ESignals) () from /usr/lib64/root/
#4  <signal handler called>
#5  0x00007f6027ce6b98 in main_arena () from /lib64/
#6  0x00007f6028c58767 in ?? ()
#7  0x0000000001ef0cf0 in ?? ()
#8  0x00007f6028a8d9ea in _dl_runtime_resolve_xsave () from /lib64/

The lines below might hint at the cause of the crash.
You may get help by asking at the ROOT forum
Only if you are really convinced it is a bug in ROOT then please submit a
report. Please post the ENTIRE stack trace
from above as an attachment in addition to anything else
that might help us fix this issue.
#5  0x00007f6027ce6b98 in main_arena () from /lib64/
#6  0x00007f6028c58767 in ?? ()
#7  0x0000000001ef0cf0 in ?? ()
#8  0x00007f6028a8d9ea in _dl_runtime_resolve_xsave () from /lib64/

ROOT Version: 6.24/08
Platform: CentOS
Compiler: just root

Hi @KevinH ,

I think the problem is that the TFile takes ownership of the trees and histograms created while it is open, so file->Close() (or the TFile destructor) deletes them. You should not delete them explicitly yourself: the second deletion frees memory that has already been freed, which is what causes the segmentation violation.

You can verify this is what is happening by running the reproducer under valgrind, after compiling it with debug symbols (and ideally against a ROOT build that also has debug symbols).


More details about TFile ownership are at Object ownership - ROOT

However, you do need to call delete file; otherwise you leak the (closed) TFile object itself.


You can make all the memory management automatic by using a unique_ptr for the TFile and allocating the histogram on the stack:

std::unique_ptr<TFile> file(TFile::Open(fileNames[i]));
/* ... */
// tree is owned by file
TTree* tree = file->Get<TTree>("evt");
/* ... */
TH1D hist("hist", "depositenergy", 200, 0, 3);
/* ... */
// no need to call file->Close (it's called by its destructor)
// no need to `delete tree` (it's owned by the file which will take care of destructing it)
// no need to `delete hist` (it's stack-allocated)

Thanks for your help! I have fixed it, and that was exactly the problem.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.