pyROOT TMultiGraph Segmentation Fault

I’ve run into a segmentation fault when using TMultiGraph in Python. Here is the smallest example that reproduces the error:

import numpy
import ROOT

def plot():
    energies = numpy.logspace(-3, 1, num=5)
    data1 = numpy.array([10,20,304,50,60])

    graph = ROOT.TMultiGraph()
    graph.Add(ROOT.TGraph(len(energies), energies, data1))

    # Adding this line causes a seg fault.
    graph.GetListOfGraphs().At(0).SetTitle("data 1")

    canvas = ROOT.TCanvas("canvas")
    graph.Draw("AL")

# Removing these lines also seems to resolve the issue
if __name__ == "__main__":
    plot()

Produces this output:

$ python test.py                                                                                                                           
Segmentation fault: 11   

I also get this nice Problem Report which I have truncated as the last six lines repeat over and over again:

Process:               Python [66998]
Path:                  /opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier:            org.python.python
Version:               2.7.13 (2.7.13)
Code Type:             X86-64 (Native)
Parent Process:        bash [61674]
Responsible:           Python [66998]
User ID:               503

Date/Time:             2017-09-19 11:23:16.059 -0600
OS Version:            Mac OS X 10.12.6 (16G29)
Report Version:        12
Anonymous UUID:        [omitted]


Time Awake Since Boot: 2800000 seconds

System Integrity Protection: enabled

Crashed Thread:        0  Dispatch queue: com.apple.main-thread

Exception Type:        EXC_BAD_ACCESS (SIGSEGV)
Exception Codes:       KERN_PROTECTION_FAILURE at 0x00007fff5a57eff4
Exception Note:        EXC_CORPSE_NOTIFY

Termination Signal:    Segmentation fault: 11
Termination Reason:    Namespace SIGNAL, Code 0xb
Terminating Process:   exc handler [0]

VM Regions Near 0x7fff5a57eff4:
    MALLOC_SMALL           00007fccc1000000-00007fccc1800000 [ 8192K] rw-/rwx SM=PRV  
--> STACK GUARD            00007fff56d7f000-00007fff5a57f000 [ 56.0M] ---/rwx SM=NUL  stack guard for thread 0
    Stack                  00007fff5a57f000-00007fff5ad72000 [ 8140K] rw-/rwx SM=COW  thread 0

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   libsystem_malloc.dylib        	0x00007fffbb8f830c szone_malloc_should_clear + 42
1   libsystem_malloc.dylib        	0x00007fffbb8f8282 malloc_zone_malloc + 107
2   libsystem_malloc.dylib        	0x00007fffbb8f7200 malloc + 24
3   libc++abi.dylib               	0x00007fffba384e0e operator new(unsigned long) + 30
4   libCore.so                    	0x0000000106709857 TList::MakeIterator(bool) const + 23 (TList.cxx:970)
5   libCore.so                    	0x00000001067070e6 THashList::RecursiveRemove(TObject*) + 70 (TCollection.h:232)
6   libCore.so                    	0x000000010669c6cc TObject::~TObject() + 76 (TObject.cxx:93)
7   libHist.so                    	0x0000000108818b1b TMultiGraph::RecursiveRemove(TObject*) + 27 (TMultiGraph.cxx:1450)
8   libCore.so                    	0x0000000106709a8d TList::RecursiveRemove(TObject*) + 77 (TList.cxx:742)
9   libGpad.so                    	0x0000000108cbff31 TPad::RecursiveRemove(TObject*) + 145 (TPad.cxx:5154)
10  libCore.so                    	0x0000000106709a8d TList::RecursiveRemove(TObject*) + 77 (TList.cxx:742)
11  libCore.so                    	0x0000000106707119 THashList::RecursiveRemove(TObject*) + 121 (THashList.cxx:286)
12  libCore.so                    	0x000000010669c6cc TObject::~TObject() + 76 (TObject.cxx:93)
13  libHist.so                    	0x0000000108818b1b TMultiGraph::RecursiveRemove(TObject*) + 27 (TMultiGraph.cxx:1450)
14  libCore.so                    	0x0000000106709a8d TList::RecursiveRemove(TObject*) + 77 (TList.cxx:742)
15  libGpad.so                    	0x0000000108cbff31 TPad::RecursiveRemove(TObject*) + 145 (TPad.cxx:5154)
16  libCore.so                    	0x0000000106709a8d TList::RecursiveRemove(TObject*) + 77 (TList.cxx:742)
17  libCore.so                    	0x0000000106707119 THashList::RecursiveRemove(TObject*) + 121 (THashList.cxx:286)
18  libCore.so                    	0x000000010669c6cc TObject::~TObject() + 76 (TObject.cxx:93)
19  libHist.so                    	0x0000000108818b1b TMultiGraph::RecursiveRemove(TObject*) + 27 (TMultiGraph.cxx:1450)
20  libCore.so                    	0x0000000106709a8d TList::RecursiveRemove(TObject*) + 77 (TList.cxx:742)
21  libGpad.so                    	0x0000000108cbff31 TPad::RecursiveRemove(TObject*) + 145 (TPad.cxx:5154)
22  libCore.so                    	0x0000000106709a8d TList::RecursiveRemove(TObject*) + 77 (TList.cxx:742)
23  libCore.so                    	0x0000000106707119 THashList::RecursiveRemove(TObject*) + 121 (THashList.cxx:286)
24  libCore.so                    	0x000000010669c6cc TObject::~TObject() + 76 (TObject.cxx:93)
25  libHist.so                    	0x0000000108818b1b TMultiGraph::RecursiveRemove(TObject*) + 27 (TMultiGraph.cxx:1450)
[Above 6 lines repeat 81 more times]
...

Am I incorrectly using TMultiGraph or is there a problem in the destructor?

I’ve discovered that the issue occurs when I try to use TMultiGraph::GetListOfGraphs(). I’m guessing this is a bug?

Is the bug you are seeing the same as the one recently fixed, maybe?
Please let us know if it still fails with the master branch of ROOT.

Does this segfault also appear if you don’t put it into a method?

import numpy as np
import ROOT

energies = np.logspace(-3, 1, num=5)
data1 = np.array([10, 20, 304, 50, 60])
graph = ROOT.TMultiGraph()
graph.Add(ROOT.TGraph(len(energies), energies, data1))
graph.GetListOfGraphs().At(0).SetTitle("data 1")

Because this runs without an error for me.

Alternatively, you can just set the title first:

import numpy as np
import ROOT

energies = np.logspace(-3, 1, num=5)
data1 = np.array([10, 20, 304, 50, 60])
graph = ROOT.TMultiGraph()

g = ROOT.TGraph(len(energies), energies, data1)
g.SetTitle("data 1")
graph.Add(g)

Sorry, I should have said earlier. I checked with both ROOT 6.10.04 and the master branch from yesterday.

No, it does not appear, but I would like to have it in a method.
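
For completeness, here is a minimal sketch of the kind of method I am after. It assumes (and this is only a guess, not something confirmed above) that the crash has to do with the Python-side objects being garbage collected when plot() returns, so it returns the canvas and graph to keep them alive and uses ROOT.SetOwnership to hand the TGraph over to the TMultiGraph:

import numpy as np
import ROOT

def plot():
    energies = np.logspace(-3, 1, num=5)
    data1 = np.array([10., 20., 304., 50., 60.])  # note: float values, see the dtype discussion below

    graph = ROOT.TMultiGraph()

    g = ROOT.TGraph(len(energies), energies, data1)
    g.SetTitle("data 1")
    # Guess: let the TMultiGraph own the TGraph so Python does not also try to delete it.
    ROOT.SetOwnership(g, False)
    graph.Add(g)

    canvas = ROOT.TCanvas("canvas")
    graph.Draw("AL")

    # Return the objects so they are not garbage collected when plot() returns.
    return canvas, graph

if __name__ == "__main__":
    canvas, graph = plot()
    raw_input("Waiting")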

This script opened a new can of worms. I tried running it and plotting the output, but I started to get errors, so I modified it slightly to see what was happening:

import numpy as np
import ROOT

energies = np.linspace(-3, 1, num=5)
data1 = np.array([10, 20, 304, 50, 60])
graph = ROOT.TMultiGraph()

g = ROOT.TGraph(len(energies), energies, data1)
g.SetTitle("data 1")

for p in range(g.GetN()):
    print p, g.GetX()[p], g.GetY()[p]
graph.Add(g)

canvas = ROOT.TCanvas("canvas")
graph.Draw("AL")
canvas.Update()
raw_input("Waiting")

The output:

0 -3.0 4.94065645841e-323
1 -2.0 9.88131291682e-323
2 -1.0 1.50195956336e-321
3 0.0 2.47032822921e-322
4 1.0 2.96439387505e-322
TCanvas::ResizePad:0: RuntimeWarning: cavnas height changed from 64000 to 10

Error in <TGaxis::PaintAxis>: length of axis is 0
Waiting

The Y values are completely wrong … I am not a Python expert, so I cannot diagnose what is wrong in your script, but it is clear the Y values of the graph are not “data1” …

Fix:

if you do

data1 = np.array([10., 20., 304., 50., 60.])

instead of:

data1 = np.array([10, 20, 304, 50, 60])

then the script works.
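
An equivalent workaround, if you would rather not rewrite the literals, is to cast the array explicitly so both arguments are float64; this is plain numpy and just the same fix expressed differently:

import numpy as np
import ROOT

energies = np.logspace(-3, 1, num=5)                        # already float64
data1 = np.array([10, 20, 304, 50, 60]).astype(np.float64)  # cast int64 -> float64

graph = ROOT.TMultiGraph()
g = ROOT.TGraph(len(energies), energies, data1)
g.SetTitle("data 1")
graph.Add(g)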


Ahh, OK, so TGraph doesn’t accept an ndarray with an integer dtype. Is this on purpose?

This still doesn’t resolve the original issue of the segmentation fault. Should I just consider TMultiGraph::GetListOfGraphs off limits?

TGraph accepts only float or double arrays: ROOT: TGraph Class Reference

What is this constructor then, TGraph::TGraph(Int_t n, const Int_t *x, const Int_t *y)?

Oops, yes … you are right … I never use it … sorry for that. So I do not know why it does not work with Python; I am not an expert on the Python interface.

No worries, I am far from an expert myself and have just recently started using it regularly.

@amadio Any suggestions on why TMultiGraph::GetListOfGraphs is leading to a segmentation fault or why one cannot create a TGraph from a numpy array of integers?

@ksmith No, sorry. I will try to use your script to reproduce the problem and debug what is happening. I will report back here when I find the answer. Cheers,

this should read:

energies = np.linspace(-3, 1, num=5, dtype="int32")
data1 = np.array([10, 20, 304, 50, 60], dtype="int32")

(otherwise the integer y-axis data is handed to the double* overload and its bytes are read as doubles, hence garbage values like 4.94065645841e-323)
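
For what it’s worth, those garbage values are exactly what you get by reinterpreting the int64 bytes of data1 as doubles, which supports this explanation (a quick numpy-only check, nothing ROOT-specific):

>>> import numpy as np
>>> for v in np.array([10, 20, 304, 50, 60]).view(np.float64):
...     print float(v)
... 
4.94065645841e-323
9.88131291682e-323
1.50195956336e-321
2.47032822921e-322
2.96439387505e-322

These are the same Y values the modified script printed above.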

Thanks for the idea. What is wrong with the default int64 type? Is the issue here that one array is floats and the other is ints?

$ python
Python 2.7.14 (default, Sep 22 2017, 00:06:07) 
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.42)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> energies = np.linspace(-3, 1, num=5)
>>> data1 = np.array([10, 20, 304, 50, 60])
>>> print energies.dtype, data1.dtype
float64 int64

yep.

and in that configuration, the double* overload is selected, whereas if you use int32/int32, the correct int* overload is chosen.

overloading is complicated :-) (it’s already complicated in C++, so when you add Python on top…)

having said that, it seems there is yet another issue when using float32/float32:

>>> make("float32","float32")
ene=[-3. -2. -1.  0.  1.]
dat=[  10.   20.  304.   50.   60.]
0 -1069547520.0 1092616192.0
1 -1073741824.0 1101004800.0
2 -1082130432.0 1134034944.0
3 0.0 1112014848.0
4 1065353216.0 1114636288.0

Interesting issue. Why always 32? Is there something wrong with the Python default of 64 bits (at least the default for me)?

My guess (not having seen exactly how the method dispatch was done in that particular case) is that in the "float32/float32" case, the dispatch selects the int* overload based on the size of the elements (4 bytes), and I suspect the int* overload is selected (in lieu of the float* one) because it’s the first that satisfies the “element-size” criterion.

The numbers printed with float32/float32 eerily look like floating-point values whose byte content has been interpreted as integers and then converted back to floats.
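
That guess checks out numerically. The make() helper is not shown above, but assuming it builds the two arrays in the obvious way, viewing the float32 bytes as int32 reproduces exactly the numbers printed:

>>> import numpy as np
>>> ene = np.linspace(-3, 1, num=5, dtype="float32")
>>> dat = np.array([10, 20, 304, 50, 60], dtype="float32")
>>> list(ene.view(np.int32))
[-1069547520, -1073741824, -1082130432, 0, 1065353216]
>>> list(dat.view(np.int32))
[1092616192, 1101004800, 1134034944, 1112014848, 1114636288]

This is consistent with the int* overload being picked for the float32 buffers and the resulting integers then being converted to doubles inside the TGraph.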

Indeed. The code looks for the ‘typecode’ variable (which exists in the old array interface), and when that fails, it falls back to a size match. Since that code was written, the numpy buffer interface and the memory views of Python 3 have come along. I’ve said many times that this should be fixed, but there has never been enough interest to warrant the work.

(For my own nefarious purposes, I’m reworking the buffer interface in my cppyy fork … it’s not that hard … )