Make built objects survive the destruction of the interpreter

Hi, I’m building an application with an embedded TInterpreter which I use for application configuration via a Root macro. This macro creates objects which are passed to an object manager living in the host application. To be more clear, this is the relevant host code:

ObjectsManager objManager;
TInterpreter *interp = TInterpreter::Instance();
interp->ProcessLine("#include \"ObjectsManager.h\"");
std::stringstream ss;
ss << "ObjectsManager &objManager = *(ObjectsManager*)" << &objManager << ";";
interp->ProcessLine(ss.str().c_str()); // make objManager visible inside the interpreter
interp->Execute("Configure", "");

and this is the macro:

#include "MyObject.h"
void Configure(){
  objManager.AddObject(new MyObject);
}

Everything seems to work fine and the ObjectsManager in the host application can access the MyObject built in the macro. At this point I’d like to delete the interpreter, since configuration is done and I don’t need it anymore; but if I delete interp, I get a segfault as soon as I access the MyObject through the ObjectsManager. I guess that when the TInterpreter is deleted it frees all the memory allocated inside the macro, so the ObjectsManager is left with a dangling pointer.
My question is: is it possible to allocate memory in the macro and make it survive the deletion of the interpreter?

I guess that when TInterpreter is deleted it frees all the memory allocated inside the macro

At most it should delete only memory allocated on the stack and/or for global objects; neither is the case in the example you showed.

As a first step, I would run your example under valgrind to get more information on where/how the ObjectsManager’s information was ‘corrupted’.


I’ve nailed down the problem, and it’s a bit more complicated than the simplified code in my original post. The segfault is triggered by these conditions:

  • A base class (MyObject) with a virtual method (GetName)
  • A derived class (DerivedObject) declared in the script, which does not override the virtual method
  • An instance of DerivedObject is built in the macro and passed back to the host application as a MyObject*
  • The interpreter is deleted
  • The virtual method is called from the host application using the MyObject* to the DerivedObject

Any of the following makes the segfault vanish:

  • Declare the method as non-virtual
  • Do not delete the interpreter
  • Build a MyObject in the script instead of a DerivedObject

I don’t clearly understand where the problem is, nor whether this is a bug. I am not able to debug this myself, but I put together a minimal demonstrator in case someone is willing to give it a look (it has some hard-coded paths, so it must be run from the folder containing the source files).

reproducer.tar.gz (1.1 KB)

If it really is a bug I will report it on JIRA; please let me know.

A derived class (DerivedObject) declared in the script; which does not override the virtual method

This means that auxiliary information, such as the virtual table, is allocated by the interpreter and goes away when the interpreter goes away. In this context, deleting the interpreter is essentially the same as unloading the shared library implementing the class DerivedObject.

In short, if you declare any classes or functions inside the interpreter, you need to keep it alive as long as you need access to those classes and functions.


Thanks Philippe, this makes a lot of sense (very simple, once someone explains the whole story to you :slightly_smiling_face:).

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.