Importing/binding standard C++ data types on the Python side

Dear all,

I have a simple question about passing C++ defined variables to PyROOT.
How can I bind standard C++ data types (e.g. std::string, double, array/std::vector…) so that they are available on the Python side?

I have a C++ program and I’m able to pass my user class (say MyClass) using the TPython::Bind method:

MyClass* myclass = new MyClass;
TPython::Bind(myclass, "myImportedClass");

// Then I'm able to use that class within Python, e.g. calling its methods:
TPython::Exec("myImportedClass.method()");

but what is the suggested way to import, say, a string or an array?
For example:

std::string s = "my string to be imported...";
// import s here as myImportedString...
TPython::Exec("print myImportedString");

In the case of arrays/vectors, is there a way to avoid copying when passing them to Python?

I’m not very familiar with Python programming, so I’m pretty sure I’m missing something here.

Many thanks for your help,

Simone

PS: I’m using ROOT v6.06 (Ubuntu 14.04), Python 2.7.6

C++ program:[code]#include "Python.h"
#include "TPython.h"
#include <string>

int main() {
   std::string s = "my string to be imported...";
   PyObject* pystr = TPython::ObjectProxy_FromVoidPtr(&s, "std::string");
   PyObject* pymain = PyImport_ImportModule("__main__");
   PyModule_AddObject(pymain, "myImportedString", pystr);
   Py_DECREF(pymain);
   TPython::Exec("print myImportedString");
   return 0;
}[/code]
cling:[code]root [0] std::string myImportedString = "my string to be imported...";
root [1] TPython::Exec("print ROOT.myImportedString");
my string to be imported...
root [2][/code]

Everything is passed by pointer; nothing is copied. Do note the lifetime issue, though: the C++ object has to stay alive for as long as Python uses it.
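
To make the lifetime point concrete, here is a minimal sketch of the pitfall (the names are just for illustration):

[code]#include "Python.h"
#include "TPython.h"
#include <string>

void bindTemporary() {
   std::string s = "short lived";                                       // local object
   PyObject* pystr = TPython::ObjectProxy_FromVoidPtr(&s, "std::string");
   PyModule_AddObject(PyImport_AddModule("__main__"), "tmp", pystr);    // the proxy stores only &s
}                                                                       // s is destroyed here

int main() {
   bindTemporary();
   TPython::Exec("print tmp");   // undefined behaviour: "tmp" now points at a dead string
   return 0;
}[/code]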

-Dom

Dear Dominique,

many thanks for your suggestions. It worked fine. I also managed to import a 2D std::vector:

PyObject* pyvec2d= TPython::ObjectProxy_FromVoidPtr(&v,"std::vector< std::vector<float> >");
PyModule_AddObject(pymain, "mat", pyvec2d);
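
A quick check from Python (relying on PyROOT's usual std::vector pythonizations) should show the proxy can be used like a nested sequence:

[code]TPython::Exec("print len(mat), len(mat[0]), mat[0][0]");[/code]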

I also found another method while googling:

npy_intp vec_sizes[2] = {ncols,nrows};
PyObject* pyvec2d= PyArray_SimpleNewFromData(2, vec_sizes, NPY_FLOAT32, reinterpret_cast<void*>(v.data()));

but this fails with a segmentation fault, so I will follow your approach.
Sorry to add another question: what is the suggested way to convert imported arrays or vectors into NumPy arrays? I’ve seen someone cast a PyObject to a PyArrayObject, but what comes after that?

PyArrayObject* numpyArray = reinterpret_cast<PyArrayObject*>(pyvec2d);

Many thanks again for the support,

Simone

[quote="Simone.Riggi"]npy_intp vec_sizes[2] = {ncols,nrows};
PyObject* pyvec2d= PyArray_SimpleNewFromData(2, vec_sizes, NPY_FLOAT32, reinterpret_cast<void*>(v.data()));
but this fails with a segmentation fault[/quote]
That API call wants a contiguous C array, but v.data() points at the payload of the outer std::vector, i.e. at the inner std::vector objects themselves (their bookkeeping plus a pointer to their own payload), so the float data are not contiguous. This is a general issue: NumPy arrays, Python arrays, etc. can all be segmented (which is good for parallel, distributed, and heterogeneous computing), so you always need to take care.
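
A quick way to see this (a standalone illustration, nothing ROOT-specific) is to print the addresses: the end of row 0 is in general not the start of row 1, and v.data() does not point at floats at all:

[code]#include <cstdio>
#include <vector>

int main() {
   std::vector<std::vector<float> > v(2, std::vector<float>(3, 1.f));
   std::printf("end of row 0: %p\nstart of row 1: %p\n",
               (void*)(v[0].data() + v[0].size()), (void*)v[1].data());
   // v.data() has type std::vector<float>*, i.e. it points at the two inner
   // vector objects (their bookkeeping), not at a contiguous float payload.
   return 0;
}[/code]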

If you want to change the memory layout, there is no way around it: you need to copy the data. That need not be bad: access to a flat array is much faster than to a vector of vectors, so you pay a big one-time price for the copy and get a small gain on every access, which works out well if there are many accesses.
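
For example, a flattening copy on the C++ side could look like this (a sketch; the function name is mine). Once the floats sit in one contiguous buffer, flat.data() is something an API like PyArray_SimpleNewFromData can be pointed at, provided the buffer outlives the array:

[code]#include <vector>

// Copy a vector of vectors into one contiguous buffer (row-major).
std::vector<float> flatten(const std::vector<std::vector<float> >& v) {
   std::vector<float> flat;
   flat.reserve(v.size() * (v.empty() ? 0 : v[0].size()));
   for (size_t i = 0; i < v.size(); ++i)
      flat.insert(flat.end(), v[i].begin(), v[i].end());   // copy one row at a time
   return flat;                                            // flat.data() is contiguous
}[/code]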

Notes: for a 1D std::vector, data() is contiguous, and for a 2D vector of std::array, data() is contiguous as well. The issue is specifically a vector of vectors.

For some Python background: every Python object is a C struct consisting of a basic header plus a payload, and there are only three styles of basic header. One of them is PyObject. That keeps things simple: a custom type whose struct starts with a PyObject header can be cast to PyObject* and used with any of the generic Python C API, as long as you never touch the payload!
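
For example, any PyObject* can be queried through the generic C API, which only reads that shared header (a small sketch):

[code]#include "Python.h"

void inspect(PyObject* obj) {
   // Both macros read only the common header: the type pointer and the refcount.
   printf("type = %s, refcount = %ld\n", Py_TYPE(obj)->tp_name, (long)Py_REFCNT(obj));
}[/code]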

This:

PyArrayObject* numpyArray = reinterpret_cast<PyArrayObject*>(pyvec2d);

is only okay if pyvec2d actually is a PyArrayObject. You need to check the type (it sits in the PyObject header); without that check you are asking for a segfault.
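
A sketch of such a check, assuming the NumPy C API is available (numpy/arrayobject.h included and import_array() called at startup):

[code]#include "Python.h"
#include "numpy/arrayobject.h"

PyArrayObject* asArray(PyObject* obj) {
   if (!PyArray_Check(obj)) {                        // inspects the type stored in the header
      PyErr_SetString(PyExc_TypeError, "not a numpy array");
      return 0;
   }
   return reinterpret_cast<PyArrayObject*>(obj);     // now the cast is safe
}[/code]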

But! Here pyvec2d is an ObjectProxy wrapping a C++ object, not a PyArrayObject. The two types have different payloads, so they cannot be mixed. Again, if you want a different memory layout, you need to copy the data from the old layout into the new one.

There are two ways to copy. A copy on the Python side is slow but always works. A copy on the C++ side is fast but only works if the types are known at compile time. Here you do have the types, which argues for the C++ copy.
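
A minimal sketch of that C++-side copy (my own function name; it assumes all inner vectors have the same length and that import_array() has been called): allocate a NumPy-owned 2D array and fill it row by row:

[code]#include "Python.h"
#include "numpy/arrayobject.h"
#include <cstring>
#include <vector>

PyObject* vector2dToNumpy(const std::vector<std::vector<float> >& v) {
   npy_intp dims[2] = { (npy_intp)v.size(), (npy_intp)(v.empty() ? 0 : v[0].size()) };
   PyObject* arr = PyArray_SimpleNew(2, dims, NPY_FLOAT32);               // NumPy owns this buffer
   if (!arr) return 0;
   float* dst = (float*)PyArray_DATA((PyArrayObject*)arr);
   for (npy_intp i = 0; i < dims[0]; ++i)
      std::memcpy(dst + i * dims[1], v[i].data(), dims[1] * sizeof(float));  // one row per memcpy
   return arr;   // e.g. PyModule_AddObject(pymain, "mat_np", arr);
}[/code]

An array built this way owns its memory, so there is no lifetime tie to the original vectors.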

-Dom