Maybe it is an easy question, but I didn't find a way to access a C pointer through a Python numerical array.
For example, given a function:
double * giveArray(int n);
returning a pointer to an array of size n.
In PyROOT, is it possible to "cast" its result into an array (a numpy.array, actually) in order to easily perform operations on it?
The result returned from PyROOT is an ordinary Python buffer object (the only modification is the stride; by default, buffer objects are of type char*); AFAIK, numpy can handle those. The only problem that I can see is that the size of the double* array per se is unknown, so "casting" by the user is required to set the size.
I don't have a numpy installation handy, but I have anecdotal evidence that the reverse works (passing a numpy array through a double* by means of the buffer interface, which PyROOT also accepts). Can you just give it a try?
Without a size argument, the buffer size is by default set to the moral equivalent of INT_MAX, because no further information can be derived from a bare "double*" return type. A size of INT_MAX allows the user to index into the array as far as needed, regardless of the actual size, which is now the user's responsibility. And if numpy doesn't allow the developer to add this information, a crash seems a likely outcome.
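To illustrate the point about the size being the user's responsibility, here is a minimal sketch, with a plain ctypes allocation standing in for a pointer returned from C (the helper name give_array is hypothetical, not part of PyROOT): the pointer carries no length, so the user must supply it explicitly when building the numpy view.

```python
import ctypes
import numpy as np

# Hypothetical stand-in for a C function returning double*:
# allocate a C double array of size n and hand back a bare pointer.
def give_array(n):
    buf = (ctypes.c_double * n)(*[float(i) for i in range(n)])
    return ctypes.cast(buf, ctypes.POINTER(ctypes.c_double)), buf

ptr, _keepalive = give_array(5)   # keep the allocation alive ourselves

# The bare pointer knows nothing about its length; the user supplies
# the size when wrapping it as a numpy array (no copy is made).
a = np.ctypeslib.as_array(ptr, shape=(5,))
print(a)  # [0. 1. 2. 3. 4.]
```

Passing the wrong shape here would silently read past the allocation, which is exactly the crash risk described above.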
Are fixed-size arrays an option? When returned as part of a struct, their size is known from the class definition and hence properly handled.
Worst comes to worst, you can first copy the double* buffer into a Python array (from module array, of type 'd'), and fix the size during the build-up by using a tuned iterator. Something like (haven't tested it):[code]import array

def mybuf__iter__( self ):
   n = GetSizeFromSomeWhere()
   i = 0
   while i < n:
      yield self[i]
      i += 1

mybuf.__class__.__iter__ = mybuf__iter__
a = array.array( 'd', mybuf )
del mybuf.__class__.__iter__[/code]
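Since the snippet above can't be run without PyROOT, here is a self-contained sketch of the same trick, with a hypothetical FakeBuf class standing in for PyROOT's buffer object and get_size_from_somewhere standing in for whatever source of the true size the user has: patching __iter__ onto the class lets array.array copy exactly n entries.

```python
import array

# Hypothetical stand-in for PyROOT's buffer object: indexable, but the
# real length is unknown to Python (indexing past it would be undefined
# behaviour in the C case).
class FakeBuf:
    def __init__(self, data):
        self._data = data
    def __getitem__(self, i):
        return self._data[i]

def get_size_from_somewhere():
    return 4   # the true size, assumed known to the user

def mybuf_iter(self):
    n = get_size_from_somewhere()
    for i in range(n):
        yield self[i]

FakeBuf.__iter__ = mybuf_iter        # patch the class, as in the snippet
buf = FakeBuf([1.0, 2.0, 3.0, 4.0, 99.0, 99.0])
a = array.array('d', buf)            # copies exactly n entries, no more
print(a.tolist())  # [1.0, 2.0, 3.0, 4.0]
```

Note this makes a copy, unlike the buffer-sharing approach discussed later in the thread.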
For the future, I’ll add a member function to the buffer object to fix its size. If it’s really urgent, ACLiC can probably be used to compile that in from a script, but it’ll be fugly.
I'm reviving this thread in case some people are interested in this topic.
I just discovered an easy way of doing C pointer -> array conversion in Python.
Assume we have a function returning a pointer, analogous to giveArray above:
float * f(int n);
Then on the Python side:
p=f(10)
a = numpy.ndarray( (10,), dtype=numpy.float32, buffer=p )
'a' will use the pointer created on the C++ side (i.e. modifications on the C++ side will affect a in Python).
I find this very useful since numpy arrays are so easy to work with.
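The memory-sharing behaviour described above can be checked without ROOT; here is a minimal sketch in which a ctypes float array stands in for the pointer produced on the C++ side (no PyROOT involved):

```python
import ctypes
import numpy as np

# Stand-in for the buffer returned from the C++ side: a raw C float array.
n = 10
cbuf = (ctypes.c_float * n)(*range(n))

# Wrap the same memory in a numpy array: buffer= means no copy is made.
a = np.ndarray((n,), dtype=np.float32, buffer=cbuf)

cbuf[0] = 42.0      # the "C++ side" writes into the buffer...
print(a[0])         # ...and the change is visible through numpy: 42.0
```

Because the memory is shared, the lifetime of the underlying C++ allocation must outlive the numpy array, or reads become undefined.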
What I still need now is to be able to do the same with std::vector… Any ideas?