Unfortunately, you cannot simply take the code from the experimental ROOT.DeclareCppCallable in 6.22 and apply it to ROOT.Numba.Declare. We decided to drop the feature that lets the C++ callable call back into the Python interpreter (as happens with model.predict), because such calls are blocked by Python's global interpreter lock (GIL) and result in terrible runtime performance.
However, your second snippet can work with Numba; see the following runnable example. Note that this works only with static Python objects, such as tuples or arrays, which Numba captures as compile-time constants.
import ROOT
data = (1, 2, 3)
@ROOT.Numba.Declare(['float', 'int'], 'float')
def model(x, i):
    return x * data[i]
ROOT.gInterpreter.ProcessLine('cout << "apply data = " << Numba::model(2, 1) << endl;')
apply data = 4
Another option is to push the external data into the C++ side by using a function fully implemented in C++:
import ROOT
ROOT.gInterpreter.Declare('''
static const std::vector<float> data = {1, 2, 3};
float func(float x, int i) {
    return x * data[i];
}
''')
ROOT.gInterpreter.ProcessLine('cout << "apply data = " << func(2, 1) << endl;')
Thank you for the clarification; I completely agree with the reasoning. However, it might still be beneficial to expose an interface (e.g. ROOT.Numba.DeclareSlow) that is GIL-blocking and lets the user decide whether they want to pay the price.