There is a clash on ClassDef with what ROOT_GENERATE_DICTIONARY uses.
Is there a workaround for this?
This is the error I see:
/opt/homebrew/lib/python3.12/site-packages/torch/include/torch/csrc/jit/frontend/tree_views.h:460:53: error: call to non-static member function without an object argument
explicit ClassDef(const TreeRef& tree) : TreeView(tree) {
^~~~
/opt/homebrew/lib/python3.12/site-packages/torch/include/torch/csrc/jit/frontend/tree_views.h:463:35: error: too few arguments provided to function-like macro invocation
explicit ClassDef(TreeRef&& tree) : TreeView(std::move(tree)) {
Opening the file, it seems the issue is that both ROOT and torch::jit use the ClassDef keyword.
Thank you. I did something which I am not sure is safe: I opened the locally installed
tree_views.h from the torch include folders and replaced all ClassDef → ClassDefTorch, and my local code now compiles…
I have no idea if what I did is legal within the torch JIT; I will report back if I see some misbehaviour from making this change on a header file installed on the system.
Hi @Danilo, I have not tested it inside actual code yet, but I think a simplification of the code is the following.
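Something along these lines should do it, hiding ROOT's ClassDef macro only while the torch headers are parsed (an untested sketch; the include names are the standard libtorch ones):
// temporarily undefine ROOT's ClassDef macro around the torch includes
#pragma push_macro("ClassDef")
#undef ClassDef
#include <torch/torch.h>
#include <torch/script.h>
// restore ROOT's ClassDef so dictionary generation keeps working
#pragma pop_macro("ClassDef")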
I can confirm your trick on ClassDef works and the code compiles successfully. I have not tested the functionality yet, but it should work. Any suggestion to make the computation of the variables faster is welcome (if any expert on torch in C++ reads this).
But, there is good documentation in PyTorch on how to “trace” a model and dump it into a script loadable from C++, which I used upfront and which looks like this:
import numpy as np
import torch

# build an example input with the shape/dtype the model expects
example_input = torch.from_numpy(
    np.array([[ .... example input list ]],
             dtype=np.float32)).float()

# run the model once to sanity-check the output
pred_val = model(example_input).squeeze().detach().numpy()
print(pred_val)

# trace and dump
traced_model = torch.jit.trace(model, example_input)
# print("save")
torch.jit.save(traced_model, "mymodel.pt")
Then I coded a class doing something like this:
#pragma once
#include <map>
#include <string>
#include <vector>
#include <ROOT/RDataFrame.hxx>
#include "ROOT/RVec.hxx"
#pragma push_macro("ClassDef")
#undef ClassDef
#include <torch/torch.h>
#include <torch/script.h>
#pragma pop_macro("ClassDef")
class MyModelAttacher {
public:
   MyModelAttacher() = default;
   MyModelAttacher(const std::string &path_model)
   {
      // Load the traced model; turn off gradients while doing so
      at::AutoGradMode guard(false);
      model = torch::jit::load(path_model);
   }
   double operator()( const .... inputs columns )
   {
      // no gradients needed for inference
      torch::NoGradGuard no_grad;
      // collect the input columns in the order the model was trained with
      std::vector<float> input_data = { ... from columns };
      torch::Tensor input_tensor = torch::from_blob(
         input_data.data(),
         {1, nVariablesModel}, // has to match the model input size
         torch::kFloat32);
      auto output = model.forward({input_tensor}).toTensor();
      double result = output.squeeze().item<double>();
      return result;
   }
private:
   torch::jit::Module model; //! // ROOT ignores this member
};
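As a rough usage sketch (file, tree and column names below are placeholders, and the operator() arguments have to match the types of the columns you pass), the idea is to plug it into a Define:
// hypothetical example: "Events", "input.root", "var1"/"var2" are placeholders
ROOT::RDataFrame df("Events", "input.root");
MyModelAttacher attacher("mymodel.pt");
// the functor is called with the listed columns, in this order
auto df2 = df.Define("mva_score", attacher, {"var1", "var2"});
df2.Snapshot("Events", "output_with_score.root");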
I needed some dirty hacks in the CMakeLists.txt of my project on macOS to find the torch library, but basically I followed this link.
I don’t know if that makes sense at all, but having Python-trained MVA models “Define”-able with RDataFrame operations sounds attractive (at least to me), and I don’t have to do some nasty gymnastics in Python with numba declarations etc…
Happy to see the pragmas worked for you.
I think high-performance inference from C++ is a common problem in HEP, and not only limited to RDF. I think the use case of running some network on input read from existing or defined columns in an analysis is more than legitimate and quite interesting - let us know how this goes for you.
You might need to save your PyTorch model in ONNX format, which is supported in PyTorch. SOFIE also supports native PyTorch input models, but the support is limited and it is recommended to use an ONNX model as input.
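Roughly, once you have the ONNX file (exported from PyTorch, e.g. with torch.onnx.export), the SOFIE side follows the ONNX parsing tutorial; this is only a sketch and details may vary between ROOT versions:
// sketch of the SOFIE ONNX parsing workflow
#include "TMVA/RModelParser_ONNX.hxx"

void generate_inference_code()
{
   TMVA::Experimental::SOFIE::RModelParser_ONNX parser;
   // "mymodel.onnx" is assumed to be the file exported from PyTorch
   TMVA::Experimental::SOFIE::RModel model = parser.Parse("mymodel.onnx");
   model.Generate();                      // generate the inference code
   model.OutputGenerated("mymodel.hxx");  // dump it into a header to include
}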