Building ROOT-TMVA with CUDA Error

Hi everyone, I was trying to build ROOT with the CUDA option. I used the tmva-dnn-dev-test branch and ran `cmake …/root -Dtmva=ON -DCuda=ON -Dtesting=ON && make -j4`.

I got the following errors:

[ 51%] Built target LLVMSelectionDAG
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/Propagation.cu(143): error: declaration is incompatible with “void TMVA::DNN::TCuda::Im2col(TMVA::DNN::TCudaMatrix &, const TMVA::DNN::TCudaMatrix &, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t)”
/media/ravi/Studies/GSOCProject/Test/build/include/TMVA/DNN/Architectures/Cuda.h(287): here
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/Propagation.cu(254): error: declaration is incompatible with “void TMVA::DNN::TCuda::Downsample(TMVA::DNN::TCudaMatrix &, TMVA::DNN::TCudaMatrix &, const TMVA::DNN::TCudaMatrix &, int, int, int, int, int, int)”
/media/ravi/Studies/GSOCProject/Test/build/include/TMVA/DNN/Architectures/Cuda.h(357): here
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/Propagation.cu(267): error: expected a declaration
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/Propagation.cu(333): warning: parsing restarts here after previous syntax error
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(29): error: “auto” function requires a trailing return type
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(29): error: declaration is incompatible with “TMVA::DNN::TCuda::Matrix_t &TMVA::DNN::TCuda::RecurrentLayerBackward(TMVA::DNN::TCudaMatrix &, TMVA::DNN::TCudaMatrix &, TMVA::DNN::TCudaMatrix &, TMVA::DNN::TCudaMatrix &, TMVA::DNN::TCudaMatrix &, const TMVA::DNN::TCudaMatrix &, const TMVA::DNN::TCudaMatrix &, const TMVA::DNN::TCudaMatrix &, const TMVA::DNN::TCudaMatrix &, TMVA::DNN::TCudaMatrix &)”
/media/ravi/Studies/GSOCProject/Test/build/include/TMVA/DNN/Architectures/Cuda.h(92): here
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(39): error: expected a declaration
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(61): warning: parsing restarts here after previous syntax error
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(62): error: identifier “AFloat” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(62): error: identifier “state_weight_gradients” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(62): error: identifier “df” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(62): error: identifier “state” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(62): error: member function “TMVA::DNN::TCuda::TransposeMultiply [with AFloat=]” may not be redeclared outside its class
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(63): error: identifier “AFloat” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(63): error: identifier “state_weight_gradients” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(63): error: identifier “tmp” is undefined
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(63): error: expected a type specifier
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(63): error: member function “TMVA::DNN::TCuda::ScaleAdd [with AFloat=]” may not be redeclared outside its class
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(67): error: expected a declaration
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(70): warning: parsing restarts here after previous syntax error
/media/ravi/Studies/GSOCProject/Test/root/tmva/tmva/src/DNN/Architectures/Cuda/RecurrentPropagation.cu(73): error: expected a declaration
At end of source: warning: parsing restarts here after previous syntax error
18 errors detected in the compilation of “/tmp/tmpxft_000004dc_00000000-6_Cuda.cpp1.ii”.
CMake Error at dnn_cuda_generated_Cuda.cu.o.cmake:268 (message):
Error generating file
/media/ravi/Studies/GSOCProject/Test/build/tmva/tmva/CMakeFiles/dnn_cuda.dir/src/DNN/Architectures/./dnn_cuda_generated_Cuda.cu.o
tmva/tmva/CMakeFiles/dnn_cuda.dir/build.make:63: recipe for target ‘tmva/tmva/CMakeFiles/dnn_cuda.dir/src/DNN/Architectures/dnn_cuda_generated_Cuda.cu.o’ failed
make[2]: *** [tmva/tmva/CMakeFiles/dnn_cuda.dir/src/DNN/Architectures/dnn_cuda_generated_Cuda.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs…
[ 51%] Built target LLVMAsmPrinter
[ 52%] Built target LLVMBitReader
[ 52%] Built target LLVMBitWriter
[ 52%] Built target LLVMGlobalISel
[ 52%] Built target LLVMInstrumentation
[ 52%] Built target LLVMInstCombine
[ 53%] Built target LLVMTransformUtils
CMakeFiles/Makefile2:31369: recipe for target ‘tmva/tmva/CMakeFiles/dnn_cuda.dir/all’ failed
make[1]: *** [tmva/tmva/CMakeFiles/dnn_cuda.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs…
[ 54%] Built target LLVMipo
[ 55%] Built target LLVMScalarOpts
[ 57%] Built target LLVMCodeGen
Makefile:160: recipe for target ‘all’ failed
make: *** [all] Error 2

I would like to know how to solve these errors. Any help would be appreciated!

Thanks,
Ravi Kiran S

Hi,

Which development branch is this? Since it's the development of a new feature, errors are to be expected :slight_smile:

It seems there is a syntax error in "Propagation.cu" at line 267, and another in "RecurrentPropagation.cu" at line 29. I would start by checking those places and continue debugging from there.

Cheers,
Kim