I found this talk (slide 14) from five years ago, and I am particularly interested in using a CNN-LSTM machine-learning model on the GPU. It seems this had not been implemented at that time. Has there been any recent development of LSTM models for TMVA?
That’s a question for @moneta, I suppose.
If needed, I can certainly develop some code, provided it is not too difficult a task and documentation is available.
I have also found that it is possible to import models developed with Python ML packages (Keras, etc.). How are the CPU/GPU features interfaced?
Is this restricted to PyROOT, or can it also be used from C++ ROOT? (I only use C++.)
As I said, @moneta will most probably guide you
CNN and recurrent layers, including LSTM and GRU, are all supported on GPU in the native TMVA deep-learning package. You can see examples of how to use them in the tutorials.
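For reference, here is a minimal sketch of booking an LSTM with the native deep-learning method, loosely modeled on the `TMVA_RNN_Classification` tutorial. The exact layout-string syntax, layer parameters, and training options are illustrative and may differ between ROOT versions, so please check the tutorial shipped with your release.

```cpp
// Sketch: booking an LSTM model with the native TMVA deep-learning method
// (TMVA::Types::kDL). Option strings here are illustrative, not verbatim.
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"
#include "TString.h"

void book_lstm(TMVA::Factory &factory, TMVA::DataLoader *loader,
               int ninputs, int ntimesteps)
{
   // Input layout: number of time steps x variables per time step
   TString inputLayout =
      TString::Format("InputLayout=%d|%d", ntimesteps, ninputs);
   // One LSTM layer followed by a dense classification head
   TString layout = TString::Format(
      "Layout=LSTM|10|%d|%d|0|1,RESHAPE|FLAT,DENSE|64|TANH,LINEAR",
      ninputs, ntimesteps);
   TString training = "TrainingStrategy=LearningRate=1e-3,"
                      "ConvergenceSteps=10,BatchSize=100,MaxEpochs=20";
   // Architecture=GPU selects the CUDA implementation when available
   TString options = "!H:V:ErrorStrategy=CROSSENTROPY:" + inputLayout + ":" +
                     layout + ":" + training + ":Architecture=GPU";
   factory.BookMethod(loader, TMVA::Types::kDL, "TMVA_LSTM", options);
}
```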
The GPU is automatically used if TMVA has been built with GPU support.
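If you are building ROOT yourself, the relevant CMake options look roughly like this (assuming CUDA and cuDNN are installed; see the ROOT build documentation for the options supported by your version):

```shell
# Configure ROOT so the TMVA deep-learning method can run on the GPU
cmake -Dtmva-gpu=ON -Dcuda=ON -Dcudnn=ON /path/to/root-source
cmake --build . -- -j8
```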
The tutorials also show how you can train models with Keras or PyTorch through the TMVA interface.
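As a sketch, booking a pre-built Keras model from C++ ROOT (not only PyROOT) goes through the PyKeras method; the model file name and training options below are placeholders, so adapt them from the TMVA Keras tutorials:

```cpp
// Sketch: training a Keras model through the TMVA PyKeras interface.
// The model is defined and saved beforehand in Python, e.g. model.save("model.h5").
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"
#include "TMVA/PyMethodBase.h"

void book_keras(TMVA::Factory &factory, TMVA::DataLoader *loader)
{
   // Initialize the embedded Python interpreter once before any PyMVA method
   TMVA::PyMethodBase::PyInitialize();
   factory.BookMethod(loader, TMVA::Types::kPyKeras, "PyKeras",
                      "H:!V:VarTransform=N:FilenameModel=model.h5:"
                      "NumEpochs=20:BatchSize=32");
}
```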
Sorry for my late reply. Thank you for your help!
Side question: I have recently been looking into convolutional transformers in my research (transformers seem to be a natural evolution of LSTM models). Is this also implemented in TMVA? Does it work on GPU?
Unfortunately, we have not implemented transformers in native TMVA so far. We might look at implementing them for inference only in SOFIE, our fast system for evaluating deep-learning models.
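For context, the SOFIE workflow takes a model trained elsewhere and generates standalone C++ inference code from it. A minimal sketch, using the ONNX parser as in recent ROOT versions (class and file names may evolve; the model file name is a placeholder):

```cpp
// Sketch: generating C++ inference code for a trained model with SOFIE.
#include "TMVA/RModel.hxx"
#include "TMVA/RModelParser_ONNX.hxx"

void generate_inference()
{
   using namespace TMVA::Experimental;
   SOFIE::RModelParser_ONNX parser;
   // Parse a model exported to ONNX from Keras, PyTorch, etc.
   SOFIE::RModel model = parser.Parse("model.onnx");
   model.Generate();                    // emit the optimized inference code
   model.OutputGenerated("model.hxx");  // header providing a Session class
}
```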
I will definitely be working on transformers with a team of physicists and AI specialists in the coming months.
Is there any possibility of implementing this model natively on both CPU and GPU, or will SOFIE inference be the new standard?
Is there any documentation, or are there guidelines or suggestions you could provide, to help us estimate the feasibility and see whether we can do that?