Training TMultiLayerPerceptron in multi-core mode

I am training a TMultiLayerPerceptron neural network with 11 neurons in the input and output layers (taking continuous values in the interval [0,1]) and three hidden layers of 50 neurons each. It takes more than 12 hours to go through 5000 cycles using one core.
Is there a way I can parallelize the training to use more than one core?
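For context, a setup along these lines typically looks as follows (a minimal sketch; the tree and branch names are hypothetical, and the layout string follows TMultiLayerPerceptron's "inputs:hidden:...:outputs" convention):

```cpp
// Sketch of the network described above: 11 inputs, three hidden
// layers of 50 neurons, 11 outputs. Branch names are placeholders.
#include "TTree.h"
#include "TMultiLayerPerceptron.h"

void train_mlp(TTree *data) {
   TMultiLayerPerceptron mlp(
      "in1,in2,in3,in4,in5,in6,in7,in8,in9,in10,in11:"   // 11 inputs
      "50:50:50:"                                        // 3 hidden layers
      "out1,out2,out3,out4,out5,out6,out7,out8,out9,out10,out11", // 11 outputs
      data,
      "Entry$%2==0",   // training selection (even entries)
      "Entry$%2==1");  // test selection (odd entries)

   // This loop runs on a single core, which is the bottleneck reported above.
   mlp.Train(5000, "text,update=100");
}
```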

ROOT Version: 6.26/06
Platform: macOS Big Sur
Compiler: Clang 13.0.0

I guess @moneta may help you.

I don’t think this old class supports multi-core training. For neural networks, we have new classes in TMVA for deep learning, which can be trained using stochastic optimisers like ADAM and which work on both GPU and multi-core.
See the tutorial TMVA_Higgs_Classification.C.
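For orientation, booking the new deep-learning method looks roughly like this (a hedged sketch; the option values are illustrative, not a recommended configuration — see the tutorial for a complete working example):

```cpp
// Sketch of booking TMVA's deep-learning method (TMVA::Types::kDL).
// All hyperparameter values below are placeholders for illustration.
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

void book_dnn(TMVA::Factory &factory, TMVA::DataLoader *loader) {
   TString layout   = "Layout=RELU|50,RELU|50,RELU|50,LINEAR";
   TString training = "TrainingStrategy=LearningRate=1e-3,Optimizer=ADAM,"
                      "MaxEpochs=100,BatchSize=128";

   // Architecture=CPU uses the multi-threaded CPU backend;
   // Architecture=GPU selects the GPU backend on a CUDA-enabled build.
   TString options  = "!H:!V:ErrorStrategy=CROSSENTROPY:"
                      "WeightInitialization=XAVIER:" +
                      layout + ":" + training + ":Architecture=CPU";

   factory.BookMethod(loader, TMVA::Types::kDL, "DNN_CPU", options);
}
```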




Thank you Lorenzo! This week I began working with TMVA, starting with the tutorial TMVA_Classification.C using DNN_CPU and DNN_GPU; however, the output says:
: Start of deep neural network training on CPU using MT, nthreads = 1
I installed the Intel TBB library as indicated on page 115 of the manual and recompiled ROOT, but I get the same result. Is this something specific to the example, or should I see more than one thread being used with it?

Hello Jeremiads,
To enable MT, if you have a ROOT build supporting it, you need to call ROOT::EnableImplicitMT(0) to use all the available threads of the machine; otherwise, specify the number of threads instead of 0.
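Concretely, a minimal sketch (assuming ROOT's standard implicit-MT interface) is:

```cpp
// Enable ROOT's implicit multi-threading before creating the TMVA
// factory. Passing 0 means "use all available cores".
#include <cstdio>
#include "TROOT.h"

void enable_mt() {
   ROOT::EnableImplicitMT(0);              // or e.g. EnableImplicitMT(4)

   // Sanity check: report how many threads the pool actually has.
   std::printf("threads in use: %u\n", ROOT::GetThreadPoolSize());
}
```

The call must come before the TMVA factory is constructed, since the thread pool is set up when training starts.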



Amazing! It works!
