I am training a TMultiLayerPerceptron neural network with 11 neurons in the input and output layers (continuous values in the interval [0,1]) and three hidden layers of 50 neurons each. It takes more than 12 hours to go through 5000 training cycles on a single core.
Is there a way I can parallelize the training to use more than one core?
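For context, here is a minimal sketch of my setup. It is illustrative only: the file name, tree name, and branch names (data.root, sim, in0..in10, out0..out10) are placeholders, not my actual code.

```cpp
// Minimal sketch of the setup described above. The file, tree, and
// branch names are hypothetical placeholders.
#include "TFile.h"
#include "TTree.h"
#include "TMultiLayerPerceptron.h"

void train_mlp()
{
   TFile f("data.root");
   TTree *tree = (TTree *)f.Get("sim");

   // Layout string: 11 inputs : three hidden layers of 50 : 11 outputs
   TMultiLayerPerceptron mlp(
      "in0,in1,in2,in3,in4,in5,in6,in7,in8,in9,in10:50:50:50:"
      "out0,out1,out2,out3,out4,out5,out6,out7,out8,out9,out10",
      tree);

   mlp.Train(5000, "text,update=100"); // 5000 cycles, progress every 100
}
```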
ROOT Version: 6.26/06 Platform: macOS Big Sur Compiler: Clang 13.0.0
Hi,
I don’t think this old class supports multi-core training. For neural networks we have new classes in TMVA for deep learning, which can be trained with stochastic optimisers such as ADAM and run on both GPU and multi-core CPU.
See the tutorial TMVA_Higgs_Classification.C.
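As a rough illustration of this suggestion (not the tutorial code itself), booking the deep-learning method with ADAM might look like the sketch below. The hidden layers mirror the original question; all other option values are illustrative, and the TMVA::Factory / TMVA::DataLoader setup is assumed to already exist.

```cpp
// Hedged sketch of booking the newer TMVA deep-learning method with the
// ADAM optimiser, following the pattern of TMVA_Higgs_Classification.C.
// All hyperparameters here are illustrative, not a recommendation.
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

void book_dnn(TMVA::Factory *factory, TMVA::DataLoader *loader)
{
   TString layout   = "Layout=DENSE|50|RELU,DENSE|50|RELU,DENSE|50|RELU,"
                      "DENSE|1|LINEAR"; // classification; adapt for 11 outputs
   TString training = "TrainingStrategy=LearningRate=1e-3,Momentum=0.9,"
                      "ConvergenceSteps=10,BatchSize=100,MaxEpochs=30,"
                      "Optimizer=ADAM";
   TString options  = "!H:V:ErrorStrategy=CROSSENTROPY:"
                      "WeightInitialization=XAVIER:" +
                      layout + ":" + training +
                      ":Architecture=CPU"; // or Architecture=GPU

   factory->BookMethod(loader, TMVA::Types::kDL, "DNN_CPU", options);
}
```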
Thank you Lorenzo! This week I began working with TMVA, starting with the tutorial TMVA_Classification.C with DNN_CPU and DNN_GPU. However, the training output reports: `Start of deep neural network training on CPU using MT, nthreads = 1`.
I installed the Intel TBB library as indicated on page 115 of the manual and recompiled ROOT, but I get the same result. Is this something specific to the example, or should I see more than one thread being used here?
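For reference, this is the kind of call I understand should turn multi-threading on, as a sketch assuming ROOT was rebuilt with IMT support (-Dimt=ON, which needs TBB; `root-config --features` should list imt):

```cpp
// Sketch: enabling implicit multi-threading before the TMVA training.
// Assumes ROOT was configured with IMT support; if the rebuild did not
// pick up TBB, EnableImplicitMT() has no effect and nthreads stays at 1.
#include "TROOT.h"
#include <iostream>

void check_mt()
{
   ROOT::EnableImplicitMT(); // no argument: use all available cores

   std::cout << "IMT enabled: "   << ROOT::IsImplicitMTEnabled()
             << ", thread pool: " << ROOT::GetThreadPoolSize() << std::endl;
}
```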
Cheers,
Jeremias