
Online learning in TMVA's MLP method

Hi, I’m using ROOT and TMVA for my final project and I’m having some trouble understanding how the MLP method learns. The TMVA user guide says that gradient descent is used to minimise the loss function and that online learning is used, but it doesn’t go into more detail than that. Does the method use stochastic gradient descent, or does it just go through every event and adjust the weights after processing each one? I’m new to machine learning in general, so feel free to correct any misconceptions you think I might have. Thanks in advance!


ROOT Version: Not Provided
Platform: Not Provided
Compiler: Not Provided


In the MLP method I think a standard, non-stochastic gradient-descent algorithm is used by default; you can find many accurate descriptions of it online. You also have the option to use batch updates.
I would recommend using the new deep-learning module instead, which provides different and more performant optimisers such as ADAM and is better suited to large and deep architectures.
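For reference, booking the deep-learning method looks roughly like the sketch below. The option names follow the TMVA Users Guide, but the specific layout and training-strategy values here are illustrative assumptions, not a tested or tuned configuration:

```cpp
// Sketch only: the Layout and TrainingStrategy values are placeholders,
// not a recommended configuration -- check the TMVA Users Guide for
// the options supported by your ROOT version.
factory->BookMethod(dataloader, TMVA::Types::kDL, "DL",
    "!H:!V:ErrorStrategy=CROSSENTROPY:VarTransform=N:"
    "Layout=DENSE|64|RELU,DENSE|64|RELU,DENSE|1|LINEAR:"
    "TrainingStrategy=LearningRate=1e-3,Optimizer=ADAM,"
    "BatchSize=128,MaxEpochs=50");
```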

Best regards