Method kDL in TMVA

I have been playing with the new kDL method that Lorenzo @moneta suggested some time ago. So far, I have a few questions/comments:

a) Is DropConfig no longer a recommended way to improve the DL training? The setting is read into a variable called settings.dropoutProbabilities, but as far as I can see it is only used in TMVA::MethodDNN::TrainCpu() / TrainGpu() in MethodDNN.cxx and never in MethodDL.cxx. Is this intentional? I tried to use it and saw no effect (see the booking sketch below).
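
For reference, this is roughly how I pass the setting — a minimal sketch, assuming a `factory` and `dataloader` set up as in the TMVA tutorials; all option values are illustrative:

```cpp
// DropConfig gives one dropout probability per layer inside each
// TrainingStrategy phase ("+"-separated, as in the kDNN examples).
TString training("LearningRate=1e-2,Momentum=0.9,ConvergenceSteps=20,"
                 "BatchSize=256,Regularization=L2,WeightDecay=1e-4,"
                 "DropConfig=0.0+0.5+0.5");
TString options("!H:!V:ErrorStrategy=CROSSENTROPY:WeightInitialization=XAVIER:"
                "Layout=TANH|128,TANH|64,LINEAR:Architecture=CPU:"
                "TrainingStrategy=" + training);
factory->BookMethod(dataloader, TMVA::Types::kDL, "DL_Example", options);
// With kDL this DropConfig appears to have no effect, while the same
// string is honored by kDNN.
```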

b) Parsing of the network layout should be more robust. When specifying a “relu” layer, you silently get a “TANH” layer; only when spelling RELU in all caps do you get RELU. Could the user at least be warned when something is misspelled? The complication is that, if the token is not one of the predefined activation functions, ROOT also searches the string for “N” or “n” to allow a formula like “2*N” for the layer width. (See the parsing sketch below.)
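
Something along these lines would already help — a hypothetical helper, not the actual MethodDL code, just to sketch case-insensitive matching with an explicit warning before any fallback:

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

std::string ParseActivation(const std::string &token)
{
   // Compare case-insensitively against the known activation names first,
   // so "relu", "Relu" and "RELU" all resolve to the same function.
   std::string upper(token);
   std::transform(upper.begin(), upper.end(), upper.begin(), ::toupper);
   static const std::vector<std::string> known{"RELU",     "TANH",   "SIGMOID",
                                               "SOFTSIGN", "LINEAR", "GAUSS"};
   if (std::find(known.begin(), known.end(), upper) != known.end())
      return upper;
   // Only now try the "N"/"n" formula interpretation (not shown here),
   // and tell the user about the fallback instead of failing silently.
   std::cerr << "Warning: unknown activation '" << token
             << "', falling back to TANH\n";
   return "TANH";
}
```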

c) When configuring the size of the network, you have to be careful: the configuration is only validated after the trees have been read in. If I remember correctly, a mismatch between BatchLayout and the BatchSize of the second training phase may only surface after the first training phase has completed. It would be a big usability improvement if all checks ran as early as possible. (An illustration follows below.)
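
To illustrate the pitfall — option names as I understand them from MethodDL, values made up: BatchLayout fixes the batch dimension globally, but each TrainingStrategy phase carries its own BatchSize, so an inconsistent second phase is only caught long after booking:

```cpp
// Global batch dimension is 256 ...
TString batchLayout("BatchLayout=1|256|14");
TString phase1("LearningRate=1e-2,BatchSize=256,ConvergenceSteps=20");
// ... but phase 2 disagrees; the error may only appear once phase 1 is done.
TString phase2("LearningRate=1e-3,BatchSize=128,ConvergenceSteps=20");
TString options("!H:!V:InputLayout=1|1|14:" + batchLayout +
                ":TrainingStrategy=" + phase1 + "|" + phase2);
```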

d) The TMVAUsersGuide doesn’t mention kDL at all, so where can I find the official documentation?

e) There is also good news: I am quite happy with the runtime performance on a graphics card (using a GeForce GTX 1060)! :+1:

Thank you for the valuable and extensive feedback!

Re a and d: I will defer these questions to @moneta :slight_smile:
Re b and c: I agree fully with these points. Could you create JIRA tickets for these?

Re e: Glad to hear it! Thanks again!

Cheers,
Kim

Hi

Thank you for the very useful feedback. We will address it. The dropout issue is clearly a bug, and we will fix it as soon as possible.

Best regards

Lorenzo