Interpretation of "TMultiLayerPerceptron::Evaluate" ROOT 5.34

Dear Rooters,

I’m trying to understand how the output of “TMultiLayerPerceptron::Evaluate” should be interpreted.
The documentation says “Returns the Neural Net for a given set of input parameters #parameters must equal #input neurons.” I understand the latter. However, “return the Neural Net” sounds vague to me.

I assumed the output should be the output of the last neuron. By default, sigmoid functions are used, so I expect the output to lie between 0 and 1. In my case, however, the output ranges between -1 and 1.
So either 1) I have made a mistake somewhere, or 2) my understanding of “TMultiLayerPerceptron::Evaluate” is incorrect.


PS. I see that it is also possible to use other activation functions besides the sigmoid (tanh, linear, etc.). When I tried them, however, training failed with “Error in TMultiLayerPerceptron::TMultiLayerPerceptron::Train(): Line search fail”.

Hi Johannes,

I would recommend taking a look at the neural network capabilities of TMVA instead. TMVA is integrated into ROOT these days, receives more dedicated support, and provides a broader collection of machine learning methods.

Anyway, to your question:

Double_t TMultiLayerPerceptron::Evaluate(Int_t index, Double_t *params) const

This function returns the output of output neuron number index, given the input specified by params. So if your final layer has 4 output neurons, index must be between 0 and 3.
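A minimal usage sketch. This assumes a trained network with 4 output neurons; the variable name mlp, the structure, and the input values are all placeholders, not taken from your actual setup:

```cpp
// Sketch only: mlp is assumed to be a trained TMultiLayerPerceptron*
// whose network has 3 input neurons and 4 output neurons.
Double_t params[3] = {1.0, 2.0, 3.0};  // one entry per input neuron

// Evaluate takes the index of the output neuron you want to read;
// with 4 output neurons, the valid indices are 0..3.
for (Int_t i = 0; i < 4; ++i) {
    Double_t out = mlp->Evaluate(i, params);
    std::cout << "output neuron " << i << ": " << out << std::endl;
}
```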

As to why the output is between -1 and 1 instead of the expected 0 to 1: I cannot say without having a look at your actual setup.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.