TMultiLayerPerceptron Confusion

I’m new to neural networks and, having examined the tutorial example, I’m rather confused by the definition of layers in the constructor. The output layer is defined as ‘acolin:5:3:type’ and I don’t know why: how does the neural net know not to look for ‘type’ when it is not being trained? Is the output then ‘acolin’? And why define ‘5:3’ hidden neurons in the output layer?

I’m sure I’m missing something simple, but I just haven’t been able to get my head round it from the documentation.

Thanks in advance


Hi Chris,
The whole network layout is defined by “acolin:5:3:type”. “acolin:5:3:type” means

  • one input node. Data (training and testing) is expected to be accessible as “acolin” in your tree.
  • one layer with 5 hidden nodes, another layer with 3 hidden nodes,
  • and finally the output layer, with only one node. For training and testing the target output values are accessible as “type” in your tree.
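A minimal ROOT macro sketch of that layout, assuming a TTree with branches named “acolin” and “type” as in the tutorial (the macro and tree names here are illustrative, not from the tutorial itself):

```cpp
// Sketch: build and train a network from the "acolin:5:3:type" layout.
// Requires ROOT; assumes `tree` holds branches "acolin" (input) and
// "type" (training target).
#include "TTree.h"
#include "TMultiLayerPerceptron.h"

void train_mlp(TTree *tree)
{
   // "acolin" -> 1 input node, 5 then 3 hidden nodes, "type" -> 1 output node
   TMultiLayerPerceptron mlp("acolin:5:3:type", tree);
   mlp.Train(100, "text,update=10"); // 100 epochs, progress every 10
}
```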

Once you’ve trained the network you can, for example, save the function (that’s all a neural network really is) to a C++ file — see its Export() method — and call it with any values you want, i.e. the names of the input/output nodes are irrelevant when using the network.
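For instance, assuming an already-trained TMultiLayerPerceptron object called `mlp` (the filename below is illustrative):

```cpp
// Export the trained network as a standalone C++ class; this writes
// mlpfunc.h / mlpfunc.cxx, usable without the original tree.
mlp.Export("mlpfunc", "C++");

// Or evaluate the network directly on arbitrary input values:
Double_t params[1] = {0.5};             // value for the single input node
Double_t out = mlp.Evaluate(0, params); // index 0 = first output neuron
```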

See also TMultiLayerPerceptron::Draw() which draws a graphical representation of the network.

Thanks for the reply; it makes much more sense now.