Question about TMultiLayerPerceptron

If I use several variables as input, can I get the weights for these variables after training? I mean, some number to indicate the importance of each variable in the fit. That way I could adjust the variable list, e.g. removing those variables that are not so important to the fit.

Thanks a lot!


For the moment there is no dedicated tool for the analysis of the Neural Network after training.

As you mentioned, looking at the weights after training can give you some hints, but it can be misleading due to the non-linear behaviour of the network. A better solution is to look at the derivatives d(NN)/d(in_i) for each input neuron i. I’ll prepare some tool for that, but I don’t have time in the very short term.

To see which are the main variables, you can:

  • export the network (to C++ or Python) and look at the generated code, which incorporates the weights,
  • or dump the weights to the screen with TMultiLayerPerceptron::DumpWeights("-").

The latter solution gives you a list of all the weights, ordered like this:
First all the neuron weights (don’t worry about those),
then all the synapse weights.
If I label each synapse by its input-neuron index followed by its index in the second layer, the order is:
11, 21, 31, …, i1, 12, 22, …, i2, …, ij, and then come the weights for the other layers.
Here I assume there are i input neurons and j neurons in the second layer.



I’ve started the development of an analysis class for the neural net. The ROOT CVS now contains a new TMLPAnalyzer that will give you some first hints.
Among other things, you can plot the influence of each input variable on the NN output. There is also a new TMultiLayerPerceptron::Draw() that shows the network structure, with the weight of each synapse depicted by the width of the line.

The mlpHiggs.C tutorial has been modified accordingly to illustrate this new class.


P.S.: according to Rene, there are some problems with the neural net on some platforms, but it will certainly run on Linux (that is where I tested it).