I have a couple of questions about the use of TMultiLayerPerceptron within ROOT.
Firstly, I do not understand the output it gives in the form of a plot labelled "Differences (Impact of variables on ANN)", especially since the plot has no labels on its axes. Can anybody tell me how to interpret it?
Secondly, I have been trying to use the .cxx file produced by training the network, but I get very strange output values. I give the function the same inputs I used to train the network, yet the outputs are very large numbers (around 1000) rather than in the 0 to 1 range I would expect. Have I misunderstood how to use the exported code?
As stated in the documentation, it draws the distribution (on the test sample) of the impact on the network output of a small variation of each input. Units are arbitrary.
So it is a tool for improving the network by avoiding both large dependencies (which imply systematics) and small dependencies (i.e. useless input variables). A good network should have all the distributions in the same range. I must admit that this plot is difficult to interpret.
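For context, that plot is the one produced by TMLPAnalyzer. A minimal ROOT-macro sketch (this requires ROOT, and assumes `mlp` is a pointer to an already trained TMultiLayerPerceptron, as in the mlpHiggs tutorial):

```cpp
// ROOT macro sketch -- assumes a trained TMultiLayerPerceptron *mlp.
TMLPAnalyzer ana(mlp);
ana.GatherInformations();  // collect per-event information on the test sample
ana.CheckNetwork();        // print a short summary of the network
ana.DrawDInputs();         // draws the "Impact of variables on ANN" plot
```

Each entry in the resulting distributions is the change in the network output caused by a small variation of one input variable for one test event, which is why the scale is arbitrary.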
Could you send some code showing how you use the exported C++ class? This should not happen, since the upper limit on the network output is set by the final (output-layer) weights.