I use TMultiLayerPerceptron to do particle identification between protons and pions. When training, I give zero to pions and one to protons. But the training result seems to be shifted into the range -0.2 to 0.8; is that a problem?
I have attached my script and ROOT file.
Another question: when training with

mlp->Train(ntrain, "text,graph,update=10");

if I add a "+" after update, the result is quite different, but without the "+" the result is different every time I run the training. I read in the manual that there is some randomization without the "+" sign; what exactly is that? Should I put the "+" sign in or not?
In your code, you are using -1 and 1, and it gives results from -1.3 to 0.9.
Anyway, this just means the training is not as good as you would like it to be, but there is no problem at all.
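If you want to look at the output distribution yourself, here is a minimal sketch using TMLPAnalyzer; the branch name type and the cuts are assumptions about your tree, not taken from your script:

[code]
// "mlp" is the trained TMultiLayerPerceptron from above.
TMLPAnalyzer *mlpa = new TMLPAnalyzer(mlp);
mlpa->GatherInformations();  // collect per-event info from the training tree
mlpa->CheckNetwork();        // print a short sanity summary of the network
// Superimpose the output of neuron 0 for signal vs background;
// "type==1" / "type==0" assume a branch named "type" holding the target.
mlpa->DrawNetwork(0, "type==1", "type==0");
[/code]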
When you don't use "+" (the normal case), all weights are first randomized and the minimization starts from there. It is therefore normal that you get a slightly different result each time. If the training goes well, all trainings from any initial conditions should nevertheless converge to the same result.
You should only use the "+" if you want to start from an existing configuration (e.g. one resulting from a previous training).
In two words:
[ul]
[li]“+” == continue[/li]
[li]“” == restart[/li][/ul]
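Concretely, a minimal sketch of both cases (the network layout, tree name, and selection strings below are placeholders, not taken from your script):

[code]
Int_t ntrain = 100;  // number of training epochs
// First training: weights are randomized before the minimization starts.
TMultiLayerPerceptron *mlp =
   new TMultiLayerPerceptron("pt,eta:8:type", ntuple,
                             "Entry$%2==0", "Entry$%2==1");
mlp->Train(ntrain, "text,graph,update=10");

// Continue from the weights of the previous training instead of
// re-randomizing them: add "+" to the option string.
mlp->Train(ntrain, "text,graph,update=10,+");
[/code]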